Study: AI accurately predicts childhood disease from health records

Xia Huimin and colleagues at Guangzhou Women and Children’s Medical Center used AI to analyze 1.36 million pediatric health records, and the system diagnosed disease as accurately as doctors, according to a recent study.

The system detected common childhood diseases by processing symptoms, medical history, and other clinical data from this massive sample. The goal is to support the diagnosis of complex or rare diseases by providing additional diagnostic predictions, and to assist in patient triage.


Join ApplySci at the 11th Wearable Tech + Digital Health + Neurotech Boston conference on November 14th at Harvard Medical School

Glutamate sensor could predict migraines, monitor CNS drug effectiveness

Riyi Shi and Purdue colleagues have developed a tiny, spinal cord-implanted, 3D printed sensor that quickly and accurately tracks glutamate in spinal trauma and brain disease. The goal is to monitor drug effectiveness and predict migraine headaches in humans, although it has only been tested on animals.

Glutamate spikes are often missed by existing monitoring methods. Damaged nerve structures allow glutamate to leak into spaces outside of cells, over-exciting and damaging them. Brain diseases, including Alzheimer’s and Parkinson’s, also show elevated levels of glutamate.

Devices to date have not been sensitive, fast, or affordable enough. Measuring levels in vivo would help researchers study how spinal cord injuries happen and how brain diseases develop.

In a recent animal study, the device captured spikes immediately; with current devices, researchers must wait 30 minutes for data after damaging the spinal cord.
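A sensor fast enough to catch glutamate spikes in real time makes simple threshold-based detection practical. The sketch below illustrates one common approach (a rolling-baseline detector); the window size, margin, and units are illustrative assumptions, not values from the Purdue study.

```python
# Hypothetical real-time spike detector for a glutamate concentration stream.
# Flags a spike when a reading exceeds the rolling baseline by a fixed margin.
from collections import deque

def detect_spikes(readings, window=10, margin=2.0):
    """Return indices where a reading exceeds the rolling mean by `margin`."""
    baseline = deque(maxlen=window)
    spikes = []
    for i, value in enumerate(readings):
        if len(baseline) == window and value > sum(baseline) / window + margin:
            spikes.append(i)
        baseline.append(value)
    return spikes

# Example: a flat baseline around 1.0 with a sudden excursion at index 12.
signal = [1.0] * 12 + [5.0] + [1.0] * 5
print(detect_spikes(signal))  # -> [12]
```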

Click to view Purdue video



CNBC feature on Sana Health | Feb 22, 2019 – ApplySci @ Stanford

Richard Hanbury discussed Sana Health’s pain management technology at Wearable Tech + Digital Health + Neurotech Silicon Valley, on February 22, 2019 at Stanford. ApplySci was delighted that CNBC chose to film this segment at the conference.

Click to view CNBC video



3D printed bioreactor-grown bone for craniofacial surgery

Antonios Mikos, Alexander Tatara, and Rice colleagues are using a 3D printed mold, attached to a rib, to grow live bones to repair craniofacial injuries. Stem cells and blood vessels from the rib infiltrate scaffold material and replace it with natural, custom-fit bone.

Current reconstruction methods use a patient’s own bone graft tissues, harvested from the lower leg, hip and shoulder.

According to Mikos: “We chose to use ribs because they’re easily accessed and a rich source of stem cells and vessels, which infiltrate the scaffold and grow into new bone tissue that matches the patient.”  New bone can potentially be grown on multiple ribs, simultaneously.

The technology has only been tested on animals, but shows promise, with custom geometry and a reduced risk of rejection.

First 5G-enabled remote brain surgery

Dr. Ling Zhipei at PLAGH has used a 5G mobile network to remotely implant DBS electrodes in a Parkinson’s patient’s brain. China Mobile and Huawei technology enabled him to control surgical robots from a distance of 1,800 miles (3,000 kilometers). 5G technology could transform medical care for those living in poor and remote areas. According to Ling: “The 5G network has solved problems like video lag and remote control delay experienced under the 4G network, ensuring a nearly real-time operation. And you barely feel that the patient is 3,000 kilometers away.”

Wireless, skin-like sensors monitor baby heart rate, respiration, temperature, blood pressure

John Rogers and Northwestern colleagues have developed soft, flexible, battery-free, wireless, skin-like sensors to replace the wire-based sensors that currently monitor babies in hospitals’ neonatal intensive care units. The goal is to enable more accurate monitoring and unobstructed physical bonding.

The dual wireless sensors monitor heart rate, respiration rate and body temperature — from opposite ends of the body. One sensor lies across the chest or back, and the other wraps around a foot. This allows physicians to gather an infant’s core temperature as well as body temperature from a peripheral region.

Physicians also can measure blood pressure by continuously tracking when the pulse leaves the heart and arrives at the foot. Currently, there is not a good way to collect a reliable blood pressure measurement. A blood pressure cuff can bruise or damage an infant’s fragile skin. The other option is to insert a catheter into an artery, which is tricky because of the slight diameter of a premature newborn’s blood vessels. It also introduces a risk of infection, clotting and death.
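Tracking when the pulse leaves the heart and arrives at the foot yields a pulse transit time (PTT), which is inversely related to blood pressure. A minimal sketch of this idea follows; the inverse-PTT model and the calibration constants are textbook-style assumptions for illustration, not parameters of the Northwestern device.

```python
# Illustrative pulse-transit-time (PTT) blood pressure estimate.
# The constants a and b stand in for a per-patient calibration.
def pulse_transit_time(chest_peak_s, foot_peak_s):
    """Time for the pulse wave to travel from the chest sensor to the foot sensor."""
    return foot_peak_s - chest_peak_s

def estimate_systolic_bp(ptt_s, a=12.0, b=40.0):
    """Simple inverse-PTT model: BP = a / PTT + b. Shorter transit -> higher BP."""
    return a / ptt_s + b

ptt = pulse_transit_time(chest_peak_s=0.10, foot_peak_s=0.35)  # 0.25 s
print(round(estimate_systolic_bp(ptt), 1))  # -> 88.0
```

The appeal of this approach for neonates is exactly what the article describes: it needs only two passive skin sensors, no cuff and no arterial catheter.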

The device also could help fill in information gaps that exist during skin-to-skin contact. The sensors also can be worn during X-rays, MRIs and CT scans.

Click to view Northwestern video

“Monorail” could halt spread of brain tumors

Duke’s Ravi Bellamkonda has developed a “Tumor Monorail,” which tricks aggressive brain tumors such as glioblastoma into migrating into an external container rather than throughout the brain. It has been designated a “Breakthrough Device” by the U.S. Food and Drug Administration (FDA).

The device mimics the physical properties of the brain’s white matter to entice aggressive tumors to migrate toward the exterior of the brain, where the migrating cells can be collected and removed. It does not destroy the tumor, but it does halt its lethal spread. There are no chemicals or enzymes involved, and the device could be made from a wide variety of materials.

The work is based on rat studies from 2014.  The team hopes to receive FDA approval for human trials by the end of 2019.

Click to view the Georgia Tech video (its researchers collaborated with colleagues at Emory and Duke).


Join ApplySci at the 10th Wearable Tech + Digital Health + Neurotech Silicon Valley conference on February 21-22 at Stanford University — Featuring: Zhenan Bao – Christof Koch – Vinod Khosla – Walter Greenleaf – Nathan Intrator – John Mattison – David Eagleman – Unity Stoakes – Shahin Farshchi – Emmanuel Mignot – Michael Snyder – Joe Wang – Josh Duyan – Aviad Hai – Anne Andrews – Tan Le – Anima Anandkumar – Pierrick Arnal – Shea Balish – Kareem Ayyad – Mehran Talebinejad – Liam Kaufman – Scott Barclay – Tracy Laabs – George Kouvas

Artificial skin sensor could help burn victims “feel”

UConn chemists Islam Mosa and Professor James Rusling have developed a sensor that could detect pressure, temperature, and vibration when placed on skin.  

The sensor and silicone tube are wrapped in copper wire and filled with an iron oxide nanoparticle fluid, which creates an electric current. The copper wire detects the current. When the tube experiences pressure, the nanoparticles move and the electric signal changes.

Sound waves also create waves in the fluid, and the signal changes differently than when the tube is bumped.

Magnetic fields were found to alter the signal differently than pressure or sound waves do. The team could distinguish between the signals caused by walking, running, jumping, and swimming.
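Because each stimulus changes the signal in a characteristically different way, simple time-series features can separate them. The toy classifier below illustrates the idea using two hand-picked features (signal amplitude and dominant frequency); the features, thresholds, and sampling rate are invented for illustration and do not describe the UConn team’s actual analysis.

```python
# Hypothetical sketch: telling stimulus types apart from a raw sensor trace
# using amplitude (mean absolute level) and a crude dominant-frequency estimate.
import math

def features(samples, rate_hz):
    amplitude = sum(abs(s) for s in samples) / len(samples)
    # crude frequency estimate via zero crossings
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    freq = crossings * rate_hz / (2 * len(samples))
    return amplitude, freq

def classify(samples, rate_hz=1000):
    amplitude, freq = features(samples, rate_hz)
    if freq > 100:          # fast oscillation -> acoustic stimulus
        return "sound"
    if amplitude > 0.5:     # large, slow deflection -> mechanical pressure
        return "pressure"
    return "baseline"

# A slow, strong push vs. a fast, low-amplitude 200 Hz vibration:
push = [1.0] * 100
tone = [0.1 * math.sin(2 * math.pi * 200 * t / 1000) for t in range(100)]
print(classify(push), classify(tone))  # -> pressure sound
```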

The researchers’ goals are to help burn victims “feel” again, and to provide early warning for workers exposed to high magnetic fields. The waterproof sensor could also serve as a pool-depth monitoring wearable for children.



Wireless, biodegradable, flexible arterial-pulse sensor monitors blood flow

Zhenan Bao and colleagues have developed a wireless, battery-free, biodegradable sensor to provide continuous monitoring of blood flow through an artery. This could provide critical information to doctors after vascular, transplant, reconstructive, and cardiac surgery, without the need for an office visit.

Monitoring the success of surgery on blood vessels is difficult, as by the time a problem is detected, additional surgery is usually required.  The goal of the sensor is much earlier intervention.

The sensor wraps around the healing vessel, where blood pulsing past pushes on its inner surface. As the shape of that surface changes, it alters the sensor’s capacity to store electric charge, which doctors can detect remotely from a device located near the skin but outside the body. That device solicits a reading by pinging the sensor’s antenna, similar to an ID card scanner. In the future, this device could come in the form of a stick-on patch or be integrated into other technology, such as a wearable device or smartphone.
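One common way such a passive, battery-free readout works is as an LC resonator: the pulsing vessel changes the sensor’s capacitance, which shifts the resonant frequency the external reader detects when it pings the antenna. The sketch below computes that shift from the standard LC formula; the component values are illustrative assumptions, not the Stanford device’s actual specifications.

```python
# Illustrative LC-tank readout: a small capacitance change produces a
# measurable shift in resonant frequency that a reader can detect wirelessly.
import math

def resonant_frequency_hz(inductance_h, capacitance_f):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

L = 1e-6            # 1 uH antenna coil (assumed)
c_rest = 10e-12     # 10 pF with the vessel at rest (assumed)
c_pulse = 11e-12    # capacitance rises slightly as the vessel expands

f_rest = resonant_frequency_hz(L, c_rest)
f_pulse = resonant_frequency_hz(L, c_pulse)
print(f"{f_rest / 1e6:.1f} MHz -> {f_pulse / 1e6:.1f} MHz")  # -> 50.3 MHz -> 48.0 MHz
```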



Neural signals translated into speech

Columbia University’s Nima Mesgarani is developing a computer-generated speech method for those who are unable to talk.

How brain signals translate to speech sounds varies from person to person, so computer models must be trained individually. The models are most successful with data collected during open-skull surgeries, such as tumor removals or procedures in which electrodes are implanted to pinpoint the origin of seizures.

Data is fed into neural networks, which process patterns by passing information through layers of computational nodes. The networks learn by adjusting connections between nodes. In the study, networks were exposed to recordings of speech that a person produced or heard and data on simultaneous brain activity.

Mesgarani’s team used data from five epilepsy patients. The network analyzed recordings from the auditory cortex as participants heard recordings of stories and people naming digits from zero to nine. The computer then reconstructed spoken numbers from neural data alone.
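The core idea, stripped to its simplest form, is supervised regression: pair recorded neural activity with the audio features heard at the same moment, fit a model on those pairs, then reconstruct audio features from neural data alone. The toy below uses a single-weight linear model fit by gradient descent on made-up data; the actual study used deep neural networks and a vocoder on auditory-cortex recordings, so treat this purely as a sketch of the training/decoding loop.

```python
# Toy neural-to-audio decoder: learn audio ~= w * neural + b from paired
# training data, then predict an audio feature from neural activity alone.
def fit_linear_decoder(neural, audio, lr=0.1, steps=2000):
    """Fit a one-feature linear model by plain gradient descent."""
    w, b = 0.0, 0.0
    n = len(neural)
    for _ in range(steps):
        grad_w = sum((w * x + b - y) * x for x, y in zip(neural, audio)) / n
        grad_b = sum((w * x + b - y) for x, y in zip(neural, audio)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Made-up training pairs in which the audio envelope is 2x the firing rate:
neural_train = [0.5, 1.0, 1.5, 2.0]
audio_train = [1.0, 2.0, 3.0, 4.0]
w, b = fit_linear_decoder(neural_train, audio_train)

# Decoding: reconstruct the audio feature from a new neural measurement.
print(round(w * 1.25 + b, 2))  # -> 2.5
```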

Click to view Science magazine’s sound file of the computer reconstruction of brain activity.

