AI detects brain aneurysms, predicts rupture risk in surgery

Fujitsu, GE Healthcare, Macquarie University and Macquarie Medical Imaging are using AI to detect and monitor brain aneurysms on scans faster and more efficiently. Fujitsu will use AI to analyze brain images generated by GE’s Revolution C scanner, with an algorithm that detects abnormalities and aneurysms. The algorithm will be capable of highlighting the arterial ring at the base of the brain, where one or more aneurysms can form, and the tech will track aneurysms over time. The next phase will include a planning tool for surgical stent intervention, with fluid dynamic modeling used to predict the risk of aneurysm rupture.

Join ApplySci at the 12th Wearable Tech + Digital Health + Neurotech Boston conference on November 14, 2019 at Harvard Medical School featuring talks by Brad Ringeisen, DARPA – Joe Wang, UCSD – Carlos Pena, FDA – George Church, Harvard – Diane Chan, MIT – Giovanni Traverso, Harvard | Brigham & Women’s – Anupam Goel, UnitedHealthcare – Nathan Intrator, Tel Aviv University | Neurosteer – Arto Nurmikko, Brown – Constance Lehman, Harvard | MGH – Mikael Eliasson, Roche – David Rhew, Microsoft

Join ApplySci at the 13th Wearable Tech + Neurotech + Digital Health Silicon Valley conference on February 11-12, 2020 at Stanford University featuring talks by Zhenan Bao, Stanford – Rudy Tanzi, Harvard – David Rhew, Microsoft – Carla Pugh, Stanford – Nathan Intrator, Tel Aviv University | Neurosteer

New electrodes, brain signal analysis, for smaller, lower power, wireless BCI

Building on his prior brain-controlled prosthetic work, Stanford’s Krishna Shenoy has developed a simpler way to study brain electrical activity, which he believes will lead to tiny, low-power, wireless brain sensors that would bring thought-controlled prosthetics into much wider use.

The method involves decoding neural activity in aggregate, instead of “spike sorting,” which must be done for every neuron in every experiment and consumes thousands of research hours. With future brain sensors carrying 1,000 or more electrodes, up from about 100 today, sorting the spikes by hand could take a neuroscientist 100 hours or more per experiment.

In the study, the researchers used statistical theory to uncover patterns of brain activity when several neurons are recorded on a single electrode. An electrode array originally designed to pick up brain signals in mice was used to record from rhesus monkeys. Hundreds of neurons were recorded at the same time, and the aggregate signals accurately portrayed the monkeys’ brain activity without spike sorting.
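The aggregate idea can be sketched with synthetic data: instead of assigning each spike to an individual neuron, sum the threshold crossings on each electrode and fit a decoder directly on those multiunit counts. Everything below (electrode count, tuning model, linear least-squares decoder) is an illustrative assumption, not the study’s actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_electrodes, n_neurons_per = 500, 96, 3

# A hypothetical one-dimensional signal to decode, e.g. intended velocity.
true_signal = rng.normal(size=n_trials)

# Each neuron fires according to its own tuning to the signal, plus noise;
# rectification keeps simulated spike counts non-negative.
tuning = rng.normal(size=(n_electrodes, n_neurons_per))
spikes = np.maximum(
    0.0,
    tuning[None, :, :] * true_signal[:, None, None]
    + rng.normal(size=(n_trials, n_electrodes, n_neurons_per)),
)

# Aggregate decoding: collapse the neurons on each electrode into one
# unsorted multiunit count, then fit a simple linear decoder.
multiunit = spikes.sum(axis=2)                    # (n_trials, n_electrodes)
w, *_ = np.linalg.lstsq(multiunit, true_signal, rcond=None)
decoded = multiunit @ w
corr = np.corrcoef(decoded, true_signal)[0, 1]
print(f"decoded vs. true correlation: {corr:.2f}")
```

Even though no spike is ever attributed to a particular neuron, the decoder recovers the signal well, which is the point of skipping spike sorting.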

The team believes that this work will ultimately lead to neural implants with simpler electronics that track more neurons, more accurately, than is possible today.


Join ApplySci at the 12th Wearable Tech + Digital Health + Neurotech Boston conference on November 14, 2019 at Harvard Medical School and the 13th Wearable Tech + Neurotech + Digital Health Silicon Valley conference on February 11-12, 2020 at Stanford University

Blood-brain-barrier recreated inside organ chip with pluripotent stem cells

Clive Svendsen, Gad Vatine, and colleagues at Cedars-Sinai and Ben-Gurion University of the Negev have, for the first time, recreated the blood-brain barrier outside the body using induced pluripotent stem cells. In the study, the recreated barrier functioned as it would in the individual who provided the cells. This could lead to a new understanding of brain disease and help predict which drugs will work best for an individual.

The stem cells were used to create the neurons, blood-vessel linings, and support cells that comprise the blood-brain barrier. These were placed inside organ-chips, which recreate the body’s microenvironment, including the natural physiology and mechanical forces that cells experience.

The living cells formed a functioning unit of a blood-brain barrier that acts as it does in the body, including blocking the entry of certain drugs. Significantly, when the barrier was derived from cells of patients with Huntington’s disease or Allan-Herndon-Dudley syndrome, a rare congenital neurological disorder, it malfunctioned in the same way that it does in patients with these diseases.

This is the first time that induced pluripotent stem cells were used to generate a functioning blood-brain barrier, inside an organ-chip, that displayed a characteristic defect of the individual patient’s disease.


App optimizes meditation length to improve attention and memory

Adam Gazzaley and UCSF colleagues have developed a focus-driven digital meditation program that improved attention and memory in healthy adults in a recent study.

MediTrain tailors meditation session length to participant abilities and challenges users to increase it over time. Subjects showed significant benefits within six weeks: on their first day, they focused on their breath for an average of 20 seconds; after 30 days, they were able to focus for an average of six minutes.

According to Gazzaley: “We took an ancient experiential treatment of focused meditation, reformulated it and delivered it through a digital technology, and improved attention span in millennials, an age group that is intimately familiar with the digital world, but also faces multiple challenges to sustained attention.”

At the end of each segment, participants were asked whether they had paid continuous attention for the allotted time. The app assigned slightly longer meditation periods to those who said yes, and shorter ones to those who said no. The team believes that this active self-report contributed to the app’s usefulness.
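The adaptation described above amounts to a simple staircase rule. A minimal sketch, with step sizes and bounds that are illustrative assumptions rather than the app’s actual parameters:

```python
def next_session_length(current_s, stayed_focused,
                        step_up=1.1, step_down=0.9,
                        min_s=10.0, max_s=600.0):
    """Staircase rule in the spirit of MediTrain: lengthen the next
    breath-focus period slightly after a reported success, shorten it
    after a reported lapse, within fixed bounds."""
    factor = step_up if stayed_focused else step_down
    return min(max_s, max(min_s, current_s * factor))

# A user who starts at 20 seconds and mostly reports success ramps up.
length = 20.0
for focused in [True, True, False, True, True]:
    length = next_session_length(length, focused)
print(round(length, 1))
```

The asymmetric up/down steps keep the session length hovering near the longest duration the user can reliably sustain.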


Study: Noninvasive BCI improves function in paraplegia

Miguel Nicolelis has developed a non-invasive system for lower-limb neurorehabilitation.

Study subjects wore an EEG headset to record brain activity and detect movement intention. Eight electrodes were attached to each leg, stimulating the muscles involved in walking. After training, patients used their own brain activity to trigger electric impulses to their leg muscles, imposing a physiological gait. With a walker and a support harness, they learned to walk again and increased their sensorimotor skills. A wearable haptic display delivered tactile feedback to the forearms, providing continuous proprioceptive walking feedback.
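The closed loop described above, EEG intention detection triggering leg-muscle stimulation, can be caricatured in a few lines. The threshold, channel count, and electrode group names below are all assumptions for illustration, not the published protocol.

```python
import numpy as np

def detect_intention(eeg_window, threshold=0.6):
    """Stand-in decoder: the real system maps trained EEG features to a
    walk/no-walk intention; here we simply threshold the mean signal."""
    return float(np.clip(eeg_window.mean(), 0.0, 1.0)) > threshold

def step_cycle(intended):
    """On a detected intention, fire the leg electrode groups in a
    gait-like alternating sequence (group names are hypothetical);
    otherwise stimulate nothing."""
    if not intended:
        return []
    return ["left_flexors", "right_extensors", "right_flexors", "left_extensors"]

eeg = np.full(8, 0.8)  # 8 simulated channels showing strong intention
print(step_cycle(detect_intention(eeg)))
```

The key design point the study relies on is that stimulation is gated by the patient’s own decoded intention, rather than running on a fixed timer.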

The system was tested on two patients with chronic paraplegia. Both were able to move with less dependency on walking assistance, and one displayed motor improvement. Cardiovascular capacity and muscle volume also improved.


AI detects depression in children’s voices

University of Vermont researchers have developed an algorithm that detects anxiety and depression in children’s voices with 80 per cent accuracy, according to a recent study.

Standard diagnosis involves a 60-90 minute semi-structured interview with the child and their primary caregiver, conducted by a trained clinician. AI could make diagnosis faster and more reliable.

The researchers used an adapted version of the Trier Social Stress Task, which is intended to induce feelings of stress and anxiety in the subject. 71 children between the ages of three and eight were asked to improvise a three-minute story, and were told that they would be judged on how interesting it was. The researcher remained stern throughout the speech and gave only neutral or negative feedback, to create stress. After 90 seconds, and again with 30 seconds left, a buzzer would sound and the judge would announce how much time was left.

The children were also diagnosed using a structured clinical interview and parent questionnaire.

The algorithm analyzed statistical features of the audio recordings of each child’s story and related them to the diagnoses, identifying diagnosed children with 80 per cent accuracy. The middle phase of the recordings, between the two buzzers, was the most predictive of a diagnosis.

Eight audio features were identified. Three stood out as highly indicative of internalizing disorders: low-pitched voices, repeatable speech inflections and content, and a higher-pitched response to the surprise buzzer.
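The approach can be illustrated on simulated data: derive an acoustic feature per child and learn a cutoff that separates the diagnosed group. The single pitch feature, group sizes, and distributions below are synthetic assumptions standing in for the paper’s eight-feature machine-learning model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated cohort: 71 children, roughly 30% with an internalizing
# disorder, and one feature inspired by the finding that diagnosed
# children tended toward lower-pitched voices (arbitrary units).
n = 71
diagnosed = rng.random(n) < 0.3
pitch = np.where(diagnosed,
                 rng.normal(100.0, 10.0, n),   # lower-pitched group
                 rng.normal(130.0, 10.0, n))

# One-feature classifier: choose the pitch cutoff that best separates
# the groups on this sample.
thresholds = np.sort(pitch)
accs = [((pitch < t) == diagnosed).mean() for t in thresholds]
best = int(np.argmax(accs))
print(f"best threshold {thresholds[best]:.0f}, accuracy {accs[best]:.0%}")
```

Even one well-chosen acoustic feature separates the simulated groups reasonably well, which is why a handful of features can carry most of the predictive signal.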


Study: Blood + spinal fluid test detects Alzheimer’s 8 years before symptoms

Klaus Gerwert at Ruhr-Universität Bochum has developed a blood + CSF test that he claims can detect Alzheimer’s disease eight years before the onset of symptoms. The goal is early-stage therapy that achieves better results than current treatment protocols.

To reduce the false positive results of the initial study, the researchers first used a blood test to identify high-risk individuals. For participants flagged as likely to have Alzheimer’s in that first step, they then measured a dementia-specific biomarker, tau protein, in cerebrospinal fluid drawn by lumbar puncture — an invasive procedure the team is working to eliminate in the next phase of research. If both biomarkers were positive, the presence of Alzheimer’s disease was judged highly likely.

According to Gerwert: “Through the combination of both analyses, 87 of 100 Alzheimer’s patients were correctly identified in our study. And we reduced the number of false positive diagnoses in healthy subjects to 3 of 100.  Now, new clinical studies with test participants in very early stages of the disease can be launched. Recently, two major promising studies have failed, especially Crenezumab and Aducanumab – not least because it had probably already been too late by the time therapy was taken up. The new test opens up a new therapy window.”
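A quick back-of-envelope check shows what those figures imply for a screened population. The sensitivity and false positive rate are the numbers quoted above; the 10% prevalence of preclinical Alzheimer’s is an assumption for illustration only.

```python
# Figures quoted in the article; prevalence is an illustrative assumption.
sensitivity = 0.87          # 87 of 100 patients correctly identified
false_positive_rate = 0.03  # 3 of 100 healthy subjects flagged
prevalence = 0.10

per_1000 = 1000.0
sick = prevalence * per_1000               # 100 truly affected per 1,000
healthy = per_1000 - sick                  # 900 healthy per 1,000

true_pos = sensitivity * sick              # 87 correct positives
false_pos = false_positive_rate * healthy  # 27 false alarms
ppv = true_pos / (true_pos + false_pos)
print(f"positive predictive value at 10% prevalence: {ppv:.0%}")
```

This is why the two-step design matters: driving the false positive rate down to 3% keeps the false alarms (27 per 1,000 here) from swamping the true positives, which a less specific single test would do at low prevalence.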

Researcher Andreas Nabers added: “Once amyloid plaques have formed, it seems that the disease can no longer be treated. We are now conducting in-depth research to detect the second biomarker, namely tau protein, in the blood, in order to supply a solely blood-based test in future.”


Thought generated speech

Edward Chang and UCSF colleagues are developing technology that translates signals from the brain into synthetic speech. The research team believes the sounds will be nearly as sharp and natural as a real person’s voice, simulating the sounds made by the human lips, jaw, tongue and larynx.

The goal is a communication method for people who have lost speech to disease or paralysis.

According to Chang: “For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity.”

Berkeley’s Bob Knight has developed related technology that uses high-frequency band (HFB) activity to decode imagined speech, toward a BCI for treating disabling language deficits. He described this work at the 2018 ApplySci conference at Stanford.


Voice-detected PTSD

Charles Marmar, Adam Brown, and NYU colleagues are using AI-based voice analysis to detect PTSD with 89 per cent accuracy, according to a recent study.

PTSD is typically determined by bias-prone clinical interviews or self-reports.

The team recorded standard diagnostic interviews of 53 Iraq and Afghanistan veterans with military-service-related PTSD, as well as 78 veterans without the disorder. The recordings were then fed into voice software to yield 40,526 speech-based features captured in short spurts of talk, which were then sifted for patterns.

The program linked less clear speech and a lifeless, metallic tone with PTSD. While the study did not explore the disease mechanisms behind PTSD, the team believes that traumatic events change brain circuits that process emotion and muscle tone, affecting a person’s voice.


Trigeminal nerve stimulation to treat ADHD

NeuroSigma has received FDA clearance for its forehead patch, which stimulates the trigeminal nerve during sleep to treat ADHD. The device won CE Mark approval in Europe in 2015.

The clearance was based on a study of 62 subjects. Over four weeks, those who received the treatment showed a 31.4% decrease in ADHD-RS scores, versus an 18.4% decrease in the control group.

The FDA’s Carlos Pena said: “This new device offers a safe, non-drug option for treatment of ADHD in pediatric patients through the use of mild nerve stimulation, a first of its kind.”

Trigeminal nerve stimulation is also being studied in epilepsy and PTSD.

