Trigeminal nerve stimulation to treat ADHD

NeuroSigma has received FDA clearance for its forehead patch, which stimulates the trigeminal nerve during sleep to treat ADHD. The device won CE Mark approval in Europe in 2015.

The approval was based on a study of 62 subjects. Over four weeks, those who received the treatment showed a 31.4% decrease in ADHD-RS scores, while the control group showed an 18.4% decrease.

The FDA’s Carlos Pena said: “This new device offers a safe, non-drug option for treatment of ADHD in pediatric patients through the use of mild nerve stimulation, a first of its kind.”

Trigeminal nerve stimulation is also being studied in epilepsy and PTSD.


MRI-detected intracellular calcium signaling

Alan Jasanoff and MIT colleagues are using MRI to monitor calcium activity at a much deeper level in the brain than previously possible, to show how neurons communicate with each other.  The research team believes that this enables neural activity to be linked with specific behaviors.

To create their intracellular calcium sensors, the researchers used manganese as a contrast agent, bound to an organic compound that can penetrate cell membranes and contains a calcium-binding chelator.

Once inside the cell, if calcium levels are low, the calcium chelator binds weakly to the manganese atom, shielding the manganese from MRI detection. When calcium flows into the cell, the chelator binds to the calcium and releases the manganese, which makes the contrast agent appear brighter in an MRI.
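
A rough way to picture the sensor's switching behavior is a simple saturable-binding model. The sketch below is purely illustrative: the half-activation calcium concentration and Hill coefficient are hypothetical placeholders, not values from the Jasanoff lab's work.

```python
# Illustrative model only: fraction of manganese released from the chelator
# (and therefore visible to MRI) as a function of intracellular calcium.
# half_activation_ca_uM and hill_coefficient are hypothetical placeholders.
def fraction_mn_released(ca_uM: float,
                         half_activation_ca_uM: float = 1.0,
                         hill_coefficient: float = 1.0) -> float:
    bound = ca_uM ** hill_coefficient
    return bound / (bound + half_activation_ca_uM ** hill_coefficient)

for ca in (0.05, 0.1, 1.0, 10.0):  # resting vs. elevated calcium, in micromolar
    print(f"[Ca2+] = {ca:5.2f} uM -> relative MRI brightness ~ {fraction_mn_released(ca):.2f}")
```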

The technique could also be used to image calcium's role in activating immune cells, or for diagnostic brain or heart imaging.


Starving cancer stem cells as a new approach to glioblastoma

Luis Parada and Sloan Kettering colleagues are focusing on cancer stem cells as a new approach to glioblastoma.

Like normal stem cells, cancer stem cells have the ability to rebuild a tumor, even after most of it has been removed, leading to cancer relapse and metastasis.

According to Parada: “The pharmaceutical industry has traditionally used established cancer cell lines to screen for new drugs, but these cell lines don’t always reflect how cancer behaves in the body. The therapies that are currently in use were designed to target cells that are rapidly dividing. But what we’ve concluded in our studies is that glioblastoma stem cells divide relatively slowly within tumors, leaving them unaffected by these treatments.”

Even if most of the tumor is destroyed, the stem cells allow it to regrow.

The team discovered a drug, which they called Gboxin, that effectively treated glioblastoma in mice and killed human glioblastoma cells. They then found that Gboxin kills cancer stem cells by starving them of energy: it prevents cells from making ATP through oxidative phosphorylation in the mitochondria. When Gboxin accumulates within cancer stem cells, it essentially strangles the mitochondria and shuts energy production down.

The next steps are to determine whether Gboxin can cross the blood-brain barrier and to assess the drug's potential side effects.


“Monorail” could halt spread of brain tumors

Duke’s Ravi Bellamkonda has developed a “Tumor Monorail” which tricks aggressive brain tumors such as glioblastoma into migrating into an external container rather than throughout the brain. It has been designated a “Breakthrough Device” by the U.S. Food and Drug Administration (FDA).

The device mimics the physical properties of the brain’s white matter to entice aggressive tumors to migrate toward the exterior of the brain, where the migrating cells can be collected and removed. It does not destroy the tumor, but it does halt its lethal spread. No chemicals or enzymes are involved, and the device could be made from a wide variety of materials.

The work is based on rat studies from 2014.  The team hopes to receive FDA approval for human trials by the end of 2019.

Click to view the Georgia Tech video (Georgia Tech researchers collaborated with colleagues at Emory and Duke)


Neural signals translated into speech

Columbia University’s Nima Mesgarani is developing a computer-generated speech method for those who are unable to talk.

How brain signals translate to speech sounds varies from person to person, so computer models must be trained individually. The models are most successful when trained during open-skull surgeries, such as operations to remove brain tumors, or when electrodes are implanted to pinpoint the origin of seizures before surgery.

Data is fed into neural networks, which process patterns by passing information through layers of computational nodes. The networks learn by adjusting connections between nodes. In the study, networks were exposed to recordings of speech that a person produced or heard and data on simultaneous brain activity.

Mesgarani’s team used data from five epilepsy patients. The network analyzed recordings from the auditory cortex as participants heard recordings of stories and people naming digits from zero to nine. The computer then reconstructed spoken numbers from neural data alone.
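
For intuition only, here is a minimal sketch of the general approach of mapping recorded auditory-cortex features to audio spectrogram frames with a small neural network. The shapes, layer sizes, and synthetic data are assumptions for illustration, not the architecture or data pipeline used by Mesgarani's team.

```python
import torch
from torch import nn

# Illustrative only: learn a mapping from per-frame neural features
# (e.g. high-gamma power per electrode) to audio spectrogram frames.
n_electrodes, n_spectrogram_bins = 128, 32
neural_frames = torch.randn(5000, n_electrodes)             # stand-in for recorded features
spectrogram_frames = torch.randn(5000, n_spectrogram_bins)  # stand-in for the target audio

decoder = nn.Sequential(
    nn.Linear(n_electrodes, 256), nn.ReLU(),
    nn.Linear(256, n_spectrogram_bins),
)
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):  # train to predict audio frames from neural frames
    optimizer.zero_grad()
    loss = loss_fn(decoder(neural_frames), spectrogram_frames)
    loss.backward()
    optimizer.step()

# A vocoder would then turn predicted spectrogram frames back into audible speech.
```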

Click to view Science magazine’s sound file of speech reconstructed from brain activity.


Alzheimer’s detected by AI 6 years before diagnosis

In a recent study, Jae Ho Sohn and UCSF colleagues used AI to analyze PET scans of brain glucose metabolism to detect early-stage Alzheimer’s disease six years before diagnosis.

The algorithm was trained on PET scans from patients who were eventually diagnosed with Alzheimer’s disease, mild cognitive impairment (MCI), or no disorder. It identified 92% of patients who developed Alzheimer’s disease in the first test set and 98% in the second test set, an average of 75.8 months before diagnosis.
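
As an illustration of the general technique (not the model reported by Sohn and colleagues), the sketch below defines a small 3D convolutional network that maps a preprocessed PET volume to the three diagnostic classes; the input size, layers, and class order are assumptions.

```python
import torch
from torch import nn

# Illustrative only: a small 3D CNN classifying an FDG-PET volume as
# Alzheimer's disease / MCI / non-impaired. Shapes and layers are assumptions.
class PetClassifier(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16 * 16, n_classes)

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        x = self.features(volume)             # (batch, 16, 16, 16, 16)
        return self.classifier(x.flatten(1))  # one logit per diagnostic class

model = PetClassifier()
fake_scan = torch.randn(1, 1, 64, 64, 64)     # stand-in for a preprocessed PET volume
print(model(fake_scan).softmax(dim=-1))       # predicted class probabilities
```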


DARPA-developed closed loop therapies for neuropsychiatric illness

Led by Justin Sanchez, DARPA’s SUBNETS program develops responsive, adaptable, personalized closed-loop therapies for neuropsychiatric illness that combine recording and analysis of brain activity with near-real-time neural stimulation to correct or mitigate brain dysfunction.

The technology detects ongoing dynamic changes in brain activity associated with fluctuations in mood, and uses the data to deliver precisely timed therapeutic stimulation.
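
As a rough illustration of the closed-loop idea (not DARPA's or any SUBNETS performer's actual control algorithm), the sketch below reads a mood-related biomarker and triggers stimulation only when it drifts outside a target range. The function names, threshold, and timing are hypothetical.

```python
import random
import time

# Hypothetical closed-loop controller sketch. read_biomarker() and
# deliver_stimulation() stand in for device-specific recording and
# stimulation calls; the target range and pacing are illustrative only.
TARGET_RANGE = (-1.0, 1.0)   # acceptable range for a mood-related biomarker

def read_biomarker() -> float:
    """Stand-in for decoding a mood-related signal from neural recordings."""
    return random.gauss(0.0, 1.5)

def deliver_stimulation(amplitude_ma: float) -> None:
    """Stand-in for commanding the implanted stimulator."""
    print(f"stimulating at {amplitude_ma:.1f} mA")

def closed_loop_step() -> None:
    value = read_biomarker()
    low, high = TARGET_RANGE
    if value < low or value > high:   # biomarker outside the healthy range
        deliver_stimulation(amplitude_ma=1.0)
    # otherwise: keep recording, no stimulation

for _ in range(5):                    # a few iterations of the loop
    closed_loop_step()
    time.sleep(0.1)                   # in practice, near-real-time pacing
```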

The premise is that brain function and dysfunction — rather than being relegated to distinct anatomical regions of the brain — play out across distributed neural systems. By understanding what healthy brain activity looks like across these sub-networks, compared to unhealthy brain activity, and identifying predictive biomarkers that indicate changing state, DARPA is developing interventions that maintain a healthy brain state within a normal range of emotions. 

Three recent papers show that decoding technology can predict changes in mood from recorded neural signals; a brain sub-network appears to contribute to depression, especially in those with anxiety; and moderate to severe depression symptoms can be alleviated using open-loop neural stimulation delivered to the orbitofrontal cortex to modulate a sub-network that contributes to depression. 

This work is inspired by Sanchez’s commitment to finding better treatments for the millions of veterans who suffer from neuropsychiatric illness; existing treatments have been limited by the lack of a mechanistic understanding of how these illnesses manifest in the brain.

Together, these findings provide key discoveries and technologies toward the SUBNETS goal: a closed-loop system that detects ongoing, mood-related changes in brain activity and uses that information to deliver precisely timed therapeutic stimulation to improve brain function in individuals living with neuropsychiatric illnesses.


EEG identifies cognitive motor dissociation

Nicholas Schiff and Weill Cornell colleagues have developed an EEG-based method for measuring the delay in brain processing of continuous natural speech in patients with severe brain injury. The results correlated with fMRI-based evidence, which is commonly used to identify the capacity to perform cognitively demanding tasks. EEG can be used for long periods, and is cheaper and more accessible than fMRI.
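
To make the idea of a processing delay concrete, here is a toy sketch (not the Weill Cornell analysis) that estimates the lag between a speech envelope and an EEG channel by cross-correlation; the sampling rate, synthetic signals, and simulated 150 ms lag are assumptions.

```python
import numpy as np

# Illustrative only: recover a simulated 150 ms lag between a speech envelope
# and an EEG channel by cross-correlation.
fs = 100                                             # samples per second
n = 60 * fs                                          # one minute of data
speech_envelope = np.convolve(np.random.rand(n), np.ones(20) / 20, mode="same")
eeg = np.roll(speech_envelope, 15) + 0.2 * np.random.randn(n)  # 15 samples = 150 ms delay

env = speech_envelope - speech_envelope.mean()
sig = eeg - eeg.mean()
xcorr = np.correlate(sig, env, mode="full")          # correlation at every possible lag
lags = np.arange(-n + 1, n)
estimated_delay_ms = 1000 * lags[np.argmax(xcorr)] / fs
print(f"estimated processing delay: {estimated_delay_ms:.0f} ms")
```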

This type of monitoring can identify patients with severe brain injury who have preserved high-level cognition despite showing limited or no consciousness.

According to Schiff: “This approach may be a more effective and efficient method for initially identifying patients with severe brain injuries who are very aware but are otherwise unable to respond, a condition called cognitive motor dissociation.”


Thought-controlled tablets

The BrainGate/Brown/Stanford/MGH/VA consortium has published a study describing three tetraplegic patients who were able to control an off-the-shelf tablet with their thoughts. They surfed the web, checked the weather, and shopped online. A musician played part of Beethoven’s “Ode to Joy” on a digital piano interface.

The BrainGate BCI included a small implant that detected and recorded signals associated with intended movements produced in the motor cortex. Neural signals were routed to a Bluetooth interface that worked like a wireless mouse, which was paired to an unmodified tablet.
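
The decoding step can be pictured as a simple linear mapping from binned firing rates to cursor velocity. The sketch below is illustrative only: the weights, channel count, and simulated activity are assumptions, not the BrainGate decoder.

```python
import numpy as np

# Illustrative only: map binned motor-cortex firing rates to 2-D cursor
# velocity, the general idea behind driving a "wireless mouse" from neural
# signals. Weights and shapes are hypothetical.
n_channels = 96                                    # e.g. one microelectrode array
rng = np.random.default_rng(0)
decoder_weights = rng.normal(size=(2, n_channels)) * 0.01   # (vx, vy) weights per channel

def decode_velocity(firing_rates: np.ndarray) -> np.ndarray:
    """Map one time bin of firing rates (spikes/s per channel) to cursor velocity."""
    return decoder_weights @ firing_rates

cursor = np.zeros(2)
for _ in range(50):                                # one decoded update per time bin
    rates = rng.poisson(lam=10.0, size=n_channels) # stand-in for recorded activity
    cursor += 0.02 * decode_velocity(rates)        # integrate velocity into position
print("final cursor position:", cursor)
```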

Participants made up to 22 point-and-click selections per minute while using several apps. They typed up to 30 characters per minute with standard email and text interfaces.

The researchers believe that the technology can open new lines of communication between brain disorder patients and their caregivers.

Click to view BrainGate video


Thought-controlled television

Samsung and EPFL researchers, including Ricardo Chavarriaga, are developing Project Pontis, a BCI system meant to allow the disabled to control a TV with their thoughts.

The prototype uses a 64-sensor headset plus eye tracking to determine when a user has selected a particular movie. Machine learning builds a profile of the videos a user is interested in, enabling future content suggestions. The user ultimately makes a selection using eye tracking. The team is now working on a system that relies on brain signals alone, for users who cannot control their eyes or other muscles reliably.
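
A toy sketch of how such a selection step might combine the two signals (this is not Samsung's or EPFL's implementation): a hypothetical per-video interest score, standing in for a classifier over brain-signal features, is confirmed by gaze dwell time.

```python
from dataclasses import dataclass

# Hypothetical selection logic only; interest_score and the dwell threshold
# are illustrative stand-ins, not the Project Pontis implementation.
@dataclass
class Candidate:
    title: str
    interest_score: float   # e.g. output of a classifier over brain-signal features
    gaze_dwell_s: float     # how long the user's gaze rested on this thumbnail

DWELL_THRESHOLD_S = 2.0

def pick_video(candidates: list[Candidate]) -> str | None:
    """Return the highest-interest video that the user also confirmed by gaze dwell."""
    confirmed = [c for c in candidates if c.gaze_dwell_s >= DWELL_THRESHOLD_S]
    if not confirmed:
        return None
    return max(confirmed, key=lambda c: c.interest_score).title

print(pick_video([
    Candidate("Nature documentary", 0.82, 2.4),
    Candidate("Action movie", 0.91, 0.8),     # high interest, but no gaze confirmation
    Candidate("Cooking show", 0.40, 3.1),
]))  # -> "Nature documentary"
```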

Click to view Samsung video

