EEG identifies cognitive motor dissociation

Nicholas Schiff and Weill Cornell colleagues have developed an EEG-based method for measuring the delay in brain processing of continuous natural speech in patients with severe brain injury. Study results correlated with evidence obtained using fMRI, which is commonly used to identify the capacity to perform cognitively demanding tasks. EEG can be used for long periods, and is cheaper and more accessible than fMRI.

This type of monitoring can identify patients with severe brain injury who have preserved high-level cognition despite showing limited or no consciousness.

According to Schiff: “This approach may be a more effective and efficient method for initially identifying patients with severe brain injuries who are very aware but are otherwise unable to respond, a condition called cognitive motor dissociation.”


Join ApplySci at the 10th Wearable Tech + Digital Health + Neurotech Silicon Valley conference on February 21-22 at Stanford University — Featuring: Zhenan Bao – Christof Koch – Vinod Khosla – Walter Greenleaf – Nathan Intrator – John Mattison – David Eagleman – Unity Stoakes – Shahin Farshchi – Emmanuel Mignot – Michael Snyder – Joe Wang – Josh Duyan – Aviad Hai – Anne Andrews – Tan Le – Anima Anandkumar – Hugo Mercier

Thought controlled tablets

The BrainGate/Brown/Stanford/MGH/VA consortium has published a study describing three tetraplegic patients who were able to control an off-the-shelf tablet with their thoughts. They surfed the web, checked the weather, and shopped online. A musician played part of Beethoven’s “Ode to Joy” on a digital piano interface.

The BrainGate BCI included a small implant that detected and recorded signals associated with intended movements produced in the motor cortex. Neural signals were routed to a Bluetooth interface that worked like a wireless mouse, which was paired to an unmodified tablet.
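The "wireless mouse" framing above can be sketched in code: decoded motor-cortex intent is translated into ordinary mouse events, which is why the tablet needed no special software. The function names, the decoding step, and the feature format are illustrative assumptions, not BrainGate's actual pipeline.

```python
# Hypothetical sketch: neural intent -> standard Bluetooth HID mouse report.

def decode_intent(neural_sample):
    """Map a (simulated) neural feature vector to cursor velocity and click."""
    dx, dy, click_evidence = neural_sample
    return int(dx * 10), int(dy * 10), click_evidence > 0.8

def to_hid_report(dx, dy, click):
    """Pack decoded intent as a standard 3-byte HID mouse report:
    button bits, then x and y displacement as two's-complement bytes."""
    buttons = 0x01 if click else 0x00
    return bytes([buttons, dx & 0xFF, dy & 0xFF])

# Example: a rightward intent with a click
report = to_hid_report(*decode_intent((0.5, 0.0, 0.9)))
```

Because the output is an unremarkable mouse report, any device that accepts a Bluetooth mouse can be driven this way, with no custom driver on the tablet side.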

Participants made up to 22 point-and-click selections per minute while using several apps. They typed up to 30 characters per minute with standard email and text interfaces.

The researchers believe that the technology can open new lines of communication between brain disorder patients and their caregivers.

Click to view BrainGate video



Thought controlled television

Samsung and EPFL researchers, including Ricardo Chavarriaga, are developing Project Pontis, a BCI system intended to let people with disabilities control a TV with their thoughts.

The prototype uses a 64-sensor headset plus eye tracking to determine when a user has selected a particular movie. Machine learning builds a profile of videos the user is interested in, enabling future content suggestions. The user ultimately makes a selection using eye tracking. The team is now working on a system that relies on brain signals alone, for users who cannot reliably control their eyes or other muscles.
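The profile-building idea can be illustrated with a minimal sketch: represent each video by a feature vector, average the vectors of videos the user selected, and rank new content by similarity to that profile. This is entirely illustrative; Project Pontis's actual model has not been published.

```python
# Toy content-profile recommender: average liked-video features, rank by cosine.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def build_profile(liked):
    """Average the feature vectors of videos the user selected."""
    n = len(liked)
    return [sum(v[i] for v in liked) / n for i in range(len(liked[0]))]

def suggest(profile, candidates):
    """Return (title, features) pairs sorted by similarity to the profile."""
    return sorted(candidates, key=lambda c: cosine(profile, c[1]), reverse=True)

profile = build_profile([[1.0, 0.0, 0.2], [0.8, 0.1, 0.0]])
ranked = suggest(profile, [("drama", [0.0, 1.0, 0.0]), ("action", [0.9, 0.0, 0.1])])
```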

Click to view Samsung video



Focused ultrasound thalamotomy in Parkinson’s Disease

UVA’s Scott Sperling and Jeff Elias, who previously used focused ultrasound to treat essential tremor, have just published the results of a small study showing the efficacy of the technology in Parkinson’s Disease.

The sound waves were shown to interrupt brain circuits responsible for the uncontrollable shaking associated with the disease. The researchers claim that their study also offers “comprehensive evidence of safety” in its effect on mood, behavior and cognitive ability, which has not previously been studied.

According to Sperling, “In this study, we extended these initial results and showed that focused ultrasound thalamotomy is not only safe from a cognitive and mood perspective, but that patients who underwent surgery realized significant and sustained benefits in terms of functional disability and overall quality of life.”

Twenty-seven adults with severe Parkinson’s tremor that had not responded to previous treatment were divided into two groups. Twenty received the procedure, and a control group of seven (who were later offered the procedure) did not. Participants reported improved quality of life, including their ability to perform simple daily tasks, emotional wellbeing, and a lessened sense of stigma due to their tremor, at both three and twelve months.

The team found that mood and cognition, and the ability to go about daily life, ultimately had more effect on participants’ assessment of their overall quality of life than did tremor severity or the amount of tremor improvement.

Some participants showed cognitive decline after the procedure, including a reduced ability to name colors and to think of and speak words. The cause is unclear and must be investigated; the researchers suggested it could reflect the natural progression of Parkinson’s.



Wearable sensor monitors shunt function in hydrocephalus

Northwestern’s John Rogers has created another minimal, flexible, wireless, adhesive wearable — this time to help hydrocephalus patients manage their condition.

The Band-Aid-like sensor determines whether a shunt is working properly.

Shunts often fail. When this happens, a patient can experience headaches, nausea, and low energy, and must go to a hospital immediately. However, patients can have similar symptoms even when the shunt is working properly. The wearable determines within five minutes whether the shunt is functioning; if it is, the patient can avoid a hospital visit, CT or MRI scans, and potential surgery to assess the shunt.

Click to view Northwestern University video



Minimally invasive sensor detects electrical activity, optical signals in brain for MRI

MIT’s Aviad Hai has developed a minimally invasive sensor to detect electrical activity or optical signals in the brain for MRI. No power source is needed: the sensor is powered by the radio signals that an external MRI scanner emits. It is implanted, but requires no wired connection to the brain. The researchers believe it could also be adapted to measure glucose or other chemicals.

The team previously developed MRI sensors to detect calcium, serotonin, and dopamine. The new sensor is meant to replace current electrical activity monitoring, which is extremely invasive and can cause tissue damage.

Hai and colleagues shrank a radio antenna down to a few millimeters, so that it could be implanted directly into the brain to receive radio waves generated by water in the tissue.

The sensor is first tuned to the same frequency as the radio waves emitted by the hydrogen atoms. When an electromagnetic signal is detected, the sensor’s tuning changes so that it no longer matches the hydrogen frequency, and a weaker image appears when the sensor is scanned by an external MRI machine.
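The detuning mechanism described above can be sketched with a back-of-the-envelope model: treat the sensor as a resonator whose pickup falls off (modeled here as a Lorentzian) as its resonant frequency drifts away from the hydrogen (Larmor) frequency. The specific numbers and the Lorentzian form are illustrative assumptions, not measured device parameters.

```python
# Toy model of resonant pickup vs. detuning for an implanted MRI sensor.

LARMOR_MHZ = 127.7   # ~3 T scanner Larmor frequency; assumed for illustration
BANDWIDTH_MHZ = 0.5  # assumed resonator half-width

def relative_pickup(sensor_freq_mhz):
    """Lorentzian response: 1.0 on resonance, falling toward 0 when detuned."""
    detune = (sensor_freq_mhz - LARMOR_MHZ) / BANDWIDTH_MHZ
    return 1.0 / (1.0 + detune ** 2)

on_resonance = relative_pickup(127.7)  # tuned: full pickup, bright voxel
detuned = relative_pickup(128.7)       # after an electrical event: weaker image
```

The darkened voxel in the MRI image is thus itself the readout: a local electromagnetic event is detected as a drop in image intensity at the sensor's location.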

In a study, the sensors were able to pick up electrical signals similar to those produced by action potentials or local field potentials.

Hai plans to further miniaturize the sensor so that multiple sensors can be injected, enabling imaging of light or electrical fields over a larger brain area.

Dr. Hai will discuss this work at ApplySci’s Wearable Tech + Digital Health + Neurotech Silicon Valley conference on February 21-22 at Stanford University



Brain-to-brain communication interface

Rajesh Rao and University of Washington colleagues have developed BrainNet, a non-invasive direct brain-to-brain interface for multiple people. The goal is a social network of human brains for problem solving. The interface combines EEG to record brain signals and TMS to deliver information to the brain, enabling three people to collaborate via direct brain-to-brain communication.

In a recent study, two of the three subjects were “Senders.” Their brain signals were decoded with real-time EEG analysis to extract decisions about whether to rotate a block in a Tetris-like game before it was dropped to fill a line. The Senders’ decisions were sent via the Internet to the brain of a third subject, the “Receiver,” via magnetic stimulation of the occipital cortex. The Receiver integrated the information and decided, using an EEG interface, whether to rotate the block or keep it in the same position. A second round of the game gave the Senders a chance to evaluate the Receiver’s action and provide corrective feedback.
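One round of the protocol above can be written schematically: each stage is reduced to a boolean, and the Receiver's integration rule (follow agreement, otherwise weigh sender reliability) is a simplifying assumption of this sketch, not the published protocol.

```python
# Schematic of one BrainNet round: Sender EEG decode -> TMS delivery -> Receiver decision.

def sender_decision(eeg_decoded_rotate):
    """Each Sender's EEG is decoded to a single bit: rotate the block or not."""
    return bool(eeg_decoded_rotate)

def deliver_via_tms(decision):
    """TMS over occipital cortex: a phosphene encodes 'rotate' (True) vs. none."""
    return decision

def receiver_integrates(signal_a, signal_b, trust_a=0.5):
    """Receiver combines the two delivered signals into one action.
    On disagreement, follow the sender judged more reliable (assumed rule)."""
    if signal_a == signal_b:
        return signal_a
    return signal_a if trust_a >= 0.5 else signal_b

# Both Senders decide to rotate; the Receiver acts on the agreement.
action = receiver_integrates(deliver_via_tms(sender_decision(True)),
                             deliver_via_tms(sender_decision(True)))
```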



Implanted sensors track dopamine for a year

Helen Schwerdt, Ann Graybiel, Michael Cima, Bob Langer, and MIT colleagues have developed an implantable sensor that can measure dopamine in the brains of rodents for more than a year. They believe this can inform the treatment and understanding of Parkinson’s and other brain diseases.

According to Graybiel, “Despite all that is known about dopamine as a crucial signaling molecule in the brain, implicated in neurologic and neuropsychiatric conditions as well as our ability to learn, it has been impossible to monitor changes in the online release of dopamine over time periods long enough to relate these to clinical conditions.”

The sensors are nearly invisible to the immune system, avoiding the scar tissue that would impede accuracy. After implantation, populations of microglia and astrocytes were the same as those in brain tissue without probes.

In a recent animal study, three to five sensors per animal were implanted 5 millimeters deep in the striatum. Readings were taken every few weeks after dopamine release was stimulated in the brainstem, from which it travels to the striatum. Measurements remained consistent for up to 393 days.

If developed for use in humans, these sensors could be useful for monitoring Parkinson’s patients who receive deep brain stimulation.


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 24, 2018 at the MIT Media Lab.  Speakers include: Rudy Tanzi – Mary Lou Jepsen – George Church – Roz Picard – Nathan Intrator – Keith Johnson – John Mattison – Roozbeh Ghaffari – Poppy Crum – Phillip Alvelda – Marom Bikson – Ed Simcox – Sean Lane

VR + motion capture to study movement and sensory processing in autism, AD, TBI

MoBI, developed by John Foxe at the University of Rochester, combines VR, EEG, and motion capture sensors to study movement difficulties associated with neurological disorders.

According to Foxe, “The MoBI system allows us to get people walking, using their senses, and solving the types of tasks you face every day, all the while measuring brain activity and tracking how the processes associated with cognition and movement interact.”

Motion sensor and EEG data, collected while a subject walks in a virtual environment, are synchronized, allowing researchers to track which areas of the brain are activated during walking or task performance. Brain response while moving, performing tasks, or doing both at once is then analyzed.
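The synchronization step above amounts to timestamp alignment: each stream carries its own clock, and every motion frame is paired with the nearest EEG sample so brain activity and movement can be analyzed together. The data format and field names below are assumptions for illustration, not MoBI's actual representation.

```python
# Nearest-timestamp alignment of two sensor streams.
import bisect

def align_streams(eeg, motion):
    """eeg, motion: lists of (timestamp, value), each sorted by timestamp.
    Returns (motion_ts, motion_val, nearest_eeg_val) triples."""
    eeg_ts = [t for t, _ in eeg]
    aligned = []
    for t, m in motion:
        i = bisect.bisect_left(eeg_ts, t)
        # pick whichever neighboring EEG timestamp is closer
        if i == 0:
            j = 0
        elif i == len(eeg_ts):
            j = len(eeg_ts) - 1
        else:
            j = i if eeg_ts[i] - t < t - eeg_ts[i - 1] else i - 1
        aligned.append((t, m, eeg[j][1]))
    return aligned

pairs = align_streams([(0.0, "e0"), (0.1, "e1"), (0.2, "e2")],
                      [(0.05, "m0"), (0.19, "m1")])
```

In practice the two streams also need a shared clock (e.g. a common trigger pulse at recording start) before nearest-sample matching is meaningful.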

This technique could potentially guide treatment in autism, dementia, and TBI, conditions characterized by difficulty processing sensory information from multiple sources and by abnormal gait.

Click to view University of Rochester video



PREFERRED REGISTRATION AVAILABLE THROUGH TODAY, SEPTEMBER 7TH

Brain imaging to detect suicidal thoughts

Last year, Carnegie Mellon professor Marcel Just and Pitt professor David Brent used brain imaging to identify suicidal thoughts.

Supported by the NIMH, they are now working to establish reliable neurocognitive markers of suicidal ideation and attempts. They will examine differences in brain activation patterns between suicidal and non-suicidal young adults as they think about suicide-related words, along with positive and negative concepts, and use machine learning to identify neural signatures of suicidal ideation and behavior.

According to Just,  “We were previously able to obtain consistent neural signatures to determine whether someone was thinking about objects like a banana or a hammer by examining their fMRI brain activation patterns. But now we are able to tell whether someone is thinking about ‘trouble’ or ‘death’ in an unusual way. The alterations in the signatures of these concepts are the ‘neurocognitive thought markers’ that our machine learning program looks for.”
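The machine-learning step Just describes can be illustrated with a toy version: summarize each group's activation patterns for a concept, then label a new pattern by its nearest group centroid. The real study's classifier and features are far richer; the numbers and the nearest-centroid rule here are made-up illustrations of the idea.

```python
# Toy nearest-centroid classifier over (simulated) activation patterns.

def centroid(patterns):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def classify(pattern, centroids):
    """Return the label of the closest centroid (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(pattern, centroids[label]))

centroids = {
    "ideation": centroid([[0.9, 0.1], [0.8, 0.2]]),
    "control":  centroid([[0.1, 0.9], [0.2, 0.8]]),
}
label = classify([0.85, 0.15], centroids)
```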


