Brain-to-brain communication interface

Rajesh Rao and University of Washington colleagues have developed BrainNet, a non-invasive, direct brain-to-brain interface for multiple people. The goal is a social network of human brains for collaborative problem solving. The interface combines EEG to record brain signals and transcranial magnetic stimulation (TMS) to deliver information to the brain, enabling three people to collaborate via direct brain-to-brain communication.

In a recent study, two of the three subjects were “Senders.” Their brain signals were decoded with real-time EEG analysis to extract decisions about whether to rotate a block in a Tetris-like game before it was dropped to fill a line. The Senders’ decisions were sent via the Internet to the brain of a third subject, the “Receiver,” delivered via magnetic stimulation of the occipital cortex. The Receiver integrated the information and decided, using an EEG interface, whether to turn the block or keep it in the same position. A second round of the game gave the Senders a chance to evaluate the Receiver’s action and provide corrective feedback.
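The flow described above can be sketched in miniature. Everything below is an illustrative assumption (the feature values, threshold rule, and majority vote), not the study's actual EEG decoding or the human Receiver's judgment:

```python
# Minimal sketch of the BrainNet flow: two Senders' decisions are decoded
# from EEG features and combined for the Receiver. All values are invented.

def decode_sender_decision(power_rotate, power_keep):
    """Decode a Sender's binary choice by comparing two EEG feature powers
    (a stand-in for real-time EEG analysis)."""
    return "rotate" if power_rotate > power_keep else "keep"

def combine_for_receiver(decisions):
    """Stand-in for the Receiver's integration step: a majority vote over
    the Senders' transmitted decisions (ties default to 'keep')."""
    rotate_votes = sum(1 for d in decisions if d == "rotate")
    return "rotate" if rotate_votes > len(decisions) / 2 else "keep"

sender_1 = decode_sender_decision(12.5, 8.1)   # this Sender signals "rotate"
sender_2 = decode_sender_decision(7.3, 11.9)   # this Sender signals "keep"
action = combine_for_receiver([sender_1, sender_2])
```

In the actual study the conflicting or agreeing signals were delivered to the Receiver by TMS; the vote here only stands in for that integration step.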


Join ApplySci at the 10th Wearable Tech + Digital Health + Neurotech Silicon Valley conference on February 21-22 at Stanford University – Featuring: Zhenan Bao – Christof Koch – Vinod Khosla – Nathan Intrator – John Mattison – David Eagleman – Unity Stoakes – Shahin Farshchi

Implanted sensors track dopamine for a year

Helen Schwerdt, Ann Graybiel, Michael Cima, Bob Langer, and MIT colleagues have developed an implantable sensor that can measure dopamine in the rodent brain for more than a year. They believe this can inform the treatment and understanding of Parkinson’s and other brain diseases.

According to Graybiel, “Despite all that is known about dopamine as a crucial signaling molecule in the brain, implicated in neurologic and neuropsychiatric conditions as well as our ability to learn, it has been impossible to monitor changes in the online release of dopamine over time periods long enough to relate these to clinical conditions.”

The sensors are nearly invisible to the immune system, avoiding the scar tissue that would impede accuracy. After implantation, populations of microglia and astrocytes were the same as those in brain tissue without probes.

In a recent animal study, three to five sensors per animal were implanted 5 millimeters deep in the striatum. Readings were taken every few weeks, after dopamine release was stimulated in the brainstem and traveled to the striatum. Measurements remained consistent for up to 393 days.

If developed for use in humans, these sensors could be useful for monitoring Parkinson’s patients who receive deep brain stimulation.


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 24, 2018 at the MIT Media Lab. Speakers include: Rudy Tanzi – Mary Lou Jepsen – George Church – Roz Picard – Nathan Intrator – Keith Johnson – John Mattison – Roozbeh Ghaffari – Poppy Crum – Phillip Alvelda – Marom Bikson – Ed Simcox – Sean Lane

DARPA: Three aircraft virtually controlled with brain chip

Building on 2015 research that enabled a paralyzed person to virtually control an F-35 jet, DARPA’s Justin Sanchez has announced that the brain can be used to command and control three types of aircraft simultaneously.

Click to view Justin Sanchez’s talk at ApplySci’s 2018 conference at Stanford University



VR + motion capture to study movement and sensory processing in autism, AD, and TBI

MoBI, developed by John Foxe at the University of Rochester, combines VR, EEG, and motion-capture sensors to study movement difficulties associated with neurological disorders.

According to Foxe, “The MoBI system allows us to get people walking, using their senses, and solving the types of tasks you face every day, all the while measuring brain activity and tracking how the processes associated with cognition and movement interact.”

Motion sensor and EEG data, collected while a subject walks in a virtual environment, are synchronized, allowing researchers to track which brain areas are activated when walking or performing a task. Brain responses while moving, performing tasks, or doing both at once are then analyzed.
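Synchronizing the two streams is essentially a timestamp-alignment problem. A small sketch, with assumed sample rates (512 Hz EEG, 120 Hz motion capture; not MoBI's actual rates), matching each EEG sample to its nearest motion-capture frame:

```python
import numpy as np

def align_streams(eeg_times, mocap_times):
    """For each EEG sample time, return the index of the nearest
    motion-capture sample (nearest-neighbor timestamp matching)."""
    idx = np.searchsorted(mocap_times, eeg_times)
    idx = np.clip(idx, 1, len(mocap_times) - 1)
    left = mocap_times[idx - 1]
    right = mocap_times[idx]
    # step back one index where the left neighbor is closer
    idx = idx - (eeg_times - left < right - eeg_times)
    return idx

eeg_t = np.arange(0, 1, 1 / 512)    # assumed EEG at 512 Hz
mocap_t = np.arange(0, 1, 1 / 120)  # assumed motion capture at 120 Hz
pairs = align_streams(eeg_t, mocap_t)
```

Once aligned, each EEG sample can be analyzed jointly with the body pose recorded at (nearly) the same instant.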

This technique could potentially guide treatment in autism, dementia, and TBI, conditions characterized by difficulty processing sensory information from multiple sources and by abnormal gait.

Click to view University of Rochester video



PREFERRED REGISTRATION AVAILABLE THROUGH TODAY, SEPTEMBER 7TH

Brain imaging to detect suicidal thoughts

Last year, Carnegie Mellon professor Marcel Just and Pitt professor David Brent used brain imaging to identify suicidal thoughts.

Supported by the NIMH, they are now working to establish reliable neurocognitive markers of suicidal ideation and attempt. They will examine differences in brain activation patterns between suicidal and non-suicidal young adults as they think about suicide-related words and other positive and negative concepts, and use machine learning to identify neural signatures of suicidal ideation and behavior.

According to Just,  “We were previously able to obtain consistent neural signatures to determine whether someone was thinking about objects like a banana or a hammer by examining their fMRI brain activation patterns. But now we are able to tell whether someone is thinking about ‘trouble’ or ‘death’ in an unusual way. The alterations in the signatures of these concepts are the ‘neurocognitive thought markers’ that our machine learning program looks for.”
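The "altered signature" idea Just describes can be illustrated with a small vector comparison. The vectors and threshold below are invented for illustration; the actual work compares fMRI activation patterns with a trained machine-learning classifier:

```python
import numpy as np

# Toy "neural signature" comparison: a concept's activation pattern is a
# vector, and an altered signature is flagged when it diverges from a
# group-normative pattern (cosine similarity below an assumed threshold).

def cosine(a, b):
    """Cosine similarity between two activation-pattern vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

normative_death = np.array([0.8, 0.1, 0.5, 0.2])  # invented group pattern
subject_a = np.array([0.7, 0.2, 0.6, 0.2])        # close to normative
subject_b = np.array([0.1, 0.9, 0.1, 0.8])        # markedly altered

THRESHOLD = 0.8  # assumed cutoff, for illustration only
flag_a = cosine(subject_a, normative_death) < THRESHOLD
flag_b = cosine(subject_b, normative_death) < THRESHOLD
```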




AI predicts response to antipsychotic drugs, could distinguish between disorders

Lawson Health Research Institute, Mind Research Network and Brainnetome Center researchers have developed an algorithm that analyzes brain scans to classify illness in patients with complex mood disorders and help predict their response to medication.

A recent study analyzed and compared fMRI scans of people with MDD, bipolar I, and no history of mental illness, and found that each group’s brain networks differed, including regions of the default mode network and thalamus.

When tested against participants with a known MDD or bipolar I diagnosis, the algorithm correctly classified illness with 92.4 percent accuracy.
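As an illustration of the classify-by-brain-network workflow (not the study's actual algorithm), here is a toy nearest-centroid classifier over synthetic "connectivity" feature vectors:

```python
import numpy as np

# Two synthetic diagnostic groups with separable feature distributions;
# classification assigns a new scan to the nearest group centroid.
# All data here are random stand-ins, not fMRI features.

rng = np.random.default_rng(0)

def make_group(center, n=20, dim=8):
    """Draw n feature vectors scattered around a group-typical pattern."""
    return center + 0.3 * rng.standard_normal((n, dim))

mdd = make_group(np.zeros(8))
bipolar = make_group(np.ones(8))
centroids = {"MDD": mdd.mean(axis=0), "bipolar I": bipolar.mean(axis=0)}

def classify(x):
    """Label a feature vector by its nearest group centroid."""
    return min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))

held_out = make_group(np.zeros(8), n=10)  # new scans from the MDD group
acc = np.mean([classify(x) == "MDD" for x in held_out])
```

The real study's accuracy figure comes from far richer features and methods; the sketch only shows the train-centroids-then-classify shape of the task.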

The team also imaged the brains of 12 patients with complex mood disorders but without a clear diagnosis, to predict diagnosis and examine medication response.

The researchers hypothesized that participants classified by the algorithm as having MDD would respond to antidepressants while those classified as having bipolar I would respond to mood stabilizers. When tested with the complex patients, 11 out of 12 responded to the medication predicted by the algorithm.

According to lead researcher Elizabeth Osuch: “This study takes a major step towards finding a biomarker of medication response in emerging adults with complex mood disorders. It also suggests that we may one day have an objective measure of psychiatric illness through brain imaging that would make diagnosis faster, more effective and more consistent across health care providers.”



Invasive deep brain stimulation for alcoholism?

Stanford’s Casey Halpern and Allen Ho have used deep brain stimulation to target the nucleus accumbens, thought to reduce impulsive behavior, to combat alcoholism in animal studies and a pilot human study.

DBS is used in severe Parkinson’s disease but is not FDA-approved for addiction. Infection and other complications are risks of this invasive surgery.

ApplySci hopes that strides in behavioral therapy, including Alcoholics Anonymous, will continue to improve outcomes in addicted individuals, diminishing the need for invasive procedures.

The Stanford study was published in Neurosurgical Focus.



AI-optimized glioblastoma chemotherapy

Pratik Shah, Gregory Yauney, and MIT Media Lab researchers have developed an AI model that could make glioblastoma chemotherapy regimens less toxic but still effective. It analyzes current regimens and iteratively adjusts doses, seeking the lowest potency and frequency that still reduce tumor size.

In simulated trials of 50 patients, the machine-learning model designed treatment cycles that reduced potency to a quarter or half of a full dose, and often skipped doses altogether, scheduling administrations roughly twice a year instead of monthly.

Reinforcement learning was used to teach the model to favor behaviors that lead to a desired outcome. A combination of temozolomide and procarbazine, lomustine, and vincristine, administered over weeks or months, was studied.

As the model explored the regimen, it decided at each planned dosing interval whether to initiate or withhold a dose. If it administered one, it then decided whether the entire dose, or only a portion, was necessary. With each action, it pinged another clinical model to see whether the mean tumor diameter shrank.

When full doses were given, the model was penalized, so it instead chose fewer, smaller doses. According to Shah, harmful actions were minimized while still reaching the desired outcome.
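The reward design lends itself to a toy sketch. The tumor dynamics, dose costs, and penalty weights below are all invented (none come from the MIT model); the point is only that penalizing both dose and residual tumor makes a reduced dose outscore a full one:

```python
# Toy illustration of the dosing reward: the agent earns reward for tumor
# shrinkage, pays a per-interval cost proportional to the dose given, and
# pays a final penalty for tumor remaining at the end.

DOSES = [0.0, 0.25, 0.5, 1.0]   # fraction of a full dose at each interval
DOSE_COST = 0.1                 # per-interval toxicity proxy per unit dose
FINAL_WEIGHT = 1.0              # penalty weight on residual tumor size

def tumor_step(diameter, dose):
    """Hypothetical dynamics: the tumor grows 2% per interval,
    and a full dose shrinks it 10%."""
    return diameter * (1.02 - 0.10 * dose)

def total_reward(dose, steps=20, d0=1.0):
    """Total reward for repeating the same dose fraction at every interval."""
    d, reward = d0, 0.0
    for _ in range(steps):
        new_d = tumor_step(d, dose)
        reward += (d - new_d) - DOSE_COST * dose  # shrinkage minus toxicity
        d = new_d
    return reward - FINAL_WEIGHT * d              # penalize residual tumor

best = max(DOSES, key=total_reward)  # the reduced 0.5 dose wins here
```

A full reinforcement-learning agent would learn a per-interval policy rather than a fixed dose, but the same penalty structure is what steers it toward fewer, smaller doses.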

The J. Craig Venter Institute’s Nicholas Schork said that the model offers a major improvement over the conventional “eye-balling” method of administering doses, observing how patients respond, and adjusting accordingly.



Sensor could continuously monitor brain aneurysm treatment

Georgia Tech’s Woon-Hong Yeo has developed a proof-of-concept flexible, stretchable sensor that can continuously monitor hemodynamics when integrated with a stent-like flow diverter after a brain aneurysm. Blood flow is measured via capacitance changes.

According to Pittsburgh professor Youngjae Chun, who collaborated with Yeo: “We have developed a highly stretchable, hyper-elastic flow diverter using a highly-porous thin film nitinol. None of the existing flow diverters, however, provide quantitative, real-time monitoring of hemodynamics within the sac of cerebral aneurysm. Through the collaboration with Dr. Yeo’s group at Georgia Tech, we have developed a smart flow-diverter system that can actively monitor the flow alterations during and after surgery.”

The goal is a batteryless, wireless device, stretchable and flexible enough to be miniaturized, routed through the tiny, complex blood vessels of the brain, and deployed without damage. According to Yeo, “It’s very challenging to insert such an electronic system into the brain’s narrow and contoured blood vessels.”

The sensor uses a micro-membrane made of two metal layers surrounding a dielectric material, and wraps around the flow diverter. The device is a few hundred nanometers thick, and is produced using nanofabrication and material transfer printing techniques, encapsulated in a soft elastomeric material.

“The membrane is deflected by the flow through the diverter, and depending on the strength of the flow, the velocity difference, the amount of deflection changes,” Yeo explained. “We measure the amount of deflection based on the capacitance change, because the capacitance is inversely proportional to the distance between two metal layers.”
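The relation Yeo describes is the standard parallel-plate law, C = εA/d: as flow deflects the membrane and narrows the gap d between the metal layers, capacitance rises. A minimal numeric sketch with illustrative geometry (the device's actual dimensions and materials constants are not given here):

```python
# Parallel-plate capacitance: C = eps0 * eps_r * A / d.
# All values below are assumptions chosen for illustration.

EPS0 = 8.854e-12   # vacuum permittivity, F/m
EPS_R = 3.0        # assumed relative permittivity of the dielectric
AREA = 1e-6        # assumed membrane area, m^2 (1 mm^2)

def capacitance(gap_m):
    """Capacitance for a given separation between the two metal layers."""
    return EPS0 * EPS_R * AREA / gap_m

c_rest = capacitance(500e-9)       # assumed 500 nm gap at rest
c_deflected = capacitance(400e-9)  # gap narrowed by flow deflection
```

Because C is inversely proportional to the gap, a 20% narrowing (500 nm to 400 nm) raises capacitance by 25%, which is the flow signal the readout electronics would measure.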

Because the brain’s blood vessels are so small, the flow diverters can be no more than five to ten millimeters long and a few millimeters in diameter. That rules out the use of conventional sensors with rigid and bulky electronic circuits.

“Putting functional materials and circuits into something that size is pretty much impossible right now,” Yeo said. “What we are doing is very challenging based on conventional materials and design strategies.”

The researchers tested three materials for their sensors: gold, magnesium and the nickel-titanium alloy known as nitinol. All can be safely used in the body, but magnesium offers the potential to be dissolved into the bloodstream after it is no longer needed.

The proof-of-principle sensor was connected to a guide wire in the in vitro testing, but Yeo and his colleagues are now working on a wireless version that could be implanted in a living animal model. While implantable sensors are being used clinically to monitor abdominal blood vessels, application in the brain creates significant challenges.

“The sensor has to be completely compressed for placement, so it must be capable of stretching 300 or 400 percent,” said Yeo. “The sensor structure has to be able to endure that kind of handling while being conformable and bending to fit inside the blood vessel.”



Google incorporates depression screening in search

Google has introduced a new depression screening feature. When the word “depression” is used in a search, mobile users are offered the PHQ-9 questionnaire, which screens for symptom severity. A “Knowledge Panel” containing information and potential treatments appears at the top of the page.
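PHQ-9 scoring itself is simple to express: nine items, each answered 0-3, summed to a 0-27 total and mapped to the published severity bands. A minimal sketch (how Google's feature presents results beyond offering the questionnaire is not detailed here):

```python
# PHQ-9 scoring: nine items scored 0-3, summed, then mapped to the
# standard published severity bands.

BANDS = [(0, "minimal"), (5, "mild"), (10, "moderate"),
         (15, "moderately severe"), (20, "severe")]

def phq9_score(answers):
    """Sum nine item responses (each 0-3) into a 0-27 total."""
    assert len(answers) == 9 and all(0 <= a <= 3 for a in answers)
    return sum(answers)

def severity(score):
    """Map a total score to its PHQ-9 severity band."""
    label = BANDS[0][1]
    for threshold, name in BANDS:
        if score >= threshold:
            label = name
    return label

result = severity(phq9_score([1, 1, 2, 0, 1, 0, 1, 0, 0]))  # score 6, "mild"
```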

The goal is self-awareness, and encouragement to seek help when needed.

Another company dedicated to improving brain health through mobile technology is Mindstrong Health.  The startup is developing clinically validated, phone-based mental illness screening, monitoring and treatment methods.  Co-founder Tom Insel will discuss their work at ApplySci’s upcoming Wearable Tech + Digital Health + Neurotech conference, on September 19th at the MIT Media Lab.


Join ApplySci at Wearable Tech + Digital Health + NeuroTech Boston on September 19, 2017 at the MIT Media Lab – featuring  Joi Ito – Ed Boyden – Roz Picard – George Church – Nathan Intrator –  Tom Insel – John Rogers – Jamshid Ghajar – Phillip Alvelda – Michael Weintraub – Nancy Brown – Steve Kraus – Bill Geary – Mary Lou Jepsen

Registration rates increase Friday, August 25th.


ANNOUNCING WEARABLE TECH + DIGITAL HEALTH + NEUROTECH SILICON VALLEY – FEBRUARY 26-27, 2018 @ STANFORD UNIVERSITY – FEATURING: ZHENAN BAO – JUSTIN SANCHEZ – BRYAN JOHNSON – NATHAN INTRATOR – VINOD KHOSLA