Brain imaging to detect suicidal thoughts

Last year, Carnegie Mellon professor Marcel Just and Pitt professor David Brent used brain imaging to identify suicidal thoughts.

Supported by the NIMH, they are now working to establish reliable neurocognitive markers of suicidal ideation and attempt. They will examine the differences in brain activation patterns between suicidal and non-suicidal young adults as they think about suicide-related words, as well as positive and negative concepts, and use machine learning to identify neural signatures of suicidal ideation and behavior.

According to Just,  “We were previously able to obtain consistent neural signatures to determine whether someone was thinking about objects like a banana or a hammer by examining their fMRI brain activation patterns. But now we are able to tell whether someone is thinking about ‘trouble’ or ‘death’ in an unusual way. The alterations in the signatures of these concepts are the ‘neurocognitive thought markers’ that our machine learning program looks for.”
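To give a sense of what this kind of machine-learning classification of fMRI activation patterns can look like, here is a minimal Python sketch. It is not the authors' pipeline: the data are synthetic, and the group sizes, voxel counts, and classifier choice are placeholders for illustration only.

```python
# Minimal sketch (not the authors' pipeline): classify concept-level fMRI
# activation patterns with scikit-learn, using synthetic data in place of the
# real voxel features extracted for each stimulus word. Group sizes and voxel
# counts are hypothetical.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n_per_group, n_voxels = 17, 200

# One activation pattern per participant; the second group's patterns carry a
# small systematic shift that stands in for an altered concept signature.
X = np.vstack([
    rng.normal(0.0, 1.0, (n_per_group, n_voxels)),
    rng.normal(0.4, 1.0, (n_per_group, n_voxels)),
])
y = np.array([0] * n_per_group + [1] * n_per_group)

# Cross-validation estimates how well the signatures separate the two groups.
clf = GaussianNB()
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```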


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 24, 2018 at the MIT Media Lab. Speakers include: Rudy Tanzi – Mary Lou Jepsen – George Church – Roz Picard – Nathan Intrator – Keith Johnson – John Mattison – Roozbeh Ghaffari – Poppy Crum – Phillip Alvelda – Marom Bikson – Ed Simcox – Sean Lane

PREFERRED REGISTRATION AVAILABLE THROUGH FRIDAY, SEPTEMBER 7TH

AI predicts response to antipsychotic drugs, could distinguish between disorders

Lawson Health Research Institute, Mind Research Network and Brainnetome Center researchers have developed an algorithm that analyzes brain scans to classify illness in patients with complex mood disorders and help predict their response to medication.

A recent study analyzed and compared fMRI scans of those with MDD, bipolar I,  and no history of mental illness, and found that each group’s brain networks differed, including regions in the default mode network and thalamus.

When tested against participants with a known MDD or Bipolar I diagnosis, the algorithm correctly classified illness with 92.4 per cent accuracy.

The team also imaged the brains of 12 complex mood disorder patients without a clear diagnosis, to predict diagnosis and examine medication response.

The researchers hypothesized that participants classified by the algorithm as having MDD would respond to antidepressants while those classified as having bipolar I would respond to mood stabilizers. When tested with the complex patients, 11 out of 12 responded to the medication predicted by the algorithm.
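The general workflow, training a classifier on scans from patients with a known diagnosis and then applying it to complex patients to suggest a medication class, can be sketched as below. This is an illustrative example only, with synthetic features and hypothetical group sizes, not the Lawson team's actual algorithm.

```python
# Illustrative sketch only (not Lawson's algorithm): train a classifier on
# fMRI-derived features from patients with known diagnoses, then apply it to
# an undiagnosed "complex" patient to suggest a medication class. All features
# and sizes here are synthetic placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_per_group, n_features = 40, 300

X = np.vstack([
    rng.normal(0.0, 1.0, (n_per_group, n_features)),   # known MDD
    rng.normal(0.3, 1.0, (n_per_group, n_features)),   # known bipolar I
])
y = np.array(["MDD"] * n_per_group + ["BipolarI"] * n_per_group)

model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
model.fit(X, y)

# A new complex patient's features -> predicted label -> suggested drug class
# (antidepressant for MDD, mood stabilizer for bipolar I).
new_patient = rng.normal(0.25, 1.0, (1, n_features))
label = model.predict(new_patient)[0]
print(label, "->", {"MDD": "antidepressant", "BipolarI": "mood stabilizer"}[label])
```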

According to lead researcher Elizabeth Osuch: “This study takes a major step towards finding a biomarker of medication response in emerging adults with complex mood disorders. It also suggests that we may one day have an objective measure of psychiatric illness through brain imaging that would make diagnosis faster, more effective and more consistent across health care providers.”


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 24, 2018 at the MIT Media Lab. Speakers include: Rudy Tanzi – Mary Lou Jepsen – George Church – Roz Picard – Nathan Intrator – Keith Johnson – Juan Enriquez – John Mattison – Roozbeh Ghaffari – Poppy Crum – Phillip Alvelda – Marom Bikson – Ed Simcox – Sean Lane

Invasive deep brain stimulation for alcoholism?

Stanford’s Casey Halpern and Allen Ho have used deep brain stimulation targeting the nucleus accumbens, which is thought to reduce impulsive behavior, to combat alcoholism in animal and pilot human studies.

DBS is used in severe Parkinson’s disease and is not approved by the FDA for addiction. Infection and other complications are risks of this invasive surgery.

ApplySci hopes that strides in behavioral therapy, including Alcoholics Anonymous, will continue to improve outcomes in addicted individuals, diminishing the need for invasive procedures.

The Stanford study was published in Neurosurgical Focus.


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 24, 2018 at the MIT Media Lab. Speakers include: Rudy Tanzi – Mary Lou Jepsen – George Church – Roz Picard – Nathan Intrator – Keith Johnson – Juan Enriquez – John Mattison – Roozbeh Ghaffari – Poppy Crum – Phillip Alvelda – Marom Bikson – Ed Simcox – Sean Lane

AI-optimized glioblastoma chemotherapy

Pratik Shah, Gregory Yauney, and MIT Media Lab researchers have developed an AI model that could make glioblastoma chemotherapy regimens less toxic but still effective. It analyzes current regimens and iteratively adjusts doses to find the lowest potency and frequency that still reduce tumor size.

In simulated trials of 50 patients, the machine-learning model designed treatment cycles that reduced potency to a quarter or half of the original doses. It often skipped doses altogether, scheduling administrations twice a year instead of monthly.

Reinforcement learning was used to teach the model to favor behaviors that lead to a desired outcome. Regimens of temozolomide, and of procarbazine, lomustine, and vincristine in combination, administered over weeks or months, were studied.

As the model explored the regimen, it decided on an action at each planned dosing interval: it either initiated or withheld a dose, and if it administered one, it decided whether the entire dose or only a portion was necessary. With each action, it queried another clinical model to see whether the mean tumor diameter shrank.

When full doses were given, the model was penalized, so it instead chose fewer, smaller doses. According to Shah, harmful actions were reduced to get to the desired outcome.
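A toy version of the reinforcement-learning setup, not the MIT model, shows how penalizing dose potency pushes an agent toward fewer, smaller doses. The tumor model, dose options, and reward weights below are invented for illustration only.

```python
# Toy Q-learning sketch of the dosing idea (not the MIT model). The agent picks
# a dose fraction at each interval, a crude simulated tumor responds, and the
# reward trades off shrinkage against a penalty on dose potency. The state is
# reduced to the interval index for brevity.
import random

DOSES = [0.0, 0.25, 0.5, 1.0]   # fraction of the standard dose (assumed options)
N_INTERVALS = 10                # planned dosing intervals per simulated patient
TOX_WEIGHT = 0.15               # weight of the quadratic dose penalty (assumed)

def simulate_step(tumor, dose):
    """Very crude stand-in for a tumor response model: higher doses shrink more."""
    shrink = 0.2 * dose + random.uniform(-0.02, 0.02)
    regrow = 0.03
    return max(0.0, tumor - shrink + regrow)

Q = {(t, d): 0.0 for t in range(N_INTERVALS) for d in DOSES}
alpha, gamma, eps = 0.1, 0.95, 0.2

for episode in range(5000):
    tumor = 1.0
    for t in range(N_INTERVALS):
        dose = random.choice(DOSES) if random.random() < eps \
            else max(DOSES, key=lambda d: Q[(t, d)])
        new_tumor = simulate_step(tumor, dose)
        # Reward shrinkage; penalize potency so the agent prefers smaller doses,
        # mirroring the idea of penalizing full doses.
        reward = (tumor - new_tumor) - TOX_WEIGHT * dose ** 2
        future = 0.0 if t == N_INTERVALS - 1 else max(Q[(t + 1, d)] for d in DOSES)
        Q[(t, dose)] += alpha * (reward + gamma * future - Q[(t, dose)])
        tumor = new_tumor

print("learned schedule:", [max(DOSES, key=lambda d: Q[(t, d)]) for t in range(N_INTERVALS)])
```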

The J. Craig Venter Institute’s Nicholas Schork said that the model offers a major improvement over the conventional “eye-balling” method of administering doses, observing how patients respond, and adjusting accordingly.


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 24, 2018 at the MIT Media Lab. Speakers include: Rudy Tanzi – Mary Lou Jepsen – George Church – Roz Picard – Nathan Intrator – Keith Johnson – Juan Enriquez – John Mattison – Roozbeh Ghaffari – Poppy Crum – Phillip Alvelda – Marom Bikson – Ed Simcox – Sean Lane

Sensor could continuously monitor brain aneurysm treatment

Georgia Tech’s Woon-Hong Yeo has developed a proof-of-concept flexible, stretchable sensor that can continuously monitor hemodynamics when integrated with a stent-like flow diverter used to treat a brain aneurysm. Blood flow is measured using capacitance changes.

According to Pittsburgh professor Youngjae Chun, who collaborated with Yeo: “We have developed a highly stretchable, hyper-elastic flow diverter using a highly-porous thin film nitinol. None of the existing flow diverters, however, provide quantitative, real-time monitoring of hemodynamics within the sac of cerebral aneurysm. Through the collaboration with Dr. Yeo’s group at Georgia Tech, we have developed a smart flow-diverter system that can actively monitor the flow alterations during and after surgery.”

The goal is a batteryless, wireless device, extremely stretchable and flexible, that can be miniaturized enough to be routed through the tiny and complex blood vessels of the brain and then deployed without damage. According to Yeo, “It’s very challenging to insert such an electronic system into the brain’s narrow and contoured blood vessels.”

The sensor uses a micro-membrane made of two metal layers surrounding a dielectric material, and wraps around the flow diverter. The device is a few hundred nanometers thick, and is produced using nanofabrication and material transfer printing techniques, encapsulated in a soft elastomeric material.

“The membrane is deflected by the flow through the diverter, and depending on the strength of the flow, the velocity difference, the amount of deflection changes,” Yeo explained. “We measure the amount of deflection based on the capacitance change, because the capacitance is inversely proportional to the distance between two metal layers.”
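A back-of-the-envelope parallel-plate calculation illustrates the principle; the membrane area, dielectric constant, and gap values below are assumptions for illustration, not the device's real geometry.

```python
# Illustrative parallel-plate approximation of the micro-membrane sensor:
# capacitance C = eps * A / d is inversely proportional to the gap d between
# the two metal layers, so a flow-induced deflection that narrows the gap
# shows up as a capacitance increase. All dimensions are assumed values.
EPS0 = 8.854e-12                 # vacuum permittivity, F/m
EPS_R = 3.0                      # assumed relative permittivity of the dielectric
AREA = (1e-3) ** 2               # assumed 1 mm x 1 mm membrane area, m^2

def capacitance(gap_m):
    return EPS0 * EPS_R * AREA / gap_m

def gap_from_capacitance(c_farads):
    return EPS0 * EPS_R * AREA / c_farads

rest_gap = 500e-9                           # assumed 500 nm gap at rest
c_rest = capacitance(rest_gap)
c_flow = capacitance(rest_gap - 100e-9)     # assumed 100 nm deflection under flow

print(f"rest: {c_rest * 1e12:.1f} pF, deflected: {c_flow * 1e12:.1f} pF")
print(f"inferred deflection: {(rest_gap - gap_from_capacitance(c_flow)) * 1e9:.0f} nm")
```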

Because the brain’s blood vessels are so small, the flow diverters can be no more than five to ten millimeters long and a few millimeters in diameter. That rules out the use of conventional sensors with rigid and bulky electronic circuits.

“Putting functional materials and circuits into something that size is pretty much impossible right now,” Yeo said. “What we are doing is very challenging based on conventional materials and design strategies.”

The researchers tested three materials for their sensors: gold, magnesium and the nickel-titanium alloy known as nitinol. All can be safely used in the body, but magnesium offers the potential to be dissolved into the bloodstream after it is no longer needed.

The proof-of-principle sensor was connected to a guide wire in the in vitro testing, but Yeo and his colleagues are now working on a wireless version that could be implanted in a living animal model. While implantable sensors are being used clinically to monitor abdominal blood vessels, application in the brain creates significant challenges.

“The sensor has to be completely compressed for placement, so it must be capable of stretching 300 or 400 percent,” said Yeo. “The sensor structure has to be able to endure that kind of handling while being conformable and bending to fit inside the blood vessel.”


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 24, 2018 at the MIT Media Lab. Speakers include: Rudy Tanzi – Mary Lou Jepsen – George Church – Roz Picard – Nathan Intrator – Keith Johnson – Juan Enriquez – John Mattison – Roozbeh Ghaffari – Poppy Crum – Phillip Alvelda – Marom Bikson – Ed Simcox – Sean Lane

David Axelrod: VR in healthcare & the Stanford Virtual Heart | ApplySci @ Stanford

David Axelrod discussed VR-based learning in healthcare, and the Stanford Virtual Heart, at ApplySci’s recent Wearable Tech + Digital Health + Neurotech conference at Stanford:


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 24, 2018 at the MIT Media Lab. Speakers include: Rudy Tanzi – Mary Lou Jepsen – George Church – Roz Picard – Nathan Intrator – Keith Johnson – Juan Enriquez – John Mattison – Roozbeh Ghaffari – Poppy Crum – Phillip Alvelda – Marom Bikson

REGISTRATION RATES INCREASE JULY 6th

Combined BCI + FES system could improve stroke recovery

Jose Millan and EPFL colleagues have combined a brain-computer interface with functional electrical stimulation in a system that, in a recent study, enhanced the restoration of limb use after a stroke.

According to Millan: “The key is to stimulate the nerves of the paralyzed arm precisely when the stroke-affected part of the brain activates to move the limb, even if the patient can’t actually carry out the movement. That helps re-establish the link between the two nerve pathways where the signal comes in and goes out.”

Twenty-seven patients, each with a similar lesion that resulted in moderate to severe arm paralysis following a stroke, participated in the trial. Half were treated with the dual-therapy approach and reported clinically significant improvements. A BCI system enabled the researchers to pinpoint where the electrical activity occurred in the brain when patients tried to extend their hands. Each time this activity was identified, the system stimulated the muscles controlling the corresponding wrist and finger movements.

The control group received FES only, and had their arm muscles stimulated randomly. This allowed the scientists to understand how much additional motor function improvement could be attributed to the BCI system.
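The closed loop can be sketched roughly as follows; the sampling rate, detection rule, and stimulation call are illustrative stand-ins, not EPFL's actual decoder or stimulator interface.

```python
# Hedged sketch of the BCI + FES closed loop (not EPFL's implementation): when
# a simple detector sees motor-intent activity in the EEG, it triggers
# functional electrical stimulation of the wrist/finger extensor muscles.
import numpy as np

FS = 250                      # assumed EEG sampling rate, Hz
WINDOW = FS                   # 1-second analysis windows
THRESHOLD = 2.0               # assumed detection threshold on band-power drop

def mu_band_power(window, fs=FS):
    """Power in the 8-12 Hz mu band, which desynchronizes during movement attempts."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), 1.0 / fs)
    return spectrum[(freqs >= 8) & (freqs <= 12)].mean()

def detect_movement_attempt(window, baseline_power):
    # Event-related desynchronization: mu power drops when the patient tries to move.
    return mu_band_power(window) < baseline_power / THRESHOLD

def trigger_fes():
    # Placeholder for the stimulator interface (hardware-specific in practice).
    print("FES pulse -> wrist and finger extensors")

# Simulated resting EEG vs. an "attempt" window with suppressed mu rhythm.
rng = np.random.default_rng(2)
t = np.arange(WINDOW) / FS
rest = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(WINDOW)
attempt = 0.2 * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(WINDOW)

baseline = mu_band_power(rest)
if detect_movement_attempt(attempt, baseline):
    trigger_fes()
```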


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 24, 2018 at the MIT Media Lab. Speakers include: Rudy Tanzi – Mary Lou Jepsen – George Church – Roz Picard – Nathan Intrator – Keith Johnson – Juan Enriquez – John Mattison – Roozbeh Ghaffari – Poppy Crum – Phillip Alvelda – Marom Bikson

REGISTRATION RATES INCREASE JUNE 29TH

Tony Chahine on human presence, reimagined | ApplySci @ Stanford

Myant’s Tony Chahine reimagined human presence at ApplySci’s recent Wearable Tech + Digital Health + Neurotech conference at Stanford:


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 24, 2018 at the MIT Media Lab. Speakers include: Rudy Tanzi – Mary Lou Jepsen – George Church – Roz Picard – Nathan Intrator – Keith Johnson – Juan Enriquez – John Mattison – Roozbeh Ghaffari – Poppy Crum – Phillip Alvelda – Marom Bikson

REGISTRATION RATES INCREASE JUNE 29TH

Thought, gesture-controlled robots

MIT CSAIL’s Daniela Rus has developed an EEG/EMG robot control system based on brain signals and finger gestures.

Building on the team’s previous brain-controlled robot work, the new system detects, in real-time, if a person notices a robot’s error. Muscle activity measurement enables the use of hand gestures to select the correct option.

According to Rus: “This work, combining EEG and EMG feedback, enables natural human-robot interactions for a broader set of applications than we’ve been able to do before using only EEG feedback. By including muscle feedback, we can use gestures to command the robot spatially, with much more nuance and specificity.”
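A minimal sketch of how such hybrid EEG/EMG supervision might be wired together, with stand-in classifiers rather than CSAIL's actual models, looks like this:

```python
# Hedged sketch of the hybrid idea (not CSAIL's system): an EEG classifier
# flags when the supervisor notices a robot error, and an EMG classifier
# reads a hand gesture to redirect the robot to the correct target.
# The feature names below are hypothetical placeholders.

def detect_error_potential(eeg_window):
    """Stand-in for an error-related-potential (ErrP) classifier on EEG."""
    return eeg_window["errp_score"] > 0.5        # hypothetical feature and threshold

def classify_gesture(emg_window):
    """Stand-in for an EMG gesture classifier (left vs. right hand flick)."""
    return "left" if emg_window["flexor_rms"] > emg_window["extensor_rms"] else "right"

def supervise(robot_choice, eeg_window, emg_window):
    # If no error signal is detected, let the robot proceed with its own choice.
    if not detect_error_potential(eeg_window):
        return robot_choice
    # Otherwise use the gesture to pick the corrected target.
    return classify_gesture(emg_window)

# Toy run: the robot picks "right", the human notices an error and gestures left.
eeg = {"errp_score": 0.8}
emg = {"flexor_rms": 0.6, "extensor_rms": 0.2}
print(supervise("right", eeg, emg))              # -> "left"
```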

The researchers used  a humanoid robot from Rethink Robotics, while a human controller wore electrodes on her or his head and arm.

Human supervision increased selection of the correct target from 70 to 97 per cent.

The goal is a system that can be used by people with limited mobility or language disorders.

Click to view CSAIL video


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 24, 2018 at the MIT Media Lab. Speakers include: Rudy Tanzi – Mary Lou Jepsen – George Church – Roz Picard – Nathan Intrator – Keith Johnson – Juan Enriquez – John Mattison – Roozbeh Ghaffari – Poppy Crum – Phillip Alvelda – Marom Bikson

REGISTRATION RATES INCREASE FRIDAY, JUNE 22nd

Phillip Alvelda: More intelligent; less artificial | ApplySci @ Stanford

Phillip Alvelda discussed AI and the brain at ApplySci’s recent Wearable Tech + Digital Health + Neurotech Silicon Valley conference at Stanford:


Dr. Alvelda will join us again at Wearable Tech + Digital Health + Neurotech Boston, on September 24, 2018 at the MIT Media Lab. Other speakers include: Rudy Tanzi – Mary Lou Jepsen – George Church – Roz Picard – Nathan Intrator – Keith Johnson – Juan Enriquez – John Mattison – Roozbeh Ghaffari – Poppy Crum – Marom Bikson

REGISTRATION RATES INCREASE JUNE 22nd