Category Archives: Brain

TMS + VR for sensory, motor skill recovery after stroke


EPFL’s Michela Bassolino has combined transcranial magnetic stimulation (TMS) with VR to create hand sensations.

Stimulating the motor cortex activated subjects’ hand muscles and induced brief involuntary movements.

In a recent study, when subjects observed a virtual hand moving at the same time and in a similar manner to their own hand during TMS, they felt that the virtual hand was a controllable part of their own body.

Twenty-five of 32 participants experienced the effect within two minutes of stimulation. Bassolino believes the effect may also be achieved through less immersive video.

The technology could help patients recover sensory and motor skills after a stroke, and could also be used to enhance gaming.

Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 25, 2018 at the MIT Media Lab

Brain scans, spinal fluid Alzheimer’s biomarkers


Clifford Jack and Mayo Clinic colleagues have proposed a biomarker-based, rather than behavior-based, standard for diagnosing Alzheimer’s disease.

Instead of defining the disease through symptoms such as memory or thinking problems, the researchers focus on biological changes, including brain plaques and tangles, determined by brain scans and spinal fluid tests.

The new approach can help researchers study patients with normal brain function who are likely to develop dementia, and can help avoid misdiagnosis. Up to 30 percent of patients diagnosed with Alzheimer’s on the basis of behavior do not have the disease; their memory or thinking problems are caused by something else.


Bone-conduction headset for voice-free communication


MIT’s Arnav Kapur has created a device that senses and interprets the neuromuscular signals created when we subvocalize. AlterEgo rests on the ear and extends across the jaw. One pad sticks beneath the lower lip, and another below the chin. The device senses subvocalization signals in the jaw and facial tissue that are undetectable by humans.

Two bone-conduction headphones pick up inner-ear vibrations, and four electrodes detect neuromuscular signals. Algorithms determine what a wearer is subvocalizing and can report back silently. This enables communication without speaking.
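The decoding pipeline itself isn’t described in the post. As a rough, hypothetical sketch of the idea, the example below matches a feature vector derived from the four electrode channels against per-word templates by nearest centroid; the words, feature definition, and numbers are all invented for illustration:

```python
import math

# Hypothetical per-word templates: one feature vector per subvocalized word,
# e.g. mean rectified signal amplitude on each of the four electrode channels.
TEMPLATES = {
    "yes":  [0.82, 0.10, 0.55, 0.30],
    "no":   [0.15, 0.75, 0.20, 0.60],
    "time": [0.50, 0.50, 0.90, 0.10],
}

def classify(features):
    """Return the word whose template is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda word: dist(TEMPLATES[word], features))

print(classify([0.80, 0.12, 0.50, 0.33]))  # prints "yes"
```

A production system would use a trained sequence model over a much richer feature stream, but the template-matching shape above is the simplest way to picture signal-to-word decoding.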

In studies, researchers interacted with a computer to solve problems; one participant asked the computer the time and received an accurate response; and another silently played a game of chess with a colleague.

Click to view MIT Media Lab video


Prosthetic system uses one’s own patterns to encode, recall memory


Robert Hampson and colleagues at Wake Forest and USC have developed a prosthetic system that uses a person’s own memory patterns to facilitate the brain’s ability to encode and recall memory.

In a recent study, participants’ short-term memory performance improved 35 to 37 percent over baseline measurements. The study focused on improving episodic memory, the type of memory most commonly lost in people with Alzheimer’s disease, stroke, and head injury.

According to Hampson: “This is the first time scientists have been able to identify a patient’s own brain cell code or pattern for memory and, in essence, ‘write in’ that code to make existing memory work better, an important first step in potentially restoring memory loss.”
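As a greatly simplified, hypothetical illustration of the “write-in” idea, the sketch below maps input-region firing rates to a predicted output pattern through a patient-specific weight matrix, which would then serve as the stimulation template. The actual system is far more sophisticated; the matrix and rates here are invented:

```python
# Hypothetical 2x3 weight matrix mapping three input-region firing rates to
# two output-region firing rates; in reality this would be fit per patient.
INPUT_TO_OUTPUT = [
    [0.5, 0.2, 0.1],
    [0.0, 0.4, 0.6],
]

def predicted_pattern(input_rates):
    """Matrix-vector product: the predicted output firing-rate pattern,
    used here as a stand-in for the stimulation template."""
    return [sum(w * r for w, r in zip(row, input_rates))
            for row in INPUT_TO_OUTPUT]

rates = predicted_pattern([10.0, 5.0, 2.0])
print([round(r, 2) for r in rates])  # prints [6.2, 3.2]
```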




Smartphone-derived cognitive function biomarkers


Mindstrong Health, led by Paul Dagum and Tom Insel, has completed a study suggesting that passive measures from smartphone use could be a continuous ecological surrogate for laboratory-based neuropsychological assessment.

The smartphone use of 27 subjects, each of whom had received a gold-standard neuropsychological assessment, was analyzed for seven days.

Digital biomarkers with high correlations (p < 10⁻⁴) with working memory, memory, executive function, language, and intelligence were identified.
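The study’s actual features and statistics aren’t reproduced here, but the core computation, correlating a passively collected smartphone measure with a neuropsychological test score, can be sketched as follows (the feature and all numbers are hypothetical):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between a digital biomarker and a test score."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: median inter-keystroke latency (ms) per subject vs.
# each subject's working-memory score from the laboratory assessment.
latency = [210, 245, 198, 310, 270, 225, 290]
memory  = [  9,   8,  10,   5,   6,   9,   5]
print(round(pearson_r(latency, memory), 2))  # prints -0.98
```

Only the correlation coefficient is computed here; the study additionally reports significance at the p < 10⁻⁴ level.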

Click to view Tom Insel discussing digital biomarkers at ApplySci’s Wearable Tech + Digital Health + Neurotech conference at the MIT Media Lab in September, 2017


EEG + embedded sensors anticipate driver actions


José del R. Millán and fellow EPFL and Nissan researchers are using EEG to read a driver’s brain signals and send them to a smart vehicle, anticipating whether the driver will accelerate, brake, or change lanes. Embedded sensors also monitor the vehicle’s environment to help in difficult conditions.

Frontal motor cortex signals are detected using EEG and processed by the vehicle, which customizes its software accordingly, storing regular routes and driving-habit data to anticipate what each driver might do at any time.
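One simple way to picture how EEG evidence and stored habit data could be combined is a Bayesian update of a habit-based prior by an EEG-derived likelihood. This is a hypothetical sketch, not the published algorithm, and the probabilities are invented:

```python
ACTIONS = ["accelerate", "brake", "change_lane"]

def predict(eeg_likelihood, habit_prior):
    """Most probable action: EEG likelihood times habit prior, normalized."""
    posterior = {a: eeg_likelihood[a] * habit_prior[a] for a in ACTIONS}
    total = sum(posterior.values())
    posterior = {a: p / total for a, p in posterior.items()}
    return max(posterior, key=posterior.get), posterior

# Hypothetical numbers: the EEG decoder weakly favors braking, and stored
# habit data says this driver usually brakes at this point on the route.
likelihood = {"accelerate": 0.30, "brake": 0.45, "change_lane": 0.25}
prior      = {"accelerate": 0.20, "brake": 0.60, "change_lane": 0.20}
action, posterior = predict(likelihood, prior)
print(action)  # prints "brake"
```

Combining a noisy decoder with a per-driver prior is one plausible reading of “storing regular routes and driving-habit data to anticipate what each driver might do.”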


Software records, organizes, and analyzes 1 million neurons in real time


Martin Garwicz and Lund University colleagues have developed a novel method for recording, organizing, and analyzing enormous amounts of neurophysiological data from implanted brain-computer interfaces.

The technology simultaneously acquires data from 1 million neurons in real time. It converts spike data and sends it for processing and storage on conventional systems. Feedback is provided to the subject within 25 milliseconds, stimulating up to 100,000 neurons.
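As a toy illustration of the reduction step (hypothetical record format and numbers, not Lund’s actual pipeline), detected spikes can be reduced to compact (neuron_id, timestamp) records and grouped into 25 ms frames that match the feedback latency:

```python
FRAME_MS = 25  # feedback latency target from the article

def frame_spikes(events):
    """Group (neuron_id, timestamp_ms) spike events into 25 ms frames."""
    frames = {}
    for neuron_id, t_ms in events:
        frames.setdefault(int(t_ms // FRAME_MS), []).append((neuron_id, t_ms))
    return frames

# Hypothetical events: (neuron_id, timestamp in ms) after spike detection.
events = [(7, 3.1), (42, 12.9), (7, 26.0), (1001, 49.9), (42, 51.2)]
frames = frame_spikes(events)
print(sorted(frames))  # prints [0, 1, 2]
```

Keeping only spike identities and times, rather than raw waveforms, is what makes streaming a million channels to conventional systems plausible.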

This has implications for basic research, clinical diagnosis, and the treatment of brain disease. The method is built for implantable, bidirectional brain-computer interfaces, which communicate complex data between neurons and computers; applications include monitoring the brains of paralyzed patients, early detection of epileptic seizures, and real-time feedback to control robotic prostheses.
