Category Archives: BCI

Bob Knight on decoding language from direct brain recordings | ApplySci @ Stanford


Berkeley’s Bob Knight discussed (and demonstrated) decoding language from direct brain recordings at ApplySci’s recent Wearable Tech + Digital Health + Neurotech Silicon Valley at Stanford:


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 24, 2018 at the MIT Media Lab. Speakers include: Rudy Tanzi – Mary Lou Jepsen – George Church – Roz Picard – Nathan Intrator – Keith Johnson – Juan Enriquez – John Mattison – Roozbeh Ghaffari – Poppy Crum – Phillip Alvelda

Join ApplySci at the 10th Wearable Tech + Digital Health + Neurotech Silicon Valley conference on February 21-22, 2019 at Stanford University.

Nathan Intrator on epilepsy, AI, and digital signal processing | ApplySci @ Stanford


Nathan Intrator discussed epilepsy, AI and digital signal processing at ApplySci’s Wearable Tech + Digital Health + Neurotech Silicon Valley conference on February 26-27, 2018 at Stanford University:


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 24, 2018 at the MIT Media Lab. Speakers include: Mary Lou Jepsen – George Church – Roz Picard – Nathan Intrator – Keith Johnson – Juan Enriquez – John Mattison – Roozbeh Ghaffari – Poppy Crum

DARPA’s Justin Sanchez on driving and reshaping biotechnology | ApplySci @ Stanford


DARPA Biological Technologies Office Director Dr. Justin Sanchez discussed driving and reshaping biotechnology. Recorded at ApplySci’s Wearable Tech + Digital Health + Neurotech Silicon Valley conference on February 26-27, 2018 at Stanford University.


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 25, 2018 at the MIT Media Lab

TMS + VR for sensory, motor skill recovery after stroke


EPFL’s Michela Bassolino has combined transcranial magnetic stimulation with VR to create hand sensations.

Stimulating the motor cortex activated subjects’ hand muscles and induced brief involuntary movements.

In a recent study, when subjects observed a virtual hand moving at the same time and in a similar manner to their own hand during TMS, they perceived the virtual hand as a controllable body part.

Twenty-five of 32 participants experienced the effect within two minutes of stimulation. Bassolino believes the effect may also be achieved with less immersive video.

The technology could help patients recover sensory and motor skills after a stroke, and could also be used as a gaming enhancement.


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 25, 2018 at the MIT Media Lab

Bone-conduction headset for voice-free communication


MIT’s Arnav Kapur has created AlterEgo, a device that senses and interprets the neuromuscular signals generated when we subvocalize. It rests on the ear and extends across the jaw, with one pad beneath the lower lip and another below the chin, picking up signals from jaw and facial tissue that are undetectable to other people.

Two bone-conduction headphones convey responses through the bones of the face to the inner ear, and four electrodes detect neuromuscular signals. Algorithms determine what the wearer is subvocalizing and can report silently back. This enables communication without speaking.

In studies, participants interacted with a computer to solve problems; one asked a computer for the time and received an accurate response; another played a game of chess with a colleague.

Click to view MIT Media Lab video


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 25, 2018 at the MIT Media Lab

Software records, organizes, analyzes 1 million neurons in real-time


Martin Garwicz and Lund University colleagues have developed a novel method for recording, organizing, and analyzing the enormous amounts of neurophysiological data generated by implanted brain-computer interfaces.

The technology simultaneously acquires data from 1 million neurons in real time. It converts spike data and forwards it for processing and storage on conventional systems. Feedback is returned to the subject within 25 milliseconds, stimulating up to 100,000 neurons.
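The key to handling that volume is reducing raw voltage traces to compact spike events before anything is forwarded downstream. The sketch below is an invented toy, not the Lund system: it detects upward threshold crossings per channel and emits (channel, sample-index) events, the kind of compression that makes a million-neuron stream storable on conventional hardware.

```python
# Illustrative sketch (not the Lund method): reduce raw multichannel
# samples to compact spike events before forwarding them for storage.
# The threshold, channel count, and event format are invented.

def detect_spikes(samples, threshold=0.5):
    """samples: {channel: [voltage, ...]} -> list of (channel, index)
    events where the trace crosses the threshold upward."""
    events = []
    for channel, trace in samples.items():
        prev = 0.0
        for i, v in enumerate(trace):
            if prev < threshold <= v:   # upward threshold crossing
                events.append((channel, i))
            prev = v
    return events

samples = {
    0: [0.1, 0.2, 0.9, 0.3, 0.8],   # two crossings: indices 2 and 4
    1: [0.0, 0.1, 0.2, 0.1, 0.0],   # no spikes
}
print(detect_spikes(samples))  # prints [(0, 2), (0, 4)]
```

Real spike detection uses adaptive thresholds and waveform sorting, but the payoff is the same: a handful of events per channel instead of thousands of raw samples.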

This has implications for basic research, clinical diagnosis, and brain disease treatment. The method is built for implantable, bidirectional brain-computer interfaces, which communicate complex data between neurons and computers. Applications include monitoring the brains of paralyzed patients, early detection of epileptic seizures, and real-time feedback to control robotic prostheses.


Announcing ApplySci’s 9th Wearable Tech + Digital Health + Neurotech conference — September 25, 2018 at the MIT Media Lab

Lightweight, highly portable, brain-controlled exoskeleton


EPFL’s José Millán has developed a brain-controlled, highly portable exoskeleton that can be quickly secured around the joints with Velcro. Metal cables act as soft tendons on the back of each finger, leaving the palm free to feel sensations. The motors that push and pull the cables are worn on the chest. Fingers flex when the cables are pushed and extend when they are pulled.

The control interface can be eye-movement monitoring, phone-based voice control, residual muscular activity, or EEG-based brainwave analysis. Hand motions induced by the device elicited brain patterns typical of healthy hand motions. Exoskeleton-induced hand motions combined with the brain interface produced distinctive neural patterns that could facilitate control of the device. Contralateral brain activity was observed in people who passively received hand motion from the exoskeleton, while subjects asked to control the exoskeleton with their thoughts consistently showed same-side patterns.
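An EEG-driven interface of the kind listed above typically maps a band-power feature to a discrete motor command. The sketch below is a minimal invented example, not EPFL’s controller: it reads a drop in signal power (as happens during motor imagery) as intent to move and issues a flex or extend command for the cable motors; the threshold and two-command protocol are assumptions.

```python
# Illustrative sketch (not EPFL's controller): map an EEG power feature
# to a flex/extend command for the cable motors. Threshold and command
# protocol are invented for illustration.

def band_power(signal):
    """Mean squared amplitude of an EEG window (a crude power estimate)."""
    return sum(x * x for x in signal) / len(signal)

def command_from_eeg(window, threshold=0.25):
    """Motor imagery suppresses sensorimotor rhythm power, so a drop
    below the threshold is read as intent to move: push cables (flex).
    Otherwise pull cables (extend, i.e. rest)."""
    return "flex" if band_power(window) < threshold else "extend"

rest_window = [0.8, -0.7, 0.9, -0.8]      # high power -> extend
imagery_window = [0.1, -0.2, 0.1, -0.1]   # suppressed power -> flex
print(command_from_eeg(rest_window), command_from_eeg(imagery_window))
# prints: extend flex
```

A deployed system would band-pass filter the EEG first and calibrate the threshold per user, but the decision step — continuous feature in, discrete actuator command out — looks like this.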

Click to view EPFL video


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include: Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian – Peter Fischer – Tony Chahine – Shahin Farshchi – Ambar Bhattacharyya – Adam D’Augelli – Juan-Pablo Mas – Michael Eggleston – Walter Greenleaf – Jacobo Penide – David Sarno

Registration rates increase on January 26th