Non-invasive glucose monitoring patch


Richard Guy and University of Bath colleagues have created a non-invasive adhesive patch that measures glucose levels through the skin, without a finger-prick blood test.

The patch draws glucose from fluid between cells across hair follicles, accessed individually via an array of miniature sensors using a small electric current. The glucose collects in tiny reservoirs and is measured. Readings can be taken every 10 to 15 minutes over several hours. Calibration with a blood sample is not required.

The goal is the development of a low-cost, wearable sensor that sends regular, clinically relevant glucose measurements to one’s phone or watch, with alerts when action is required.
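As a sketch of how such a wearable's firmware might turn periodic readings into phone alerts (the thresholds and 15-minute interval below are illustrative assumptions, not from the study):

```python
# Hypothetical sketch: turning periodic patch readings into alerts.
# Thresholds are illustrative, not clinical guidance.
HYPO_MMOL_L = 4.0
HYPER_MMOL_L = 10.0

def classify_reading(glucose_mmol_l: float) -> str:
    """Classify a single interstitial glucose reading."""
    if glucose_mmol_l < HYPO_MMOL_L:
        return "low"
    if glucose_mmol_l > HYPER_MMOL_L:
        return "high"
    return "ok"

def alerts(readings):
    """Return (minute_offset, status) for out-of-range readings,
    assuming one reading every 15 minutes."""
    return [(i * 15, classify_reading(g))
            for i, g in enumerate(readings)
            if classify_reading(g) != "ok"]
```

In practice the alert logic would run on the phone or watch, with the patch itself only streaming raw reservoir measurements.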


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference – September 25, 2018 at the MIT Media Lab

 

Prosthetic system uses one’s own patterns to encode, recall memory


Robert Hampson, and Wake Forest and USC colleagues, have developed a prosthetic system that uses a person’s own memory patterns to facilitate the brain’s ability to encode and recall memory.

In a recent study, participants’ short-term memory performance improved 35 to 37 percent over baseline measurements. The study focused on improving episodic memory, the type of memory most commonly lost in people with Alzheimer’s disease, stroke, and head injury.

According to Hampson: “This is the first time scientists have been able to identify a patient’s own brain cell code or pattern for memory and, in essence, ‘write in’ that code to make existing memory work better, an important first step in potentially restoring memory loss.”



 

 

Smartphone-derived cognitive function biomarkers


Mindstrong Health, led by Paul Dagum and Tom Insel, has completed a study suggesting that passive measures from smartphone use could be a continuous ecological surrogate for laboratory-based neuropsychological assessment.

The smartphone use of 27 subjects, each of whom had received a gold-standard neuropsychological assessment, was analyzed for 7 days.

Digital biomarkers that correlated strongly (p < 10⁻⁴) with working memory, memory, executive function, language, and intelligence were identified.
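The reported correlations are of this kind: a feature computed from phone interactions is correlated against a neuropsychological test score. A minimal pure-Python sketch of the underlying statistic (the study's actual features and methods are not shown here):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between a smartphone-derived feature (xs)
    and a neuropsychological test score (ys). Pure-Python sketch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)
```

A p-value this small (p < 10⁻⁴) means the observed correlation would be extremely unlikely if feature and score were unrelated; with only 27 subjects, that requires a large correlation coefficient.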

Click to view Tom Insel discussing digital biomarkers at ApplySci’s Wearable Tech + Digital Health + Neurotech conference at the MIT Media Lab in September 2017



EEG + embedded sensors anticipate driver actions


José del R. Millán and EPFL and Nissan researchers are using EEG to read a driver’s brain signals and send them to a smart vehicle, to anticipate whether the driver will accelerate, brake, or change lanes. Embedded sensors also monitor the vehicle’s environment to help in difficult conditions.

Frontal motor cortex signals are detected using EEG and processed by the vehicle, which customizes its software accordingly, storing regular routes and driving-habit data to anticipate what each driver might do at any time.
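As an illustration of the decoding step (not the actual EPFL/Nissan pipeline), classifying driver intent from EEG feature vectors can be sketched as a nearest-centroid classifier; all class names and feature values below are hypothetical:

```python
# Illustrative nearest-centroid classifier over EEG feature vectors.
# Real systems use far richer decoding of movement-related potentials.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(examples):
    """examples: dict mapping action name -> list of feature vectors."""
    return {action: centroid(vs) for action, vs in examples.items()}

def predict(model, features):
    """Return the action whose centroid is closest to the features."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda action: dist2(model[action], features))
```

The per-driver customization described above would correspond to retraining the centroids on each driver's own recorded signals and routes.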



Software records, organizes, analyzes 1 million neurons in real time


Martin Garwicz and Lund University colleagues have developed a novel method for recording, organizing, and analyzing enormous amounts of neurophysiological data from implanted brain-computer interfaces.

The technology simultaneously acquires data from 1 million neurons in real time. It converts spike data and sends it for processing and storage on conventional systems. Feedback is provided to the subject within 25 milliseconds, stimulating up to 100,000 neurons.

This has implications for basic research, clinical diagnosis, and brain disease treatment. The system is built for implantable, bidirectional brain-computer interfaces, used to communicate complex data between neurons and computers: monitoring the brains of paralyzed patients, detecting epileptic seizures early, and providing real-time feedback to control robotic prostheses.
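A common way to make million-neuron streams tractable, and plausibly what "converts spike data" refers to, is to reduce raw waveforms to discrete spike events and bin them per feedback window. The sketch below assumes (neuron_id, timestamp) events and uses the reported 25 ms window:

```python
# Sketch: compress spike events into per-neuron counts over a
# feedback window (25 ms here, matching the reported loop latency).
from collections import Counter

WINDOW_MS = 25

def bin_spikes(events, window_ms=WINDOW_MS):
    """events: iterable of (neuron_id, time_ms) spike events.
    Returns {window_index: Counter mapping neuron_id -> spike count}."""
    bins = {}
    for neuron, t_ms in events:
        w = int(t_ms // window_ms)
        bins.setdefault(w, Counter())[neuron] += 1
    return bins
```

Binned counts are orders of magnitude smaller than raw waveforms, which is what makes storage and processing on conventional systems feasible at this channel count.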



Heart attack, stroke predicted via retinal images


Google’s Lily Peng has developed an algorithm that can predict heart attacks and strokes by analyzing images of the retina.

The system also shows which eye areas lead to successful predictions, which can provide insight into the causes of cardiovascular disease.

The dataset consisted of 48,101 patients from the UK Biobank database and 236,234 patients from the EyePACS database. Testing on 12,026 and 999 patients, respectively, showed a high level of accuracy:

-The algorithm distinguished the retinal image of a smoker from that of a non-smoker 71 percent of the time, compared to ~50 percent human accuracy.

-While doctors can typically distinguish between the retinal images of patients with severe high blood pressure and those of normal patients, the algorithm predicted systolic blood pressure within 11 mmHg on average, across patients with and without high blood pressure.

-The algorithm predicted cardiovascular events directly, and “fairly accurately.” According to Peng: “Given the retinal image of one patient who (up to 5 years) later experienced a major CV event (such as a heart attack) and the image of another patient who did not, our algorithm could pick out the patient who had the CV event 70 percent of the time. This performance approaches the accuracy of other CV risk calculators that require a blood draw to measure cholesterol.”
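"Picking out the patient who had the CV event 70 percent of the time" over positive/negative pairs is exactly the pairwise-concordance reading of ROC AUC, which can be computed directly from model scores:

```python
def pairwise_discrimination(pos_scores, neg_scores):
    """Fraction of (positive, negative) pairs in which the model scores
    the positive (event) patient higher; ties count half.
    Numerically equivalent to ROC AUC."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

On this reading, the reported 70 percent corresponds to an AUC of roughly 0.70, which is how it can be compared against blood-draw-based risk calculators.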


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include: Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian – Peter Fischer – Tony Chahine – Shahin Farshchi – Ambar Bhattacharyya – Adam D’Augelli – Juan-Pablo Mas – Shreyas Shah – Walter Greenleaf – Jacobo Penide – Ed Boyden

**LAST TICKETS AVAILABLE

Throat-worn sensor-sticker transforms stroke rehab


John Rogers’ latest stretchable electronics breakthrough will transform stroke rehabilitation.

The throat-worn wearable, developed with the Shirley Ryan AbilityLab, measures patients’ swallowing ability and patterns of speech, and aids in aphasia diagnosis.

The Shirley Ryan AbilityLab uses the throat sensor in conjunction with Rogers-developed biosensors on the legs, arms and chest to monitor stroke patients’ recovery progress. Data is sent to clinicians’ phones and computers, providing real-time, quantitative, full-body analysis of patients’ advanced physical and physiological responses.

Click to view Shirley Ryan AbilityLab video

Click to view John Rogers’ talk at ApplySci’s Wearable Tech + Digital Health + Neurotech conference, on September 19, 2017 at the MIT Media Lab



Tissue-paper sensor tracks pulse, finger and eye movement, gait


University of Washington’s Jae-Hyun Chung has developed a disposable wearable sensor made with tissue paper. It can detect a heartbeat, finger force, finger movement, eyeball movement, gait patterns, and other actions.

Tearing the nanocomposite paper breaks its fibers and makes it act as a sensor. It is light, flexible and cheap, and meant to be thrown away after one use.
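The readout electronics are not described in the source; as one illustrative possibility, a broken-fiber resistive sensor could be read through a simple voltage divider (supply voltage and reference resistor values below are hypothetical):

```python
# Hypothetical voltage-divider readout for a resistive paper sensor.
V_SUPPLY = 3.3       # supply voltage, volts (assumed)
R_REF = 10_000.0     # reference resistor, ohms (assumed)

def sensor_resistance(v_out):
    """Infer the sensor resistance from the divider's output voltage,
    with the sensor on the high side:
        v_out = V_SUPPLY * R_REF / (R_sensor + R_REF)."""
    return R_REF * (V_SUPPLY - v_out) / v_out
```

As fibers break and the paper's resistance changes with movement or force, the divider's output voltage shifts, which a cheap ADC can sample continuously.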



3D, real-scale blood-brain barrier model used to study new therapeutics


Gianni Ciofani of IIT Pisa has created a device that reproduces a 1:1 scale model of the blood-brain barrier. The combination of 3D-printed artificial and biological components will allow the study of new therapeutic strategies to overcome the blood-brain barrier and treat brain diseases, including tumors, Alzheimer’s, and multiple sclerosis.

A laser scans through a liquid photopolymer, solidifying the material locally, layer by layer, to build complex 3D objects with submicron resolution. This enabled the researchers to engineer an accurate, real-scale model of the BBB from a photopolymer resin.

Mimicking the brain’s microcapillaries, the model consists of a microfluidic system of 50 parallel cylindrical channels, connected by junctions and featuring pores on the cylinder walls. Each tubular structure has a diameter of 10 μm, with 1 μm pores uniformly distributed across all cylinders.

After fabrication of this complex, scaffold-like polymer structure, endothelial cells were cultivated around the porous microcapillary system. Covering the 3D-printed structure, the cells built a biological barrier, resulting in a biohybrid system that resembles its natural model. The device is a few millimeters in size, and fluids pass through it at the same pressure as blood in brain vessels.
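For a sense of the scales involved, the wall area available to the endothelial layer follows directly from the channel geometry. The channel length below is an assumed placeholder, since the source gives only the channel count, diameter, and pore size:

```python
from math import pi

# Geometry sketch of the printed BBB model: 50 cylindrical channels,
# 10 µm in diameter. Channel length is hypothetical, for illustration.
N_CHANNELS = 50
DIAMETER_UM = 10.0
LENGTH_UM = 1000.0  # assumed: 1 mm per channel

def lateral_area_um2(n=N_CHANNELS, d=DIAMETER_UM, length=LENGTH_UM):
    """Total inner wall area (in µm²) available to the endothelial
    cell layer, treating each channel as an open cylinder."""
    return n * pi * d * length
```

Under this assumed length, the 50 channels offer on the order of 1.5 mm² of barrier surface inside a device only a few millimeters across, which is what makes the model useful for transport studies.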

