AI on a chip for voice, image recognition

Horizon Robotics, led by Yu Kai, Baidu’s former head of deep learning, is developing AI chips and software to mimic how the human brain solves abstract tasks, such as voice and image recognition. The company believes that this will provide more consistent and reliable services than cloud-based systems.

The goal is to enable fast, intelligent responses to user commands, without an internet connection, to control appliances, cars, and other objects. Health applications are a logical next step, although not yet discussed.


Wearable Tech + Digital Health San Francisco – April 5, 2016 @ the Mission Bay Conference Center

NeuroTech San Francisco – April 6, 2016 @ the Mission Bay Conference Center

Wearable Tech + Digital Health NYC – June 7, 2016 @ the New York Academy of Sciences

NeuroTech NYC – June 8, 2016 @ the New York Academy of Sciences

 

BCI-controlled wheelchair

Miguel Nicolelis has developed a brain-computer interface that allows monkeys to steer a robotic wheelchair with their thoughts. The study is meant to demonstrate the potential for humans to do the same.

Signals from hundreds of neurons, recorded simultaneously in two brain regions, were translated into real-time control of the wheelchair.
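
Nicolelis’s decoder itself is far more sophisticated, but as a rough sketch of the idea, binned firing rates from the recorded neurons can be mapped to wheelchair velocity commands with a simple linear decoder; the weights, neuron count, and bin size below are purely hypothetical.

    import numpy as np

    def decode_velocity(firing_rates, W, b):
        # Map a vector of binned firing rates (one value per recorded neuron)
        # to a wheelchair command [forward_velocity, turn_rate]. W and b would
        # be fit offline by regressing recorded rates against intended movement.
        return W @ firing_rates + b

    # Hypothetical example: 300 recorded neurons, placeholder weights.
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(2, 300))
    b = np.zeros(2)
    rates = rng.poisson(lam=10, size=300)        # spike counts in one time bin
    print(decode_velocity(rates, W, b))          # [forward_velocity, turn_rate]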

Nicolelis said: “In some severely disabled people, even blinking is not possible. For them, using a wheelchair or device controlled by noninvasive measures like an EEG may not be sufficient. We show clearly that if you have intracranial implants, you get better control of a wheelchair than with noninvasive devices.”

ApplySci looks forward to the day when non-invasive methods will allow similar brain-driven functioning for the disabled.



Machine learning for faster stroke diagnosis

MedyMatch uses big data and artificial intelligence to improve stroke diagnosis, with the goal of faster treatment.

Patient CT images are scanned and immediately compared with hundreds of thousands of other patient results. Almost any deviation from a normal CT is quickly detected.
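
MedyMatch has not published its algorithm; a minimal sketch of the general idea (flagging voxels in a new, co-registered scan that deviate strongly from a reference distribution built from many normal CTs) might look like the following, with all volumes and thresholds hypothetical.

    import numpy as np

    def deviation_map(scan, normal_mean, normal_std, z_threshold=3.0):
        # Flag voxels whose intensity deviates strongly from a reference
        # distribution built from many normal, co-registered CT volumes.
        z = (scan - normal_mean) / (normal_std + 1e-6)
        return np.abs(z) > z_threshold

    # Hypothetical 64x64x64 volumes standing in for registered CT data.
    rng = np.random.default_rng(1)
    normal_scans = rng.normal(40, 5, size=(100, 64, 64, 64))   # reference cohort
    new_scan = rng.normal(40, 5, size=(64, 64, 64))
    new_scan[30:34, 30:34, 30:34] += 60                        # simulated hyperdense lesion

    mask = deviation_map(new_scan, normal_scans.mean(axis=0), normal_scans.std(axis=0))
    print("suspicious voxels:", int(mask.sum()))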

With current methods, medical imaging errors can occur when emergency room radiologists miss subtle aspects of brain scans, leading to delayed treatment. Fast detection of stroke can prevent paralysis and death.

The company claims that it can detect irregularities more accurately than a human can. Findings are presented as 3D brain images, enabling a doctor to make better informed decisions. The cloud-based system allows scans to be uploaded from any location.



Sleep app uses wearable sensors, cloud analytics

The American Sleep Apnea Association, Apple, and IBM have begun a study of the impact of sleep quality on daily activity level, alertness, productivity, health, and medical conditions. iPhone and Apple Watch sensors and the ResearchKit framework collect data from healthy and unhealthy sleepers, which is sent to the Watson Health Cloud.

The SleepHealth app uses the watch’s heart rate monitor to detect sleep, and gathers movement data with its accelerometer and gyroscope. The app includes a “personal sleep concierge” and nap tracker, meant to help users develop better sleeping habits.
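
The app’s detection logic is not public; a toy sketch of the underlying idea (flag sleep when heart rate drops below the user’s resting rate and wrist movement stays near zero over a sustained window) is shown below, with the thresholds and window length chosen arbitrarily.

    def is_asleep(heart_rates, motion_counts, resting_hr,
                  hr_factor=0.9, motion_limit=5):
        # True if every minute in the window shows heart rate below 90% of the
        # user's resting rate and near-zero wrist motion. heart_rates is in
        # beats/min per minute; motion_counts is accelerometer activity per
        # minute. All thresholds are illustrative, not SleepHealth's values.
        return all(hr < hr_factor * resting_hr and m < motion_limit
                   for hr, m in zip(heart_rates, motion_counts))

    # Hypothetical 10-minute window of watch readings.
    hr_window = [52, 51, 50, 52, 53, 51, 50, 49, 50, 51]
    motion_window = [0, 1, 0, 0, 2, 0, 1, 0, 0, 0]
    print(is_asleep(hr_window, motion_window, resting_hr=60))   # True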

Data is stored and analyzed on the Watson Health Cloud, allowing researchers to see common patterns. The long-term goal is to develop effective interventions.



Self-adjusting lenses adapt to user needs

DeepOptics is developing vision-enhancing wearable lenses, with sensors that gauge viewing distance and precisely adjust the lenses to bring an object into focus.

Voltage is applied to three-layered liquid crystal lenses, changing their refractive index to provide the specific optical compensation needed to correct vision in each situation.
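
DeepOptics has not detailed its control algorithm, but the underlying optics are straightforward: the extra lens power needed to focus at a finite distance is roughly the reciprocal of that distance in meters, minus whatever accommodation the eye still provides. A hypothetical sketch:

    def added_lens_power(viewing_distance_m, residual_accommodation_d=0.0):
        # Dioptres the adaptive lens should add so an object at the given
        # distance comes into focus: 1/distance gives the demand in dioptres,
        # and any remaining accommodation of the eye is subtracted. Values are
        # illustrative, not DeepOptics calibration data.
        demand = 1.0 / viewing_distance_m
        return max(0.0, demand - residual_accommodation_d)

    print(added_lens_power(0.4))   # reading at 40 cm -> 2.5 D
    print(added_lens_power(2.0))   # object at 2 m    -> 0.5 D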

The company also believes that its technology can offer VR/AR devices the ability to deliver better experiences.




DeepMind Health identifies complication risks

Google has announced DeepMind Health, which creates non-AI-based apps to identify patients’ complication risk. AI is expected to be integrated in the future. Acute kidney injury is the group’s initial focus, with testing by the UK National Health Service and the Royal Free Hospital London.

The initial app, Streams, quickly alerts hospital staff to critical patient information. One of Streams’ designers, Chris Laing, said that “using Streams meant I was able to review blood tests for patients at risk of AKI within seconds of them becoming available. I intervened earlier and was able to improve the care of over half the patients Streams identified in our pilot studies.”
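
Streams’ internals have not been published; NHS acute kidney injury e-alerts are, however, built on comparing a new serum creatinine result with the patient’s baseline, and a hedged sketch of that kind of rule is shown below (the staging ratios follow the common KDIGO-style thresholds, but the code is illustrative, not the Streams implementation).

    def aki_alert(current_creatinine, baseline_creatinine):
        # Return an AKI alert stage (0 = no alert) from the ratio of the new
        # serum creatinine result to the patient's baseline. The ratios follow
        # common KDIGO-style staging; illustrative only, not Streams itself.
        ratio = current_creatinine / baseline_creatinine
        if ratio >= 3.0:
            return 3
        if ratio >= 2.0:
            return 2
        if ratio >= 1.5:
            return 1
        return 0

    # Hypothetical results in mg/dL.
    print(aki_alert(2.4, 0.8))   # 3x baseline -> stage 3 alert
    print(aki_alert(0.9, 0.8))   # within normal variation -> stage 0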

The company plans to integrate patient treatment prioritization features based on the Hark clinical management system.



Stethoscope software analyzes lung sounds

Hiroshima University and Fukushima Medical University researchers have created software and an electronic stethoscope to classify lung sounds into five common diagnostic categories.

Currently, doctors listening to heart and lung sounds on a stethoscope need to overcome background noise and recognize multiple irregularities. The system will be able to “hear” what a doctor might miss, and automatically identify multiple lung problems.

Recorded lung sounds of 878 patients were classified by respiratory physicians. The diagnoses were turned into templates, to create a mathematical formula that evaluates the length, frequency, and intensity of lung sounds. Software analysis of these sound patterns during patient exams then enables respiratory diagnoses.
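
The study’s actual formula is not reproduced here; a hedged sketch of the general approach (extract duration, dominant frequency, and intensity from a recording, then match against per-category templates) follows, with the template values invented purely for illustration.

    import numpy as np

    def extract_features(signal, sample_rate):
        # Length (s), dominant frequency (Hz), and RMS intensity of a lung
        # sound recording; a simplified stand-in for the study's features.
        length = len(signal) / sample_rate
        spectrum = np.abs(np.fft.rfft(signal))
        dominant_hz = np.fft.rfftfreq(len(signal), 1 / sample_rate)[spectrum.argmax()]
        intensity = float(np.sqrt(np.mean(signal ** 2)))
        return np.array([length, dominant_hz, intensity])

    def classify(features, templates):
        # Pick the diagnostic category whose template is closest.
        return min(templates, key=lambda name: np.linalg.norm(features - templates[name]))

    # Hypothetical templates for five categories (values invented).
    templates = {
        "normal":         np.array([1.0, 150.0, 0.05]),
        "wheeze":         np.array([1.2, 400.0, 0.08]),
        "rhonchus":       np.array([1.1, 180.0, 0.10]),
        "fine crackle":   np.array([0.8, 650.0, 0.06]),
        "coarse crackle": np.array([0.9, 350.0, 0.09]),
    }

    rng = np.random.default_rng(2)
    recording = np.sin(2 * np.pi * 400 * np.linspace(0, 1.2, 4800)) * 0.1 + rng.normal(0, 0.01, 4800)
    print(classify(extract_features(recording, 4000), templates))   # likely "wheeze"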



Machine learning analysis of doctor notes predicts cancer progression

Gunnar Rätsch and Memorial Sloan Kettering colleagues are using AI to find similarities between cancer cases. Rätsch’s algorithm has analyzed 100 million sentences taken from the clinical notes of about 200,000 cancer patients to predict disease progression.

In a recent study, machine learning was used to classify patient symptoms, medical histories, and doctors’ observations into 10,000 clusters. Each cluster represented a common observation in medical records, including recommended treatments and typical symptoms. Connections between clusters were mapped to show their interrelationships. In another study, algorithms were used to find hidden associations between written notes and patients’ gene and blood sequencing data.
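
The Sloan Kettering pipeline is not described in enough detail to reproduce; as a minimal sketch of the general technique, note sentences can be vectorized and grouped into clusters of similar observations, here using TF-IDF and k-means as stand-ins and far fewer sentences and clusters than the real study.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    # Hypothetical sentences standing in for clinical-note text.
    sentences = [
        "patient reports persistent cough and fatigue",
        "started cisplatin-based chemotherapy this week",
        "CT shows reduction in tumor size",
        "complains of ongoing fatigue and poor appetite",
        "chemotherapy cycle two administered without complication",
        "imaging demonstrates stable disease",
    ]

    # Vectorize sentences and group them into clusters of similar observations.
    vectors = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

    for label, sentence in sorted(zip(labels, sentences)):
        print(label, sentence)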



Mind-controlled prosthetic fingers

Johns Hopkins researchers have developed a proof-of-concept for a prosthetic arm with fingers that, for the first time, can be controlled with a wearer’s thoughts.

The technology was tested on an epileptic patient who was not missing any limbs. The researchers used brain-mapping technology to bypass control of his arms and hands. (The patient was already scheduled for a brain-mapping procedure.) Brain electrical activity was measured for each finger.

This was an invasive procedure, which required implanting an array of 128 electrode sensors, on a sheet of film, in the part of the brain that controls hand and arm movement. Each sensor measured a circle of brain tissue 1 millimeter in diameter.

After compiling the motor and sensory data, the arm was programmed to allow the patient to move individual fingers based on which part of his brain was active.
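
The Johns Hopkins decoding pipeline is not reproduced here; as a hedged sketch, identifying which finger to move can be framed as multi-class classification over per-electrode activity features, with the 128-channel data below randomly generated purely for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    n_trials, n_channels = 200, 128          # 128-electrode array, hypothetical trials

    # Simulated per-channel activity; each finger activates a different small
    # group of electrodes (a stand-in for the mapped cortical sites).
    fingers = rng.integers(0, 5, size=n_trials)          # 0 = thumb ... 4 = pinkie
    X = rng.normal(size=(n_trials, n_channels))
    for trial, finger in enumerate(fingers):
        X[trial, finger * 5 : finger * 5 + 5] += 2.0     # finger-specific channels

    # Train on the first 150 trials, test on the rest.
    clf = LogisticRegression(max_iter=1000).fit(X[:150], fingers[:150])
    print("accuracy:", clf.score(X[150:], fingers[150:]))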

The team said that the prosthetic was initially 76 percent accurate, and when they combined the signals for the ring and pinkie fingers, accuracy increased to 88 percent.




First human optogenetics vision trial

Retina Foundation of the Southwest scientists, in a study sponsored by Retrosense Therapeutics, will for the first time use optogenetics (a combination of gene therapy and light to control nerve cells) in an attempt to restore human sight. Previously, optogenetic therapies had been tested only in mice and monkeys.

Viruses carrying DNA from light-sensitive algae will be injected into the eye’s ganglion cells, which transmit signals from the retina to the brain, in an attempt to make them directly responsive to light. Fifteen legally blind patients will participate in the study, which was first reported by MIT Technology Review.

