Genetic disease patients identified via machine learning

Stanford’s Nigam Shah and Joshua Knowles are using machine learning to search for people with familial hypercholesterolemia, a genetic disorder that causes high levels of LDL cholesterol in the blood.

Only about 10 percent of people with the disorder are aware they have it, and it is often diagnosed only after a cardiac event, even though early treatment can dramatically reduce that risk. (Men with the disorder have a 50 percent chance of having a heart attack by age 50; women have a 30 percent chance by age 60.)

Using electronic health records from Stanford's network, the researchers identified 120 people known to have FH, along with a comparison group of people with high LDL who do not have the genetic disorder.

Algorithms trained on these records learned to recognize FH from features such as cholesterol levels, age, and prescribed drugs, and were then used to flag likely undiagnosed FH cases elsewhere in the health record data.
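
A minimal sketch of what such a screen might look like, assuming tabular EHR-derived features (LDL level, age, statin prescription) and an off-the-shelf classifier; the feature set, synthetic data, and model choice below are illustrative, not the Stanford team's actual pipeline.

```python
# Illustrative EHR-based FH screen (synthetic data; not the published method)
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for EHR-derived features: [LDL mg/dL, age, statin prescribed]
n = 1000
X = np.column_stack([
    rng.normal(160, 40, n),     # LDL cholesterol
    rng.integers(20, 80, n),    # age
    rng.integers(0, 2, n),      # statin prescribed (0/1)
])
# Synthetic label: 1 = known FH patient, 0 = high-LDL control
y = ((X[:, 0] > 190) & (rng.random(n) > 0.2)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Flag records whose predicted FH probability is high for clinical review
probs = clf.predict_proba(X_test)[:, 1]
print(f"{(probs > 0.8).sum()} of {len(X_test)} records flagged for FH review")
```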


Machine learning based cancer diagnostics

Search engine giant Yandex is using its advanced machine learning capabilities to detect cancer predisposition.

Yandex Data Factory has partnered with AstraZeneca to develop the RAY platform, which analyzes DNA testing results, generates a report about patient genome mutations, and provides treatment recommendations and side effect information. Testing will begin next month.
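
The article describes a pipeline that turns DNA test results into a mutation report with treatment notes. The RAY platform itself is proprietary, so the sketch below is purely hypothetical: the lookup table, variant names, and report format are invented for illustration.

```python
# Hypothetical variant-report sketch (not the RAY platform's actual design)
KNOWN_VARIANTS = {
    # (gene, variant) -> (clinical note, example treatment consideration)
    ("EGFR", "L858R"): ("activating mutation", "consider EGFR inhibitors"),
    ("TP53", "R175H"): ("loss-of-function mutation", "no targeted therapy"),
}

def report(patient_variants):
    """Build a simple per-patient report from a list of (gene, variant) calls."""
    lines = []
    for gene, variant in patient_variants:
        note, action = KNOWN_VARIANTS.get(
            (gene, variant), ("variant of unknown significance", "review manually"))
        lines.append(f"{gene} {variant}: {note}; {action}")
    return "\n".join(lines)

print(report([("EGFR", "L858R"), ("BRCA2", "K3326*")]))
```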

The two companies have signed a cooperation agreement to launch big data projects in the epidemiology, pathophysiology, diagnostics, and treatment of disease, with a focus on infectious diseases, cancer, endocrinology, cardiology, pulmonology, and psychiatry.

Mobile hyperspectral “tri-corder”

Tel Aviv University's David Mendlovic and Ariel Raz are turning smartphones into hyperspectral sensors capable of identifying the chemical components of objects from a distance.

The technology, being commercialized by Unispectral and Ramot, improves camera resolution and noise filtering, and is compatible with smartphone lenses.

The new lens and software let in much more light than current smartphone camera filter arrays, and the software keeps image resolution sharp as the camera zooms in. Once the camera has acquired an image, the data is sent to a third party, which analyzes the material compounds present and the amount of each component, then sends the results back to the smartphone.

Unispectral is in talks with smartphone makers, automakers, and security organizations to serve as those third-party analyzers. To analyze data from camera images, a partner will need a large database of hyperspectral signatures.
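
One simple way a third-party analyzer could match a measured spectrum against such a database is nearest-neighbor lookup by cosine similarity; the band count, library contents, and similarity measure below are assumptions for illustration, not Unispectral's method.

```python
# Illustrative spectral-signature matching against a reference library
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reference library: material name -> reflectance across N spectral bands
N_BANDS = 16
library = {
    "water":   rng.random(N_BANDS),
    "plastic": rng.random(N_BANDS),
    "leaf":    rng.random(N_BANDS),
}

def identify(signature, library):
    """Return the library material whose spectrum is most similar (cosine similarity)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(library, key=lambda name: cos(signature, library[name]))

measured = library["leaf"] + 0.05 * rng.random(N_BANDS)   # noisy measurement of a leaf
print(identify(measured, library))                        # -> "leaf"
```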

App helps orthopedic surgeons plan procedures

Tel Aviv-based Voyant Health's TraumaCad Mobile app helps orthopedic surgeons plan operations and simulate results. The system offers modules for hip, knee, deformity, pediatric, upper limb, spine, foot and ankle, and trauma surgery. The iPad version of the decade-old system was recently approved by the FDA.

Surgeons can securely import medical images from the cloud or hospital imaging systems to perform measurements, fix prostheses, simulate osteotomies, and visualize fracture reductions. The app overlays prosthesis templates on radiological images and includes tools for performing measurements on the image and positioning the template.  In total hip replacement surgery, it automatically aligns implants and assembles components to calculate leg length discrepancy and offset.
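
As a rough sense of the measurements mentioned above, leg length discrepancy and offset reduce to distances between landmarks on a calibrated radiograph. The landmark names, coordinates, and calibration factor below are illustrative placeholders, not TraumaCad's actual algorithm.

```python
# Back-of-the-envelope radiographic measurements from pixel-space landmarks
def leg_length_discrepancy(left_landmark_y, right_landmark_y, mm_per_pixel):
    """Vertical difference between comparable left/right landmarks, in mm."""
    return (left_landmark_y - right_landmark_y) * mm_per_pixel

def offset(femoral_head_x, femoral_axis_x, mm_per_pixel):
    """Horizontal distance from femoral head center to the femoral shaft axis, in mm."""
    return abs(femoral_head_x - femoral_axis_x) * mm_per_pixel

# Example: landmarks picked in pixel coordinates, 0.2 mm-per-pixel calibration
print(leg_length_discrepancy(1520, 1498, 0.2))  # 4.4 mm discrepancy
print(offset(812, 1015, 0.2))                   # 40.6 mm offset
```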

Physiological and mathematical models simulate body systems

Another CES standout was LifeQ, a company that combines physiological and bio-mathematical modeling to provide health decision data.

LifeQ Lens is a multi-wavelength optical sensor that can be integrated into wearable devices. It monitors key metrics, with what the company claims is laboratory-level accuracy, using a proprietary algorithm. Raw data is translated through bio-mathematical models, called LifeQ Core, into digital simulations of body systems. LifeQ Link is an open-access platform through which partners can use the technology.
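
A minimal sketch of the kind of first step such a model builds on: estimating heart rate from raw optical (PPG) samples by counting pulse peaks. The waveform is synthetic and the peak-counting approach is an assumption for illustration, not LifeQ's proprietary algorithm.

```python
# Toy PPG heart-rate estimate from a synthetic optical signal
import numpy as np
from scipy.signal import find_peaks

fs = 50.0                              # sample rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)           # 30 s of samples
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)  # ~72 bpm + noise

# Count pulse peaks, requiring at least 0.4 s between beats (max 150 bpm)
peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), height=0.5)
bpm = 60 * len(peaks) / (t[-1] - t[0])
print(f"Estimated heart rate: {bpm:.0f} bpm")
```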

The system can be used by athletes, by individuals monitoring nutrition, stress, and sleep, and by doctors seeking data to inform diagnoses and manage chronic conditions.

The company also foresees its data supporting population-level health analysis for research purposes. It hopes to monitor clinical trials to help create safer medicines and more effective treatments.

Portable, lens free, on chip microscope for 3-D imaging

UCLA professor Aydogan Ozcan has developed a lens-free microscope for high throughput 3-D tissue imaging to detect cancer or other cell level abnormalities.

A laser or light-emitting diodes illuminate a tissue or blood sample on a slide inserted into the device. A sensor array on a microchip captures and records the pattern of shadows created by the sample. The patterns are processed as a series of holograms, forming 3-D images of the specimen, and an algorithm color-codes the reconstructed images, making contrasts in the samples more apparent than they would be in the raw holograms and making abnormalities easier to detect.
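
The core numerical step in lens-free holography of this kind is propagating the recorded shadow pattern back to the sample plane, commonly done with the angular-spectrum method sketched below. The wavelength, pixel pitch, and propagation distance are illustrative, and this omits the phase-recovery and color-coding steps the article mentions.

```python
# Angular-spectrum back-propagation of a recorded hologram (illustrative parameters)
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field a distance z (meters) via the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * kz * z) * (arg > 0)     # suppress evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

hologram = np.random.rand(256, 256)          # stand-in for a recorded shadow pattern
field = np.sqrt(hologram).astype(complex)    # amplitude-only initial guess
reconstruction = angular_spectrum_propagate(field, 530e-9, 1.12e-6, -1e-3)
print(np.abs(reconstruction).shape)
```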

This could lead to cheaper and more portable technology for examining tissue, blood and other biomedical specimens. It will benefit patients in remote areas and in cases where large numbers of samples need to be examined quickly.

Speech app detects bipolar mood swings early

PRIORI is an Android app that monitors subtle voice changes to detect bipolar mood swings. It was developed by Zahi Karam, Emily Mower Provost, and Melvin McInnis at the University of Michigan. The hope is to anticipate mood swings before they happen and intervene. PRIORI was inspired by the families of bipolar patients, who were often the first to notice an imminent mood swing during conversations.

Doctors routinely look for speech characteristics to assess mood in bipolar patients. Those heading toward a manic episode may speak louder or faster than usual and may jump from topic to topic. A recent study showed that depressed patients have longer pauses in their speech, and that the pauses often shorten as patients are treated with antidepressants. Another study showed differences in pitch and jitter across mood states among bipolar patients. PRIORI identifies these signals and notifies the patient or doctor.

The app monitors voice patterns during calls the patient makes and during weekly conversations with a member of the care team. Characteristics of the sounds and silences in each conversation are analyzed; only the patient's side of calls is recorded. The recordings are encrypted and not available to the research team, which sees only the results of the analysis, stored on secure servers to protect privacy. Standardized weekly mood assessments with a clinician provide a mood benchmark used to correlate the acoustic features of speech with a patient's mood state.
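
As a toy illustration of one acoustic feature mentioned above, total pause time can be computed from frame-level energy with a simple threshold. PRIORI's actual feature set and thresholds are not public; the frame length, threshold, and synthetic audio below are placeholders.

```python
# Toy pause-time feature from a mono audio signal (illustrative only)
import numpy as np

def pause_time(samples, fs, frame_ms=25, silence_rms=0.01):
    """Return seconds of 'silence' in the signal, judged by per-frame RMS energy."""
    frame_len = int(fs * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return (rms < silence_rms).sum() * frame_ms / 1000.0

fs = 16000
speech = 0.2 * np.random.randn(fs * 2)       # 2 s of "speech"
silence = 0.001 * np.random.randn(fs)        # 1 s of near-silence
print(pause_time(np.concatenate([speech, silence]), fs))  # ~1.0 s of pauses
```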

As other conditions also cause voice changes, the same technology is being tested for schizophrenia, PTSD and Parkinson’s patients.

Face video scan detects heart disease

According to University of Rochester professor Jean-Philippe Couderc, cardiac disease can be identified and diagnosed using contactless video monitoring of the face.

A recent study describes technology and an algorithm that scan the face and detect skin color changes imperceptible to the naked eye.

Sensors in digital cameras record red, green, and blue. Hemoglobin absorbs more of the green spectrum of light, and that absorption can be detected by the camera's sensor. The face is the ideal place to detect this phenomenon because the skin there is thinner and blood vessels sit closer to the surface. In the study, participants were connected to an electrocardiogram so results from the facial scan could be compared to the heart's electrical activity; color changes detected by video monitoring corresponded with each individual's heart rate as measured on the ECG. The irregular electrical activity found in people with atrial fibrillation could be identified by observing the pulses of blood through the facial vessels as the blood absorbed or reflected green light with each heartbeat.
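
A bare-bones sketch of the underlying signal extraction: average the green channel over a facial region in each frame, then read the pulse rate off the dominant frequency. The frame data here is synthetic, and the study's actual algorithm, including its atrial fibrillation detection, is considerably more involved.

```python
# Green-channel pulse extraction from synthetic per-frame face-region means
import numpy as np

fps = 30.0
n_frames = 300                               # 10 s of video
t = np.arange(n_frames) / fps
# Synthetic mean green value over a face region: ~72 bpm pulse plus noise
green_means = 120 + 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.2 * np.random.randn(n_frames)

spectrum = np.abs(np.fft.rfft(green_means - green_means.mean()))
freqs = np.fft.rfftfreq(n_frames, d=1 / fps)
band = (freqs >= 0.7) & (freqs <= 3.0)       # plausible heart-rate band
print(f"Estimated pulse: {60 * freqs[band][np.argmax(spectrum[band])]:.0f} bpm")
```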

The video monitoring technique, called videoplethysmography, had an error rate of 20 percent, comparable to the 17 to 29 percent error rate associated with automated ECG measurements.

Context sensitive robot understands casual language

Cornell professor Ashutosh Saxena is developing a context-sensitive robot that can understand natural-language commands, from different speakers, in colloquial English. The goal is to help robots fill in missing information when receiving instructions and adapt to their environment.

Tell Me Dave is equipped with a 3D camera for viewing its surroundings. Machine learning has enabled it to respond to entire commands with flexibly defined actions. Its computer "brain" has been fed video simulations of actions accompanied by voice commands from speakers with different dialects and accents. The robot matches instructions to a range of potential actions and determines the right one using the context of other words in the command and details of the environment.
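
A toy illustration of that grounding step: score candidate action sequences against both the words in the command and what is actually present in the environment. The action library and scoring rule are invented for illustration; Tell Me Dave's learned models are far richer.

```python
# Toy command-to-plan grounding using word overlap plus an environment check
CANDIDATE_PLANS = {
    "microwave ramen": ["fill pot with water", "place pot in microwave", "press start"],
    "stovetop ramen":  ["fill pot with water", "place pot on stove", "turn on burner"],
}

def choose_plan(command, environment):
    """Pick the plan whose name best matches the command and is feasible in the scene."""
    words = set(command.lower().split())
    def score(name):
        feasible = all(obj in environment for obj in ("pot", name.split()[0]))
        return len(words & set(name.split())) + (1 if feasible else -10)
    return max(CANDIDATE_PLANS, key=score)

env = {"pot", "microwave", "noodles"}        # no stove present, so the microwave plan wins
print(choose_plan("make me some ramen", env))
```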

At this time, Tell Me Dave can correctly follow human voice instructions 64 percent of the time.

MD Anderson uses IBM’s Watson supercomputer to accelerate cancer fighting knowledge

http://www-03.ibm.com/press/us/en/pressrelease/42214.wss

Houston’s MD Anderson Cancer Center is feeding IBM’s Watson “cognitive computer” case histories on more than 1 million leukemia patients, along with information about the disease, research and treatment options. Hospital staff and doctors hope it will help guide care and reduce the death rate. They also hope the supercomputer will be able to spot trends missed by researchers, possibly leading to suggestions for new targets for cancer drugs.

The collaboration with IBM is part of MD Anderson’s Moon Shots Program designed to use innovative approaches to fight eight deadly cancers.