Category Archives: AI

Heart attack, stroke, predicted via retinal images


Google’s Lily Peng has developed an algorithm that can predict heart attacks and strokes by analyzing images of the retina.

The system also shows which eye areas lead to successful predictions, which can provide insight into the causes of cardiovascular disease.

The dataset consisted of 48,101 patients from the UK Biobank database and 236,234 patients from the EyePACS database. Held-out validation sets of 12,026 and 999 patients showed a high level of accuracy:

- The algorithm distinguished the retinal images of a smoker from those of a non-smoker 71 percent of the time, compared with roughly 50 percent accuracy for human graders.

- While doctors can typically distinguish the retinal images of patients with severe high blood pressure from those of normal patients, Google AI's algorithm predicted systolic blood pressure to within 11 mmHg on average across all patients, with or without high blood pressure.

- According to the company, the algorithm predicted cardiovascular events directly "fairly accurately," stating that "given the retinal image of one patient who (up to 5 years) later experienced a major CV event (such as a heart attack) and the image of another patient who did not, our algorithm could pick out the patient who had the CV event 70% of the time. This performance approaches the accuracy of other CV risk calculators that require a blood draw to measure cholesterol." (The pairwise comparison behind this figure is sketched below.)
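
The 70 percent figure describes pairwise discrimination: given one patient who later had a cardiovascular event and one who did not, how often does the model's risk score rank the event patient higher? A minimal sketch of that calculation, using invented scores and labels rather than Google's data or code:

```python
import numpy as np

# Hypothetical risk scores from a model, with ground-truth event labels.
scores = np.array([0.82, 0.55, 0.40, 0.12, 0.60, 0.65])
had_event = np.array([1, 0, 1, 0, 0, 1], dtype=bool)

event_scores = scores[had_event]
no_event_scores = scores[~had_event]

# For every (event, non-event) pair, check whether the event patient scored higher.
# Ties count as half; this pairwise accuracy is the same quantity as the AUC / c-statistic.
diffs = event_scores[:, None] - no_event_scores[None, :]
pairwise_accuracy = (np.sum(diffs > 0) + 0.5 * np.sum(diffs == 0)) / diffs.size

print(f"Pairwise discrimination: {pairwise_accuracy:.2f}")  # a value of 0.70 would match the reported result
```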



Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include: Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian – Peter Fischer – Tony Chahine – Shahin Farshchi – Ambar Bhattacharyya – Adam D’Augelli – Juan-Pablo Mas – Shreyas Shah – Walter Greenleaf – Jacobo Penide – Ed Boyden

LAST TICKETS AVAILABLE

EEG + AI assists drivers in manual and autonomous cars


Nissan’s Brain-to-Vehicle (B2V) technology will enable vehicles to interpret signals from a driver’s brain.

The company describes two aspects of the system — prediction and detection, which depend on a driver wearing EEG electrodes:

Prediction: By detecting, from brain signals, that the driver is about to move (for example, turning the steering wheel or pressing the accelerator pedal), B2V can begin the action more quickly; a toy sketch of this idea appears below.

Detection: When driver discomfort is detected, and the car is in autonomous mode, AI tools change the driving configuration or style.

Lucian Gheorghe, an innovation researcher at Nissan, said that the system can use AR to adjust what the driver sees, and can turn the wheel or slow the car 0.2 to 0.5 seconds faster than the driver.
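
Nissan has not published implementation details, but the prediction feature amounts to spotting motion-preparation activity in the EEG before the movement itself, then starting the assist earlier. A toy sketch of that pattern, with an invented threshold detector over simulated EEG, purely to illustrate the idea (not Nissan's B2V code):

```python
import numpy as np

SAMPLE_RATE_HZ = 250   # assumed EEG sampling rate
WINDOW_SEC = 0.5       # sliding window length
THRESHOLD_UV = -4.0    # invented threshold for a readiness-potential-like negative drift

def motion_preparation_detected(eeg_window_uv: np.ndarray) -> bool:
    """Return True if the averaged motor-cortex channel drifts below threshold.

    A real system would use trained classifiers over many channels; this is only
    a placeholder illustrating 'detect before movement, act earlier'.
    """
    return float(eeg_window_uv.mean()) < THRESHOLD_UV

def assist_loop(eeg_stream):
    window = int(SAMPLE_RATE_HZ * WINDOW_SEC)
    buffer = np.zeros(window)
    for i, sample in enumerate(eeg_stream):
        buffer[i % window] = sample
        if i >= window and motion_preparation_detected(buffer):
            print("Pre-movement activity detected: start steering/braking assist early")
            break

# Simulated stream: flat noise, then a slow negative drift as if the driver prepares to move.
rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0, 1, 500), rng.normal(-6, 1, 250)])
assist_loop(stream)
```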


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include:  Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian –  Peter Fischer – Tony Chahine – Shahin Farshchi – Ambar Bhattacharyya – Adam D’Augelli – Juan-Pablo Mas – Michael Eggleston – Walter Greenleaf – Jacobo Penide – David Sarno

Registration rates increase today – January 5th.

Robots visualize actions and plan without human instruction


Sergey Levine and UC Berkeley colleagues have developed robotic learning technology that enables robots to visualize how different behaviors will affect the world around them, without human instruction. This ability to plan across various scenarios could improve self-driving cars and robotic home assistants.

Visual foresight allows robots to predict what their cameras will see if they perform a particular sequence of movements. The robot can then learn to perform tasks without human help or prior knowledge of physics, its environment, or what the objects are.

The deep learning technology is based on dynamic neural advection (DNA). These models predict how pixels in an image will move from one frame to the next, based on the robot's actions. This has enabled robotic control based on video prediction to perform increasingly complex tasks.
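
At a high level, planning with visual foresight means sampling candidate action sequences, rolling each through the video-prediction model, and executing the sequence whose predicted frames land closest to a goal image. A schematic sketch of that loop, with a stand-in prediction model and a simple random-sampling planner (the published system uses more refined sampling and cost functions; this is not the Berkeley code):

```python
import numpy as np

def predict_frames(model, current_frame, actions):
    """Stand-in for a learned video-prediction model: returns predicted future frames."""
    return model(current_frame, actions)

def plan_actions(model, current_frame, goal_frame, horizon=5, n_samples=100, rng=None):
    """Pick the action sequence whose predicted final frame is closest to the goal image."""
    rng = np.random.default_rng() if rng is None else rng
    best_cost, best_actions = np.inf, None
    for _ in range(n_samples):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, 2))   # e.g. planar pushing actions
        frames = predict_frames(model, current_frame, actions)
        cost = np.mean((frames[-1] - goal_frame) ** 2)        # pixel distance to the goal image
        if cost < best_cost:
            best_cost, best_actions = cost, actions
    return best_actions
```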

Click to view UC Berkeley video


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include:  Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian –  Peter Fischer – Tony Chahine – Shahin Farshchi – Ambar Bhattacharyya – Adam D’Augelli – Juan-Pablo Mas – Michael Eggleston – Walter Greenleaf

AI detects pneumonia from chest X-rays


Andrew Ng and Stanford colleagues used AI to detect pneumonia from X-rays with accuracy similar to trained radiologists. The CheXNet model analyzed 112,120 frontal-view X-ray images of 30,805 unique patients released by the NIH (ChestX-ray14).

Deep learning algorithms also detected 14 diseases, including fibrosis, hernias, and cell masses, with fewer false positives and negatives than the NIH benchmark research.
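
CheXNet is described by its authors as a 121-layer DenseNet trained for multi-label classification on ChestX-ray14. A minimal PyTorch sketch of that kind of setup (training details simplified; this is not the authors' code):

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_FINDINGS = 14  # the 14 thoracic pathologies labeled in ChestX-ray14

# DenseNet-121 backbone with its classifier replaced by a 14-way head.
model = models.densenet121(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, NUM_FINDINGS)

criterion = nn.BCEWithLogitsLoss()  # multi-label: each finding is an independent binary task
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """images: (N, 3, 224, 224) tensor; labels: (N, 14) float tensor of 0/1 findings."""
    optimizer.zero_grad()
    logits = model(images)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```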


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include:  Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian –  Peter Fischer – Tony Chahine – Shahin Farshchi – Ambar Bhattacharyya – Adam D’Augelli

Registration rates increase today, November 17th, 2017

AI detects bowel cancer in less than 1 second in small study


Yuichi Mori and Showa University colleagues have used AI to identify bowel cancer by analyzing colonoscopy-derived polyp images in less than a second.

The system compares a magnified view of a colorectal polyp with 30,000 endocytoscopic images. The researchers claimed 86% accuracy, based on a study of 300 polyps.
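
The post does not spell out how the comparison against the reference library works; one common way to classify by "comparing against a library of images" is nearest-neighbor search over precomputed feature vectors. A hypothetical sketch along those lines (the feature files and neighbor count are assumptions, not the Showa system):

```python
import numpy as np

# Hypothetical reference library: one feature vector per endocytoscopic image,
# labeled 1 (neoplastic) or 0 (non-neoplastic).
library_features = np.load("library_features.npy")  # shape (30000, D) - assumed file
library_labels = np.load("library_labels.npy")      # shape (30000,)   - assumed file

def classify_polyp(query_features: np.ndarray, k: int = 15) -> float:
    """Return the fraction of the k nearest library images labeled neoplastic."""
    dists = np.linalg.norm(library_features - query_features, axis=1)
    nearest = np.argsort(dists)[:k]
    return float(library_labels[nearest].mean())

# A score near 1.0 suggests the polyp resembles known cancerous examples.
```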

While the technology undergoes further testing, Mori said that the team will focus on creating a system that can automatically detect polyps.

Click to view Endoscopy Thieme video


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include:  Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda

Machine learning improves breast cancer detection


MIT’s Regina Barzilay has used AI to improve breast cancer detection and diagnosis. Machine learning tools predict if a high-risk lesion identified on needle biopsy after a mammogram will upgrade to cancer at surgery, potentially eliminating unnecessary procedures.

In current practice, when a mammogram detects a suspicious lesion, a needle biopsy is performed to determine if it is cancer. Approximately 70 percent of the lesions are benign, 20 percent are malignant, and 10 percent are high-risk.

Using a method known as a “random-forest classifier,” the AI model resulted in 30 percent fewer surgeries compared with the strategy of always doing surgery, while diagnosing more cancerous lesions (97 percent vs. 79 percent) than the strategy of operating only on traditional “high-risk lesions.”

Trained on information about 600 high-risk lesions, the technology looks for data patterns that include demographics, family history, past biopsies, and pathology reports.
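
Since the post names the method, a random-forest classifier over tabular patient features, a minimal scikit-learn sketch is straightforward; the file name and feature columns below are placeholders, not MGH's dataset:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder dataset: one row per high-risk lesion, with label
# 'upgraded' = 1 if the lesion proved cancerous at surgery.
data = pd.read_csv("high_risk_lesions.csv")  # assumed file
features = ["age", "family_history", "prior_biopsies", "lesion_size_mm"]  # assumed columns

X_train, X_test, y_train, y_test = train_test_split(
    data[features], data["upgraded"], test_size=0.2, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# The predicted probability of upgrade can be thresholded to decide
# which patients can safely skip surgery.
upgrade_prob = clf.predict_proba(X_test)[:, 1]
```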

MGH radiologists will begin incorporating the method into their clinical practice over the next year.


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University, featuring: Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld

Detecting dementia with automated speech analysis


WinterLight Labs is developing speech-analysis algorithms to detect and monitor dementia and aphasia. A one-minute speech sample is used to measure the lexical diversity, syntactic complexity, semantic content, and articulation changes associated with these conditions.
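
WinterLight has not published its feature pipeline here; as one illustration of the kind of feature involved, lexical diversity is often measured as a type-token ratio over a transcript. A small sketch of that single feature (not WinterLight's algorithm):

```python
import re

def type_token_ratio(transcript: str) -> float:
    """Lexical diversity: distinct words divided by total words in a speech transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    return len(set(words)) / len(words) if words else 0.0

sample = "I went to the the store and I I bought some some bread"
print(f"Type-token ratio: {type_token_ratio(sample):.2f}")  # lower values suggest more repetitive speech
```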

Clinicians currently conduct similar tests by interviewing patients and writing their impressions on paper.

The company believes that their automated system could inform clinical trials, medical care, and speech training.

If the platform could be used with mobile phones, the potential for widespread early detection is obvious.  Unfortunately, detection, even early detection, does not at this point translate into a cure.  ApplySci looks forward to the day when advanced neurodegenerative disease monitoring will be used to track progress toward healthy brain functioning.


Join ApplySci at Wearable Tech + Digital Health + NeuroTech Boston on September 19, 2017 at the MIT Media Lab – featuring  Joi Ito – Ed Boyden – Roz Picard – George Church – Nathan Intrator –  Tom Insel – John Rogers – Jamshid Ghajar – Riccardo Sabatini – Phillip Alvelda – Michael Weintraub – Nancy Brown – Steve Kraus – Bill Geary – Mary Lou Jepsen


ANNOUNCING WEARABLE TECH + DIGITAL HEALTH + NEUROTECH SILICON VALLEY – FEBRUARY 26 -27, 2018 @ STANFORD UNIVERSITY –  FEATURING:  ZHENAN BAO – JUSTIN SANCHEZ – BRYAN JOHNSON – NATHAN INTRATOR – VINOD KHOSLA

AI driven, music-triggered brain state therapy for pain, sleep, stress, gait


The Sync Project has developed a novel, music-based, non-pharmaceutical approach to treating pain, sleep problems, stress, and Parkinson’s gait issues.

Recent studies showed that Parkinson’s patients improved their gait when listening to a song with the right beat pattern, and that post-surgery patients used one-third the amount of self-administered morphine after listening to an hour of music.

Lifestyle applications include Unwind, an app that detects one’s heartbeat and responds with relaxing music (customized by machine learning tools) to aid sleep, and the Sync Music Bot, which uses Spotify to deliver daily music to enhance work, relaxation, and exercise.
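
The post does not describe how Unwind maps heartbeat to music; one plausible, purely illustrative approach is to pick tracks whose tempo starts just below the listener's current heart rate and then gradually slows. A toy sketch of that idea, not Sync Project's method:

```python
def target_tempo_bpm(heart_rate_bpm: float, step: int) -> float:
    """Start just below the current heart rate and ease the tempo down over successive tracks."""
    start = heart_rate_bpm - 5          # begin slightly slower than the heartbeat
    return max(50.0, start - 2 * step)  # lower by 2 BPM per track, floor at 50 BPM

for step in range(5):
    print(f"track {step + 1}: ~{target_tempo_bpm(72, step):.0f} BPM")
```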

With further clinical validation, this non-invasive therapy could replace drugs for better, targeted, personalized interventions.


Join ApplySci at Wearable Tech + Digital Health + NeuroTech Boston on September 19, 2017 at the MIT Media Lab – featuring  Joi Ito – Ed Boyden – Roz Picard – George Church – Nathan Intrator –  Tom Insel – John Rogers – Jamshid Ghajar – Phillip Alvelda – Michael Weintraub – Nancy Brown – Steve Kraus – Bill Geary – Mary Lou Jepsen – Daniela Rus

Registration rates increase Friday, July 14th