Category Archives: AI

Tony Chahine on human presence, reimagined | ApplySci @ Stanford


Myant’s Tony Chahine reimagined human presence at ApplySci’s recent Wearable Tech + Digital Health + Neurotech conference at Stanford:


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 24, 2018 at the MIT Media Lab.  Speakers include: Rudy Tanzi – Mary Lou Jepsen – George Church – Roz Picard – Nathan Intrator – Keith Johnson – Juan Enriquez – John Mattison – Roozbeh Ghaffari – Poppy Crum – Phillip Alvelda – Marom Bikson

REGISTRATION RATES INCREASE JUNE 29TH

Phillip Alvelda: More intelligent; less artificial | ApplySci @ Stanford


Phillip Alvelda discussed AI and the brain at ApplySci’s recent Wearable Tech + Digital Health + Neurotech Silicon Valley conference at Stanford:


Dr. Alvelda will join us again at Wearable Tech + Digital Health + Neurotech Boston, on September 24, 2018 at the MIT Media Lab.  Other speakers include: Rudy Tanzi – Mary Lou Jepsen – George Church – Roz Picard – Nathan Intrator – Keith Johnson – Juan Enriquez – John Mattison – Roozbeh Ghaffari – Poppy Crum – Marom Bikson

REGISTRATION RATES INCREASE JUNE 22nd

AI CT analysis speeds stroke identification, treatment


Viz.ai’s algorithms analyze brain scans and immediately transfer data to ensure rapid stroke treatment. The system connects to a hospital CT and sends alerts when a suspected large vessel occlusion (LVO) stroke has been identified, with radiological images sent to a doctor’s phone. The company claims a median time from image to notification of under 6 minutes, versus a claimed 66 minutes for the standard stroke workflow, a difference that can be life-saving. Patient transfer to interventional centers is initiated through messaging and call capabilities connected with emergency and transportation services.
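A minimal sketch of how such a detect-and-alert pipeline could be wired together is below. The class and function names, the detector stub, and the threshold are illustrative assumptions, not Viz.ai’s actual software:

```python
# Hypothetical sketch of a CT-to-notification triage pipeline.
# Names, the detector stub, and the threshold are illustrative only,
# not Viz.ai's actual API or model.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional


@dataclass
class CTStudy:
    patient_id: str
    acquired_at: datetime
    image_uri: str  # where the radiological images are stored


def detect_lvo_probability(study: CTStudy) -> float:
    """Placeholder for a trained large-vessel-occlusion (LVO) detector."""
    return 0.93  # a real system would run a model over the CT volume


def notify_on_call_physician(study: CTStudy, probability: float) -> datetime:
    """Push an alert with a link to the images to the physician's phone."""
    print(f"ALERT: suspected LVO (p={probability:.2f}) "
          f"for patient {study.patient_id}: {study.image_uri}")
    return datetime.utcnow()


def triage(study: CTStudy, threshold: float = 0.9) -> Optional[timedelta]:
    """Return time from image acquisition to notification, if an alert fires."""
    p = detect_lvo_probability(study)
    if p < threshold:
        return None
    notified_at = notify_on_call_physician(study, p)
    return notified_at - study.acquired_at
```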


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 24, 2018 at the MIT Media Lab

DARPA’s Justin Sanchez on driving and reshaping biotechnology | ApplySci @ Stanford


DARPA Biological Technologies Office Director Dr. Justin Sanchez spoke on driving and reshaping biotechnology.  Recorded at ApplySci’s Wearable Tech + Digital Health + Neurotech Silicon Valley conference on February 26-27, 2018 at Stanford University.


Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 24, 2018 at the MIT Media Lab

Heart attack, stroke predicted via retinal images


Google’s Lily Peng has developed an algorithm that can predict heart attacks and strokes by analyzing images of the retina.

The system also shows which eye areas lead to successful predictions, which can provide insight into the causes of cardiovascular disease.

The dataset consisted of 48,101 patients from the UK Biobank database and 236,234 patients from the EyePACS database.  Validation on two separate sets of 12,026 and 999 patients showed a high level of accuracy:

-The algorithm distinguished the retinal images of a smoker from those of a non-smoker 71 percent of the time, compared to ~50 percent accuracy for human graders.

-While doctors can typically distinguish between the retinal images of patients with severe high blood pressure and those of normal patients, Google AI’s algorithm predicts systolic blood pressure to within 11 mmHg on average, for patients both with and without high blood pressure.

-According to the company, the algorithm also predicted cardiovascular events directly and “fairly accurately.”

According to Peng: “Given the retinal image of one patient who (up to 5 years) later experienced a major CV event (such as a heart attack) and the image of another patient who did not, our algorithm could pick out the patient who had the CV event 70 percent of the time. This performance approaches the accuracy of other CV risk calculators that require a blood draw to measure cholesterol.”
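The 70 percent figure describes a pairwise comparison: given one patient who later had an event and one who did not, how often the model’s risk score ranks the event patient higher. A minimal sketch of that metric (equivalent to the c-statistic, or ROC AUC), with made-up scores:

```python
# Pairwise discrimination metric, as quoted above: for each (event, non-event)
# pair of patients, does the model score the event patient higher?
# Equivalent to the c-statistic / ROC AUC. Scores below are illustrative only.
from itertools import product


def pairwise_discrimination(event_scores, non_event_scores):
    """Fraction of (event, non-event) pairs ranked correctly; ties count 0.5."""
    wins = 0.0
    pairs = 0
    for e, n in product(event_scores, non_event_scores):
        pairs += 1
        if e > n:
            wins += 1
        elif e == n:
            wins += 0.5
    return wins / pairs


# Hypothetical risk scores for patients with and without a later CV event.
had_event = [0.81, 0.64, 0.72, 0.55]
no_event = [0.40, 0.58, 0.33, 0.61, 0.47]
print(f"pairwise accuracy ≈ {pairwise_discrimination(had_event, no_event):.2f}")
```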


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include: Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian – Peter Fischer – Tony Chahine – Shahin Farshchi – Ambar Bhattacharyya – Adam D’Augelli – Juan-Pablo Mas – Shreyas Shah – Walter Greenleaf – Jacobo Penide – Ed Boyden

LAST TICKETS AVAILABLE

EEG + AI assists drivers in manual and autonomous cars


Nissan’s Brain-to-Vehicle (B2V) technology will enable vehicles to interpret signals from a driver’s brain.

The company describes two aspects of the system — prediction and detection, which depend on a driver wearing EEG electrodes:

Prediction: By detecting brain signals indicating that the driver is about to move, such as turning the steering wheel or pressing the accelerator pedal, B2V can begin the action more quickly.

Detection: When driver discomfort is detected, and the car is in autonomous mode, AI tools change the driving configuration or style.

Lucian Gheorghe, an innovation researcher at Nissan, said that the system can use AR to adjust what the driver sees, and can turn the wheel or slow the car 0.2 to 0.5 seconds faster than the driver.
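A minimal sketch of the prediction idea, assuming short labeled EEG epochs and an off-the-shelf classifier; the channel count, window length, and synthetic data are illustrative assumptions, not Nissan’s B2V implementation:

```python
# Hypothetical sketch of EEG-based movement prediction: classify short EEG
# epochs as "motor preparation" vs. "rest" so the vehicle can begin an action
# slightly before the driver does. The data is synthetic and the setup is
# illustrative only, not Nissan's B2V system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 200, 8, 128  # 8 electrodes, ~0.5 s windows

# Synthetic epochs: "preparation" epochs get a small added drift to mimic a
# readiness-potential-like signal preceding movement.
X = rng.normal(size=(n_epochs, n_channels, n_samples))
y = rng.integers(0, 2, size=n_epochs)  # 1 = movement about to happen
X[y == 1] += np.linspace(0.0, 0.5, n_samples)

# Flatten each epoch into a feature vector and fit a simple linear classifier.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X[:150].reshape(150, -1), y[:150])
print("held-out accuracy:", clf.score(X[150:].reshape(50, -1), y[150:]))
```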


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include:  Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian –  Peter Fischer – Tony Chahine – Shahin Farshchi – Ambar Bhattacharyya – Adam D’Augelli – Juan-Pablo Mas – Michael Eggleston – Walter Greenleaf – Jacobo Penide – David Sarno

Registration rates increase today – January 5th.

Robots visualize actions and plan without human instruction


Sergey Levine and UC Berkeley colleagues have developed robotic learning technology that enables robots to visualize how different behaviors will affect the world around them, without human instruction.  This ability to plan across scenarios could improve self-driving cars and robotic home assistants.

Visual foresight allows robots to predict what their cameras will see if they perform a particular sequence of movements. The robot can then learn to perform tasks without human help  or prior knowledge of physics, its environment or what the objects are.

The deep learning technology is based on dynamic neural advection models, which predict how pixels in an image will move from one frame to the next based on the robot’s actions. Robotic control based on this kind of video prediction has enabled increasingly complex tasks.
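A toy sketch of the planning loop this enables: predict the image that would result from each candidate action, then pick the action whose prediction is closest to a goal image. The stand-in “model” below simply shifts pixels, far simpler than the learned dynamic neural advection networks used at Berkeley:

```python
# Toy sketch of visual-foresight-style planning: evaluate candidate actions by
# predicting the resulting image and choosing the one closest to the goal.
# The "predictive model" here just translates pixels; the real approach uses
# learned networks that predict per-pixel motion conditioned on actions.
import numpy as np


def predict_next_frame(frame: np.ndarray, action: tuple) -> np.ndarray:
    """Stand-in predictive model: translate the image by (dy, dx)."""
    return np.roll(frame, shift=action, axis=(0, 1))


def plan_one_step(frame, goal, candidate_actions):
    """Pick the action whose predicted outcome best matches the goal image."""
    errors = [np.mean((predict_next_frame(frame, a) - goal) ** 2)
              for a in candidate_actions]
    return candidate_actions[int(np.argmin(errors))]


# Example: a bright blob should move one pixel to the right.
frame = np.zeros((8, 8)); frame[4, 3] = 1.0
goal = np.zeros((8, 8)); goal[4, 4] = 1.0
actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
print("chosen action:", plan_one_step(frame, goal, actions))  # -> (0, 1)
```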

Click to view UC Berkeley video


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include:  Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian –  Peter Fischer – Tony Chahine – Shahin Farshchi – Ambar Bhattacharyya – Adam D’Augelli – Juan-Pablo Mas – Michael Eggleston – Walter Greenleaf

AI detects pneumonia from chest X-rays


Andrew Ng and Stanford colleagues used AI to detect pneumonia from chest X-rays with accuracy similar to trained radiologists.  The CheXNet model analyzed 112,200 frontal-view X-ray images of 30,805 unique patients released by the NIH (ChestX-ray14).

The deep learning algorithm also detected 14 diseases, including fibrosis, hernias, and cell masses, with fewer false positives and negatives than NIH benchmark research.
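A minimal sketch of a multi-label chest X-ray classifier in this spirit, using a DenseNet-121 backbone with a 14-way sigmoid head (one output per ChestX-ray14 finding); the training details shown are illustrative assumptions, not the exact published CheXNet setup:

```python
# Minimal sketch of a multi-label chest X-ray classifier in the spirit of
# CheXNet: a DenseNet-121 backbone with a 14-way sigmoid head. Weights,
# augmentation, and class balancing are omitted; this is not the exact
# published training configuration.
import torch
import torch.nn as nn
from torchvision import models

NUM_FINDINGS = 14  # the 14 ChestX-ray14 disease labels

model = models.densenet121(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, NUM_FINDINGS)

criterion = nn.BCEWithLogitsLoss()            # multi-label: sigmoid per finding
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of frontal-view images.
images = torch.randn(4, 3, 224, 224)          # batch of 4 X-rays, resized to 224x224
labels = torch.randint(0, 2, (4, NUM_FINDINGS)).float()

logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```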


Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include:  Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian –  Peter Fischer – Tony Chahine – Shahin Farshchi – Ambar Bhattacharyya – Adam D’Augelli

Registration rates increase today, November 17th, 2017