Category Archives: Computer Vision

Diabetic retinopathy-detecting algorithm for remote diagnosis

Google has developed an algorithm that it claims can detect diabetic retinopathy in retinal photographs. The goal is to improve the quality and availability of screening for, and early detection of, this common and debilitating condition.

Typically, highly trained specialists must examine the photographs to detect the lesions that indicate bleeding and fluid leakage in the eye. This makes screening difficult in poor and remote locations.

Google developed a dataset of 128,000 images, each evaluated by 3 to 7 specially trained doctors, and used it to train a deep neural network to detect referable diabetic retinopathy. Performance was tested on two clinical validation sets totaling roughly 12,000 images, with the majority decision of a panel of 7 or 8 ophthalmologists serving as the reference standard. The results showed that the accuracy of the Google algorithm was equal to that of the physicians.
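As a rough illustration of this kind of training setup (not Google’s actual implementation, though the published study used an Inception architecture), here is a minimal PyTorch sketch of fine-tuning a binary “referable DR” classifier; the fundus_train/ folder layout and the hyperparameters are hypothetical placeholders:

# NOT Google's code: a generic fine-tuning loop for a binary
# "referable DR" classifier over graded fundus photographs.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((299, 299)),   # Inception v3 input size
    transforms.ToTensor(),
])
# Expects fundus_train/referable/ and fundus_train/nonreferable/ subfolders
train_set = datasets.ImageFolder("fundus_train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.inception_v3(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)                      # referable vs. not
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 2)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    opt.zero_grad()
    logits, aux = model(images)        # train mode returns main + aux logits
    loss = loss_fn(logits, labels) + 0.4 * loss_fn(aux, labels)
    loss.backward()
    opt.step()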


ApplySci’s 6th Wearable Tech + Digital Health + NeuroTech Silicon Valley – February 7-8, 2017 @ Stanford | Featuring: Vinod Khosla – Tom Insel – Zhenan Bao – Phillip Alvelda – Nathan Intrator – John Rogers – Roozbeh Ghaffari – Tarun Wadhwa – Eythor Bender – Unity Stoakes – Mounir Zok – Krishna Shenoy – Karl Deisseroth – Shahin Farshchi – Casper de Clercq – Mary Lou Jepsen – Vivek Wadhwa – Dirk Schapeler – Miguel Nicolelis

“Augmented attention” wearable assists the visually impaired

OrCam is a disruptive artificial vision company that creates assistive devices for the visually impaired. It is led by Hebrew University professor Amnon Shashua.

MyMe, its latest product, uses artificial intelligence to respond to audio and visual information in real time. A clip-on camera and Bluetooth earpiece create what the company calls an “augmented attention” experience, meant to enrich interactions.

The device is aware of the wearer’s daily activities, including the people they meet, conversation topics, visual surroundings, the food they eat, and the activities they take part in. Its visual and audio processing serves as an extension of the wearer’s awareness. A built-in fitness tracker will also be included.

More details will be available after MyMe is unveiled at CES next week.


Wearable Tech + Digital Health San Francisco – April 5, 2016 @ the Mission Bay Conference Center

NeuroTech San Francisco – April 6, 2016 @ the Mission Bay Conference Center

2nd Annual Wearable Tech + Digital Health NYC – June 7, 2016 @ the New York Academy of Sciences

NeuroTech NYC – June 8, 2016 @ the New York Academy of Sciences

Smart walker monitors gait, assesses falling probability

Footprints by Quanticare is a walker that continuously collects passive and contextual gait data, with the goal of predicting and preventing senior falls. Its computer vision algorithm captures spatiotemporal gait metrics of the user and sends the data to a healthcare provider.

The company claims that the walker can measure an osteoarthritic limp to improve physical therapy protocols, and that it can gauge MS progression by measuring variation between steps.
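Quanticare has not published its algorithm, but a minimal sketch of how spatiotemporal gait metrics can be derived from tracked heel positions might look like this; the synthetic signal, the frame rate, and the strike-detection rule are all assumptions:

import numpy as np

fps = 30.0                                       # assumed camera frame rate
t = np.arange(0, 10, 1 / fps)                    # 10 s of walking
heel_y = np.sin(2 * np.pi * 0.9 * t)             # synthetic heel-height signal

# Treat local minima of heel height as heel strikes of the tracked foot
strikes = np.where((heel_y[1:-1] < heel_y[:-2]) &
                   (heel_y[1:-1] < heel_y[2:]))[0] + 1

stride_times = np.diff(strikes) / fps            # same-foot strike intervals, s
cadence = 60.0 / stride_times.mean()             # strides per minute
cv = stride_times.std() / stride_times.mean()    # stride-time variability

print(f"cadence {cadence:.1f} strides/min, stride-time CV {cv:.2%}")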

Wearable Tech + Digital Health San Francisco – April 5, 2016 @ the Mission Bay Conference Center

NeuroTech San Francisco – April 6, 2016 @ the Mission Bay Conference Center

*Preferred registration rate ends today – 10/23/15

Deep neural networks for face recognition in darkness

Karlsruhe Institute of Technology researchers have used the heat emitted by a person’s face to enable facial recognition in darkness. Thermal imaging creates an infrared picture, which can then be matched against photographs taken in visible light.

In a recent study, a deep neural network correctly identified faces in the dark 80 percent of the time when it had a large number of reference photos to draw on. The technology is at an early stage, however, and succeeded only 55 percent of the time when presented with a single image.
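The KIT network itself is not public, but the matching step in such cross-modal systems is typically a nearest-neighbor search in a shared embedding space. A minimal sketch, with random vectors standing in for real thermal and visible-light embeddings:

import numpy as np

rng = np.random.default_rng(0)
gallery = rng.normal(size=(100, 256))   # 100 enrolled visible-light face embeddings
probe = rng.normal(size=256)            # one thermal capture, mapped to the same space

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = np.array([cosine(probe, g) for g in gallery])
best = int(scores.argmax())
print(f"best match: gallery identity {best}, similarity {scores[best]:.3f}")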

Mobile hyperspectral “tri-corder”

Tel Aviv University’s David Mendlovic and Ariel Raz are turning smartphones into hyperspectral sensors, capable of identifying chemical components of objects from a distance.

The technology, being commercialized by Unispectral and Ramot, improves camera resolution and noise filtering, and is compatible with smartphone lenses.

The new lens and software admit much more light than current smartphone camera filter arrays, and the software preserves image resolution as the camera zooms in. Once the camera has acquired an image, the data is sent to a third party, which analyzes the material compounds present and the amount of each component, and then sends the information back to the smartphone.

Unispectral is in talks with smartphone makers, automakers, and security organizations that would serve as third-party analyzers. To analyze the data from camera images, a partner will need a large database of hyperspectral signatures.
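As a sketch of what that third-party matching step could look like, here is a comparison of a measured pixel spectrum against a small signature library using the spectral angle, a standard metric in hyperspectral analysis; whether Unispectral’s partners use it, and the signatures themselves, are assumptions:

import numpy as np

bands = 31                                   # assumed number of spectral bands
library = {                                  # made-up reference signatures
    "water":   np.linspace(1.0, 0.2, bands),
    "plastic": np.linspace(0.3, 0.9, bands),
}
pixel = np.linspace(0.95, 0.25, bands)       # measured spectrum for one pixel

def spectral_angle(a, b):
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))   # radians; smaller is closer

best = min(library, key=lambda name: spectral_angle(pixel, library[name]))
print("closest material:", best)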

Wearable Tech + Digital Health NYC 2015 – June 30 @ New York Academy of Sciences.  Register now and save $300.

App helps orthopedic surgeons plan procedures

Tel Aviv-based Voyant Health’s TraumaCad Mobile app helps orthopedic surgeons plan operations and simulate results. The system offers modules for hip, knee, deformity, pediatric, upper limb, spine, foot and ankle, and trauma surgery. The iPad version of this decade-old system was recently approved by the FDA.

Surgeons can securely import medical images from the cloud or hospital imaging systems to perform measurements, fix prostheses, simulate osteotomies, and visualize fracture reductions. The app overlays prosthesis templates on radiological images and includes tools for performing measurements on the image and positioning the template. In total hip replacement surgery, it automatically aligns implants and assembles components to calculate leg length discrepancy and offset.
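TraumaCad’s algorithms are proprietary, but the leg-length-discrepancy measurement can be illustrated with a minimal sketch: on a calibrated radiograph, measure each lesser trochanter’s perpendicular distance to a pelvic reference line and take the difference. The landmark coordinates and calibration factor below are hypothetical:

import numpy as np

px_per_mm = 4.0                                    # from a calibration marker

# Hypothetical landmark pixel coordinates (x, y); y grows downward
left_teardrop    = np.array([310.0, 520.0])
right_teardrop   = np.array([710.0, 524.0])
left_trochanter  = np.array([280.0, 760.0])
right_trochanter = np.array([740.0, 742.0])

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    d = b - a
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)   # unit normal to the line
    return abs((p - a) @ n)

lld_px = (point_line_distance(left_trochanter, left_teardrop, right_teardrop)
          - point_line_distance(right_trochanter, left_teardrop, right_teardrop))
print(f"leg length discrepancy: {lld_px / px_per_mm:+.1f} mm (left minus right)")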

Wearable Tech + Digital Health NYC 2015 – June 30, 2015 @ New York Academy of Sciences.  Register before April 24 and save $300.

Robotic-assisted platform to improve surgical accuracy

Google and Johnson & Johnson’s Ethicon have announced a partnership to develop a robotic-assisted surgical platform. Google’s machine vision and image analysis software will help surgeons see more clearly as they operate.

During an operation, surgeons rely on several screens for information such as medical images, test results, or guidance on handling atypical conditions. Google’s software could show this data on one screen, overlaying it on the interface that surgeons use to control the robots and delivering information when it is needed. The software could also highlight structures in the body that are difficult to view on a screen, such as blood vessels or nerves.
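A minimal sketch of the highlighting idea, assuming a precomputed vessel segmentation mask and simple alpha blending (how Google’s software actually composites its display has not been disclosed):

import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.float32)   # stand-in for a video frame
vessel_mask = np.zeros((480, 640), dtype=bool)      # stand-in segmentation output
vessel_mask[200:220, 300:340] = True                # pretend vessel region

highlight = np.array([0.0, 0.8, 1.0], dtype=np.float32)   # cyan tint
alpha = 0.5
# Blend the highlight color into the frame only where the mask is set
frame[vessel_mask] = (1 - alpha) * frame[vessel_mask] + alpha * highlight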

Wearable Tech + Digital Health NYC 2015 – June 30 @ New York Academy of Sciences.  Register now and save $300.

Sensor probe to prevent hospital pressure ulcers

GE and the US Department of Veterans Affairs have developed a multi-sensor probe to detect the earliest signs of pressure ulcer formation.

The device combines computer vision with motion detection, thermal profiling, image classification, 3-D object reconstruction, and vapor detection to identify patients at risk and improve treatment.
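How GE and the VA combine these modalities has not been published; a minimal sketch of one plausible fusion step, a weighted logistic risk score over hypothetical features, might look like this:

import math

features = {
    "hours_since_turn": 3.5,       # from motion detection
    "skin_temp_delta_c": 1.2,      # thermal profiling: hotspot vs. baseline
    "moisture_detected": 1.0,      # vapor detection, 0 or 1
}
weights = {"hours_since_turn": 0.4, "skin_temp_delta_c": 0.9,
           "moisture_detected": 0.7}
bias = -2.5

z = bias + sum(weights[k] * v for k, v in features.items())
risk = 1 / (1 + math.exp(-z))      # squash into a (0, 1) risk score
print(f"pressure-ulcer risk score: {risk:.2f}")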

Hospitals generally advise caregivers to turn patients every two to four hours to prevent ulcers. Last year, ApplySci described Leaf, a sensor that automates and prioritizes turning schedules for large groups of patients. Traditionally, when ulcers appear, healing is monitored manually by measuring and recording the dimensions of visible lesions. The VA believes that by combining physical inspection with real-time monitoring, ulcers may be prevented from forming or advancing.

Wearable Tech + Digital Health NYC 2015 – June 30 @ New York Academy of Sciences.  Early registration rate available until March 27th.

Gesture controlled smartphone for the disabled

Sesame is a touch-free smartphone that is controlled by very small head movements. It is being crowdfunded on Indiegogo.

Head movements are tracked with the front-facing camera and interpreted by computer vision algorithms to create a cursor on the phone’s screen. The cursor is controlled by the position and movement of the head, enabling users to tap and swipe as if they were using a finger. They can make calls, send texts, browse the internet, watch videos, use social media, and play games. Integrated voice control allows the phone to be turned on when the user says “Open Sesame.”
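Sesame has not published its tracker, but the cursor mapping can be sketched: track a facial landmark such as the nose tip, and scale its displacement from a calibrated rest position into screen coordinates. The gain and screen dimensions below are assumptions:

SCREEN_W, SCREEN_H = 1080, 1920      # assumed portrait phone screen, pixels
GAIN = 12.0                          # cursor pixels per pixel of head motion

def cursor_position(nose_xy, rest_xy):
    """Map nose-tip displacement from its rest position to a screen cursor."""
    dx = nose_xy[0] - rest_xy[0]
    dy = nose_xy[1] - rest_xy[1]
    x = min(max(SCREEN_W / 2 + GAIN * dx, 0), SCREEN_W - 1)   # clamp to screen
    y = min(max(SCREEN_H / 2 + GAIN * dy, 0), SCREEN_H - 1)
    return x, y

# A slight head turn to the right nudges the cursor right of center
print(cursor_position(nose_xy=(328, 240), rest_xy=(320, 240)))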

Tiny wearable computer uses audio feedback to assist the visually impaired

http://www.orcam.com

OrCam, led by Hebrew University Professor Amnon Shashua, one of the most exciting computer vision entrepreneurs in Israel, has developed a device that uses audio feedback to relay visual information to visually impaired people. The tiny wearable computer works with a 5-megapixel camera attached to glasses. A computer vision algorithm enables it to read text, and it can be taught to recognize faces and objects with the help of the user.