Google, NASA, and the Universities Space Research Association (USRA) will put a 512-qubit machine from D-Wave at the disposal of researchers around the globe. The USRA will invite teams of scientists and engineers to share time on the machine. The goal is to study how quantum computing might be leveraged to advance machine learning.
Professor Bradley Nelson and researchers at ETH Zurich have created a miniature robot that can be injected into the eye to precisely measure the retina’s oxygen supply. Many diseases, including glaucoma, can interfere with oxygen delivery to the retina. Rapid diagnosis and treatment are essential in the attempt to preserve vision.
New technology developed at UC Berkeley uses wireless signals to provide real-time, noninvasive diagnoses of brain swelling or bleeding. The device analyzes data from low-energy electromagnetic waves, similar to the kind used to transmit radio and mobile signals. It is sensitive enough to distinguish between a normal brain and a diseased brain with a single, noncontact set of measurements.
Professor Todd Coleman of UCSD is developing foldable, stretchable electrode arrays that can noninvasively measure neural signals. They can also provide more in-depth analysis by including thermal sensors to monitor skin temperature and light detectors to analyze blood oxygen levels. The device is powered by micro solar panels and uses antennas to wirelessly transmit and receive data. Professor Coleman wants to use the device on premature babies to monitor their brain state and detect the onset of seizures that can lead to brain development problems such as epilepsy.
A team led by Mitchell Lerner at the University of Pennsylvania has developed a carbon-nanotube-based transistor that can detect glucose levels in body fluids, including saliva. The nanotubes are coated with molecules of pyrene-1-boronic acid, which makes them highly sensitive to glucose. When exposed to glucose, the nanotube transistor’s current-voltage curve changes, and that change can be measured to indicate the glucose concentration.
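Converting such a sensor reading into a concentration typically relies on a calibration curve. As a minimal sketch (not the Penn group’s actual pipeline, and with entirely made-up calibration values), one could linearly interpolate a measured current shift against known reference points:

```python
# Toy sketch: map a measured shift in the transistor's current to a
# glucose concentration via a hypothetical calibration curve.
import bisect

# Hypothetical calibration points: (current shift in microamps,
# glucose concentration in mg/dL) -- illustrative values only.
CALIBRATION = [(0.0, 0.0), (1.2, 50.0), (2.1, 100.0), (2.8, 150.0), (3.3, 200.0)]

def glucose_from_current_shift(shift_ua: float) -> float:
    """Linearly interpolate glucose concentration from a current shift."""
    shifts = [s for s, _ in CALIBRATION]
    if shift_ua <= shifts[0]:
        return CALIBRATION[0][1]
    if shift_ua >= shifts[-1]:
        return CALIBRATION[-1][1]
    i = bisect.bisect_right(shifts, shift_ua)
    (s0, g0), (s1, g1) = CALIBRATION[i - 1], CALIBRATION[i]
    return g0 + (g1 - g0) * (shift_ua - s0) / (s1 - s0)

print(glucose_from_current_shift(1.65))  # halfway between 50 and 100 mg/dL
```

A real device would fit the calibration curve per sensor batch, since nanotube transistors vary from unit to unit.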
fMRI-driven neurofeedback has been used in various contexts, but it had never been applied to the treatment of anxiety. Yale University researchers used fMRI to display the activity of the orbitofrontal cortex, a brain region just above the eyes, to subjects in real time. Through trial and error, the subjects learned to control their brain activity. This neurofeedback led to changes in brain connectivity and increased control over anxiety, and the changes were still present several days after the exercise.
The “haptic hand,” a sensorized glove developed at Ireland’s Tyndall National Institute, collects hand-movement data to help doctors understand the mobility of arthritis patients. Sensors built into the glove will provide 3-D simulations of joint movement and information on hand stiffness. The glove could potentially also be used to track hand movements in other applications, such as stroke rehabilitation and the training of surgeons.
SimSensei software, developed by Stefan Scherer and colleagues at the University of Southern California, combines computer vision algorithms with a psychological model of depression. An on-screen psychologist asks you a series of questions and watches how you physically respond. Using Kinect, the computer vision algorithms build up a very detailed model of your face and body, including your “smile level,” horizontal gaze and vertical gaze, how wide open your eyes are, and whether you are leaning toward or away from the camera. From these markers, SimSensei can determine whether you’re exhibiting signs that indicate depression — gaze aversion, smiling less, and fidgeting.
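To make the marker-to-cue step concrete, here is an illustrative sketch, not SimSensei’s actual model: given per-frame markers like those described (smile level, horizontal gaze, body lean), flag the three cues mentioned, with thresholds and field names that are purely hypothetical.

```python
# Hypothetical sketch of turning facial/body markers into the three
# depression cues named in the article: gaze aversion, reduced
# smiling, and fidgeting. Thresholds are made up for illustration.
from statistics import mean, pstdev

def depression_cues(frames):
    """frames: list of dicts with 'smile' (0-1), 'gaze_h' (degrees
    off-camera), and 'lean' (+ toward camera, - away)."""
    smiles = [f["smile"] for f in frames]
    gazes = [abs(f["gaze_h"]) for f in frames]
    leans = [f["lean"] for f in frames]
    return {
        "gaze_aversion": mean(gazes) > 15.0,   # mostly looking away
        "reduced_smiling": mean(smiles) < 0.2, # low average smile level
        "fidgeting": pstdev(leans) > 0.5,      # large posture variation
    }

frames = [{"smile": 0.10, "gaze_h": 20, "lean": -0.2},
          {"smile": 0.05, "gaze_h": 25, "lean": 1.0},
          {"smile": 0.15, "gaze_h": 18, "lean": -0.8}]
print(depression_cues(frames))
```

The real system aggregates far richer features over a whole interview; the point here is only that each cue reduces to a statistic over time-series markers.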
A group at the Tokyo Institute of Technology, led by Dr. Osamu Hasegawa, has advanced SOINN (the Self-Organizing Incremental Neural Network), their machine learning algorithm, which can now use the internet to learn how to perform new tasks. The system, which is under development as an artificial brain for autonomous mental development robots, is currently being used to learn about objects in photos using image searches on the internet. It can also take aspects of other known objects and combine them to make guesses about objects it doesn’t yet recognize.
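The core idea of incremental learning of this kind can be sketched in a few lines. This is a toy version only (not the actual SOINN algorithm, which adapts its similarity thresholds per node and prunes noisy nodes): keep a growing set of prototype nodes, refine the nearest node when an input is familiar, and add a new node when the input is novel.

```python
# Toy sketch of incremental prototype learning in the spirit of SOINN.
# Inputs close to an existing node nudge that node; inputs far from
# every node become new nodes. The fixed threshold is an assumption.
import math

class IncrementalPrototypes:
    def __init__(self, threshold: float):
        self.threshold = threshold  # SOINN adapts this per node
        self.nodes = []             # each node: [vector, update count]

    def learn(self, x):
        if self.nodes:
            node = min(self.nodes, key=lambda n: math.dist(n[0], x))
            if math.dist(node[0], x) < self.threshold:
                # Familiar input: move the winning prototype toward it.
                node[1] += 1
                node[0] = [p + (xi - p) / node[1] for p, xi in zip(node[0], x)]
                return
        self.nodes.append([list(x), 1])  # novel input: new prototype

model = IncrementalPrototypes(threshold=1.0)
for point in [(0.0, 0.0), (0.1, 0.1), (5.0, 5.0), (5.2, 4.9)]:
    model.learn(point)
print(len(model.nodes))  # two well-separated clusters -> two prototypes
```

Learning never stops and never requires retraining from scratch, which is what makes this family of algorithms attractive for robots that must keep acquiring knowledge online.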