KIT's Tanja Schultz has reconstructed spoken sentences from brain activity patterns.
Speech is produced in the cerebral cortex, and the associated brain waves can be recorded with surface electrodes. Schultz reconstructed basic speech units, words, and complete sentences from these recordings and generated the corresponding text.
This was achieved by combining advanced signal processing with automatic speech recognition: speech was continuously decoded and transformed into a textual representation. Cortical information was combined with linguistic knowledge and machine learning algorithms to extract the most likely word sequence. Brain-to-Text currently relies on audible speech; the goal is to recognize speech from thought alone.
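The post does not describe the decoder itself, but the "most likely word sequence" step is the kind of search standard ASR systems perform. As a purely illustrative sketch (toy vocabulary, invented scores — not Schultz's actual system), a Viterbi search can combine per-segment neural evidence with a bigram language model:

```python
# Toy Viterbi decoder: combine per-segment "neural evidence" scores with a
# bigram language model to pick the most likely word sequence.
# All scores and the vocabulary are invented for illustration.

VOCAB = ["the", "cat", "sat"]

# log P(word | neural features of segment t) -- stand-in for the real decoder
emission = [
    {"the": -0.2, "cat": -2.0, "sat": -2.5},  # segment 0
    {"the": -2.5, "cat": -0.3, "sat": -1.8},  # segment 1
    {"the": -2.2, "cat": -1.5, "sat": -0.4},  # segment 2
]

# log P(word | previous word) -- toy bigram language model ("<s>" = start)
bigram = {
    ("<s>", "the"): -0.1, ("<s>", "cat"): -2.0, ("<s>", "sat"): -2.0,
    ("the", "cat"): -0.2, ("the", "sat"): -1.5, ("the", "the"): -3.0,
    ("cat", "sat"): -0.2, ("cat", "the"): -1.5, ("cat", "cat"): -3.0,
    ("sat", "the"): -1.0, ("sat", "cat"): -2.0, ("sat", "sat"): -3.0,
}

def viterbi(emission, bigram, vocab):
    # best[t][w] = (score, backpointer) for ending segment t with word w
    best = [{} for _ in emission]
    for w in vocab:
        best[0][w] = (bigram[("<s>", w)] + emission[0][w], None)
    for t in range(1, len(emission)):
        for w in vocab:
            score, prev = max(
                (best[t - 1][p][0] + bigram[(p, w)] + emission[t][w], p)
                for p in vocab
            )
            best[t][w] = (score, prev)
    # backtrace from the best final word
    last = max(vocab, key=lambda w: best[-1][w][0])
    seq = [last]
    for t in range(len(emission) - 1, 0, -1):
        seq.append(best[t][seq[-1]][1])
    return list(reversed(seq))

print(viterbi(emission, bigram, VOCAB))  # -> ['the', 'cat', 'sat']
```

The key point the sketch shows: neither the signal scores nor the language model alone decides the output; the decoder maximizes their combined log-probability over whole sequences.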
Professor Hossam Haick of the Technion – Israel Institute of Technology has developed a sensor-equipped smartphone that screens a user's breath for early cancer detection.
SNIFFPHONE uses micro- and nanosensors to read exhaled breath. The information is transferred through the phone to a signal-processing system for analysis. According to Haick, the NaNose system can detect benign and malignant tumors more quickly, efficiently, and cheaply than previously possible, replacing clinical follow-up that would lead to the same intervention. He claims that NaNose has a 90 percent accuracy rate.
This is one of several biomedical sensor breakthroughs Professor Haick is working on. In July 2013, ApplySci described his flexible sensor, which could be integrated into electronic skin to let those with prosthetic limbs feel changes in their environments. This is similar to Roozbeh Ghaffari's work at MC10, which we described last month and which will be featured at our June 30th conference, Wearable Tech + Digital Health NYC 2015.
MIT scientists have developed a low-power signal-processing chip that could lead to a cochlear implant requiring no external hardware. Doctors at Harvard Medical School and the Massachusetts Eye and Ear Infirmary collaborated with the researchers. The implant would be recharged wirelessly and run for eight hours per charge.
Instead of an external microphone, the implant would use the natural microphone of the middle ear, which is almost always intact in cochlear implant patients.
The design exploits the mechanism of a middle ear implant. Middle ear ossicles convey the vibrations of the eardrum to the cochlea, which converts acoustic signals to electrical signals. In patients with middle ear implants, the cochlea is functional, but the stapes ossicle doesn’t vibrate with enough force to stimulate the auditory nerve. A middle ear implant consists of a tiny sensor that detects the ossicles’ vibrations and an actuator that helps drive the stapes.
The new device would use the same type of sensor, but the signal it generates would travel to a microchip implanted in the ear, which would convert it to an electrical signal and pass it to an electrode in the cochlea. Lowering the power requirements of the converter chip was the key to eliminating the skull-mounted hardware.
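The MIT chip's actual algorithm isn't described here, but the job it performs — turning the middle-ear sensor signal into per-electrode stimulation levels — resembles standard cochlear-implant processing: split the signal into frequency bands and let each band's energy drive one electrode. A generic sketch of that idea, with an invented band layout and test signal:

```python
# Generic cochlear-implant-style band analysis (NOT the MIT chip's design):
# each electrode is driven by the signal energy in one frequency band.
import math

RATE = 16000  # Hz, assumed sample rate of the sensor signal

def goertzel_power(frame, freq, rate=RATE):
    """Signal power near `freq` in one frame (Goertzel algorithm)."""
    coeff = 2 * math.cos(2 * math.pi * freq / rate)
    s_prev = s_prev2 = 0.0
    for x in frame:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# One electrode per analysis band (real implants use roughly 12-22 bands;
# these centre frequencies are illustrative)
BANDS = [250, 500, 1000, 2000, 4000]  # Hz

def electrode_levels(frame):
    """Map one frame of samples to a stimulation level per electrode band."""
    powers = [goertzel_power(frame, f) for f in BANDS]
    peak = max(powers) or 1.0
    return [p / peak for p in powers]  # normalised 0..1 levels

# Toy input: a 1 kHz tone should drive the 1000 Hz electrode hardest
frame = [math.sin(2 * math.pi * 1000 * n / RATE) for n in range(320)]
levels = electrode_levels(frame)
print(BANDS[levels.index(max(levels))])  # -> 1000
```

The engineering point of the article is orthogonal to the algorithm: doing this kind of analysis at microwatt power budgets is what lets the whole chain move inside the skull.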
From Babbage's Difference Engine of 1822 through thought control – a brief history of the intersection of mind and machine.
A team of researchers at Xerox is working on technology that would allow doctors to obtain patients' vital signs using a simple webcam. The team is already testing the technology to monitor the pulse rate of premature babies and to track irregular heartbeats in patients with arrhythmia.
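Camera-based pulse measurement rests on a known effect: skin brightness varies slightly with each heartbeat, so the pulse rate can be read off as the dominant frequency of a per-frame brightness trace. A sketch with synthetic data (not Xerox's algorithm — their processing is not described in the post):

```python
# Recover pulse rate (bpm) as the dominant frequency of a brightness trace.
# Frame rate and the synthetic signal below are invented for illustration.
import math

FPS = 30  # assumed camera frame rate

def pulse_rate_bpm(brightness, fps=FPS):
    """Dominant frequency of the brightness trace, in beats per minute."""
    n = len(brightness)
    mean = sum(brightness) / n
    x = [b - mean for b in brightness]
    best_bpm, best_power = 0.0, 0.0
    for k in range(1, n // 2):  # naive DFT bins, skipping DC
        re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        power = re * re + im * im
        if power > best_power:
            best_bpm, best_power = 60.0 * k * fps / n, power
    return best_bpm

# Synthetic 10 s trace: 1.2 Hz "heartbeat" (72 bpm) on a constant baseline
trace = [100 + math.sin(2 * math.pi * 1.2 * i / FPS) for i in range(300)]
print(pulse_rate_bpm(trace))  # -> 72.0
```

In practice the trace would come from averaging a skin region's pixel values per frame, and noise from motion and lighting is the hard part — which is presumably where the team's "further signal-processing algorithms" come in.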
By applying further signal-processing algorithms to the images, doctors can get a readout of a baby's blood-oxygen level. If the camera can see more than one part of the child's body, it can also estimate blood pressure, by recording the time each heartbeat-driven pulse takes to arrive at different arteries.
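That last measurement is a pulse-transit-time estimate: the same heartbeat arrives slightly later at a more distant artery, and the delay between the two camera-derived pulse waveforms is what carries the blood-pressure information. A sketch of the delay estimation via cross-correlation (waveforms, rates, and the 60 ms delay are invented):

```python
# Estimate the lag between two pulse waveforms by cross-correlation.
# Purely illustrative; real systems must handle noisy, non-sinusoidal pulses.
import math

RATE = 100  # frames per second of the (assumed) camera

def pulse(n, delay_s=0.0):
    """Toy pulse waveform: 1.2 Hz 'heartbeat' sampled at RATE fps."""
    return [math.sin(2 * math.pi * 1.2 * (i / RATE - delay_s)) for i in range(n)]

def transit_time(sig_a, sig_b, max_lag=30):
    """Delay (seconds) of sig_b relative to sig_a via cross-correlation."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(max_lag):
        score = sum(a * b for a, b in zip(sig_a, sig_b[lag:]))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag / RATE

face = pulse(400)                # pulse seen at the face
hand = pulse(400, delay_s=0.06)  # same pulse arriving 60 ms later at the hand
print(transit_time(face, hand))  # -> 0.06
```

Converting a transit time into an actual blood-pressure figure requires a per-patient calibration model, which is one reason this remains a research technique rather than a clinical standard.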