From Charles Babbage's Difference Engine of 1822 through thought control – a brief history of the intersection of mind and machine.
In a new paper, Stanford’s Andrew Ng describes how to use graphics processing units (GPUs) to build a $20,000 computerized brain similar to the cat detector he developed with Google last year for $1 million.
To test his hypothesis about GPU-driven deep learning, he also built a larger version of the platform for $100,000, using 64 Nvidia GTX 680 GPUs across 16 computers. It accomplished the same cat-spotting tasks as the Google system, which needed 1,000 computers to operate. That system, modeling the activities and processes of the human brain, learned what a cat looks like and then applied that knowledge to spot different cats across multiple YouTube videos.
University of Minnesota researchers led by Professor Bin He have been able to control a small helicopter using only their minds, advancing technology that could help paralyzed or motion-impaired people interact with the world around them.
An EEG cap with 64 electrodes was placed on the head of the person controlling the helicopter. The researchers recorded the controller’s brain activity while he or she performed certain tasks (for example, making a fist or looking up), then mapped those activity patterns to helicopter controls. If the researchers mapped “go up” to a clenched fist, the copter went up; from then on, the copter rose automatically whenever the controller clenched a fist.
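The calibrate-then-decode loop described above can be sketched as a simple nearest-centroid classifier. This is purely illustrative: the feature vectors, command names, and training data below are invented, and the actual study used far richer EEG processing.

```python
import numpy as np

# Illustrative sketch only: a nearest-centroid classifier standing in for
# the mapping from EEG activity patterns to helicopter commands.
# All commands and "EEG" features here are hypothetical toy data.

COMMANDS = ["go_up", "go_down", "hover"]

def train_centroids(samples, labels):
    """Average the EEG feature vectors recorded for each imagined task."""
    return {cmd: np.mean([s for s, l in zip(samples, labels) if l == cmd], axis=0)
            for cmd in COMMANDS}

def decode(centroids, eeg_sample):
    """Map a new EEG sample to the command whose centroid is closest."""
    return min(COMMANDS, key=lambda c: np.linalg.norm(eeg_sample - centroids[c]))

# Toy calibration session: easily separable synthetic feature vectors.
train = [np.array([1.0, 0.0]), np.array([0.9, 0.1]),
         np.array([0.0, 1.0]), np.array([0.1, 0.9]),
         np.array([0.5, 0.5]), np.array([0.6, 0.4])]
labels = ["go_up", "go_up", "go_down", "go_down", "hover", "hover"]

model = train_centroids(train, labels)
print(decode(model, np.array([0.95, 0.05])))  # clenched-fist-like pattern -> go_up
```

After calibration, each new EEG sample is decoded continuously and the winning command is sent to the helicopter's controller.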
DARPA continues to build technology with academic partners to enable amputees to control prosthetic limbs with their minds. Examples follow:
Researchers at the Rehabilitation Institute of Chicago demonstrated a type of peripheral interface called targeted muscle reinnervation (TMR). By rewiring the nerves from an amputated limb into remaining muscles, TMR lets those muscles drive a prosthetic limb.
Researchers at Case Western Reserve University used a flat interface nerve electrode (FINE) to demonstrate direct sensory feedback. By interfacing with residual nerves in the patient’s partial limb, the electrode restores some sense of touch to the fingers. Other existing prosthetic limb control systems rely solely on visual feedback. Unlike visual feedback, direct sensory feedback allows patients to move a hand without keeping their eyes on it, enabling simple tasks, like searching a bag for small items, that are not possible with today’s prosthetics.
Cornell University researchers have programmed a PR-2 robot to not only carry out everyday tasks but also anticipate human behavior and adjust its actions accordingly.
Using a database of 120 3-D videos of people performing common household activities, the robot has been trained to identify human activities by tracking the movements of the body – reduced to a symbolic skeleton for easy computation – breaking them down into sub-activities like reaching, carrying, pouring, or drinking, and associating those activities with objects.
Observing a new scene with its Microsoft Kinect 3-D camera, the robot identifies the activities it sees, considers what uses are possible with the objects in the scene and how those uses fit the activities; it then generates a set of possible continuations into the future – such as eating, drinking, cleaning, or putting away – and finally chooses the most probable. As the action continues, it constantly updates and refines its predictions.
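The anticipation step above – generate candidate continuations, then pick the most probable – can be sketched with a toy transition table. The activity names echo the article, but the probabilities and table structure are invented for illustration; the actual Cornell system uses a far richer model over skeletons and object affordances.

```python
# Illustrative sketch only: choosing the most probable continuation of an
# observed sub-activity. The transition probabilities below are made up.

TRANSITIONS = {
    "reaching": {"carrying": 0.5, "pouring": 0.3, "drinking": 0.2},
    "carrying": {"putting_away": 0.6, "pouring": 0.4},
    "pouring":  {"drinking": 0.7, "cleaning": 0.3},
}

def anticipate(observed_subactivity):
    """Return the candidate continuation with the highest (toy) probability."""
    continuations = TRANSITIONS[observed_subactivity]
    return max(continuations, key=continuations.get)

print(anticipate("pouring"))  # -> drinking
```

As new frames arrive, the observed sub-activity is re-estimated and `anticipate` is re-run, mirroring the constant refinement the article describes.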
Already popular in Japan, robotic companions for the elderly are a developing trend reported in today’s New York Times.
A typical Japanese example is the Hybrid Assistive Limb, created at the University of Tsukuba. The battery-powered suit senses and amplifies the wearer’s muscle action when carrying or lifting heavy objects. Caregivers can use the suit to help lift patients from a bed, and patients can wear it to support their own movements. Other Japanese devices include a small, battery-powered trolley to aid independent walking; a portable, self-cleaning bedside toilet; and a monitoring robot that tracks and reports the location of dementia patients.
The Times describes several interesting US-developed robots: Cody, a robotic nurse created at Georgia Tech that is capable of bathing patients; HERB, a butler developed at Carnegie Mellon that retrieves objects and cleans; Hector, a University of Reading robot that provides medication reminders, locates lost objects, and can assist after a fall; and Paro, a robot resembling a baby seal that calms dementia patients.
Professor Todd Coleman of UCSD is developing foldable, stretchable electrode arrays that can measure neural signals non-invasively. They can also provide more in-depth analysis by including thermal sensors to monitor skin temperature and light detectors to measure blood oxygen levels. The device is powered by micro solar panels and uses antennas to transmit and receive data wirelessly. Coleman wants to use the device on premature babies to monitor their mental state and detect the onset of seizures that can lead to brain development problems such as epilepsy.
A group at the Tokyo Institute of Technology, led by Dr. Osamu Hasegawa, has advanced SOINN, their machine learning algorithm, which can now use the internet to learn how to perform new tasks. The system, which is under development as an artificial brain for autonomous mental development robots, is currently being used to learn about objects in photos using image searches on the internet. It can also take aspects of other known objects and combine them to make guesses about objects it doesn’t yet recognize.
By placing a small sensor in the brain’s motor cortex, interfaces can pick up electrical activity and translate it into commands that control a robotic arm. Now scientists have gone a step further: instead of a wired brain-arm link, they have developed a wireless connection powerful enough to work at a distance of three feet.
“Clinical applications may include thought-controlled prostheses for severely neurologically impaired patients, wireless access to motorized wheelchairs or other assistive technologies, and diagnostic monitoring such as in epilepsy, where patients currently are tethered to the bedside during assessment,” says David Borton of Brown University.
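The decode-and-translate step described above is often simplified, in the literature, to a linear readout from neural firing rates to arm-velocity commands. The sketch below illustrates that idea only; the weight matrix, firing rates, and two-dimensional output are invented, not the actual Brown system.

```python
import numpy as np

# Illustrative sketch only: a linear decoder mapping recorded firing rates
# to a 2-D arm-velocity command. All numbers here are hypothetical.

def decode_velocity(weights, firing_rates):
    """Linear readout: velocity = W @ rates."""
    return weights @ firing_rates

# Hypothetical calibration result: 2 output dims (vx, vy) from 4 recorded units.
W = np.array([[0.5, -0.2,  0.0, 0.1],
              [0.0,  0.3, -0.4, 0.2]])
rates = np.array([10.0, 2.0, 1.0, 4.0])  # spikes/s from each unit

vx, vy = decode_velocity(W, rates)
print(vx, vy)  # -> 5.0 1.0
```

In a wireless system, the firing rates would be sampled on the implanted side and the decoded command transmitted to the arm, wheelchair, or other assistive device.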