MIT’s Subramanian Sundaram has developed a sensor glove that identifies objects through touch. This could improve assistive robot performance and enhance prosthetic design. The inexpensive “scalable tactile glove” (STAG) includes 550 tiny pressure-capturing sensors. A neural network uses the data to classify objects and predict their weights. No visual input is required.
In a Nature paper, the system correctly identified objects, including a soda can, scissors, a tennis ball, a spoon, a pen, and a mug, 76 percent of the time.
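To make the classification step concrete, here is a minimal sketch, assuming the glove’s pressure readings are arranged as a 2-D frame and fed to a small convolutional network. The grid size, class count, and architecture are illustrative assumptions, not the paper’s actual model.

```python
# Minimal sketch (not the authors' code): classifying objects from
# glove pressure frames with a small CNN in PyTorch. Sensor layout,
# class count, and architecture are assumptions for illustration.
import torch
import torch.nn as nn

NUM_CLASSES = 26   # assumed number of object classes
GRID = 24          # assume ~550 sensors mapped onto a padded 24x24 grid

class TactileNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 24x24 -> 12x12
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 12x12 -> 6x6
        )
        self.classifier = nn.Linear(32 * 6 * 6, NUM_CLASSES)

    def forward(self, x):      # x: (batch, 1, GRID, GRID)
        h = self.features(x)
        return self.classifier(h.flatten(1))

# One normalized pressure frame as a toy input (the network is untrained,
# so the prediction here is only a demonstration of the pipeline).
frame = torch.rand(1, 1, GRID, GRID)
logits = TactileNet()(frame)
print(logits.argmax(dim=1))    # index of the predicted object class
```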
The tactile sensors could be used in combination with traditional computer vision and image-based datasets to give robots a more human-like understanding of object interaction. The dataset also captured cooperation between regions of the hand during interactions, which could be used to customize prosthetics.
Similar sensor-based gloves in use today cost thousands of dollars and typically include only about 50 sensors. The STAG costs approximately $10 to produce.
The artificial nerve circuit integrates three components:
A touch sensor that can detect minuscule forces.
A flexible electronic neuron that receives signals from the touch sensor.
An artificial synaptic transistor, modeled on human synapses, that is stimulated by these sensory signals.
The system was successfully tested, generating both reflexes and a sense of touch. The team also hopes to create low-power artificial sensor nets that cover robots, providing feedback to make them more agile.
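As a rough software analogy of that three-stage chain (an assumption on our part, not the hardware design), the sketch below converts a pressure trace into spikes with a leaky integrate-and-fire “neuron” and accumulates them in a decaying “synapse”:

```python
# Software analogy of the sensor -> electronic neuron -> synaptic
# transistor chain. All constants are illustrative, not measured values.
import numpy as np

def pressure_to_spikes(pressure, dt=1e-3, gain=200.0):
    """Leaky integrate-and-fire neuron: firing rate grows with pressure."""
    v, threshold, leak = 0.0, 1.0, 5.0
    spikes = np.zeros_like(pressure)
    for i, p in enumerate(pressure):
        v += dt * (gain * p - leak * v)   # integrate input, leak charge
        if v >= threshold:                # fire and reset
            spikes[i], v = 1.0, 0.0
    return spikes

def synapse(spikes, dt=1e-3, tau=0.05):
    """Each spike adds charge that decays with time constant tau."""
    out, w = np.zeros_like(spikes), 0.0
    for i, s in enumerate(spikes):
        w += s - dt * w / tau
        out[i] = w
    return out

t = np.arange(0, 1.0, 1e-3)
pressure = np.clip(np.sin(2 * np.pi * t), 0, None)  # a press and release
post = synapse(pressure_to_spikes(pressure))
print(f"peak synaptic output: {post.max():.2f}")
```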
Join ApplySci at the 9th Wearable Tech + Digital Health + Neurotech Boston conference on September 24, 2018 at the MIT Media Lab. Speakers include: Mary Lou Jepsen – George Church – Roz Picard – Nathan Intrator – Keith Johnson – Juan Enriquez – John Mattison – Roozbeh Ghaffari – Poppy Crum – Phillip Alvelda
Earlier this year, University of Houston’s Jose Luis Contreras-Vidal developed a closed-loop BCI/EEG/VR/physical therapy system to control gait as part of a stroke/spinal cord injury rehab program. The goal was to promote and enhance cortical involvement during walking.
In a study, eight subjects walked on a treadmill while watching an avatar, wearing a 64-channel EEG headset and motion sensors at the hip, knee, and ankle.
The avatar was first activated by the motion sensors, allowing its movement to precisely mimic that of the test subject. It was then controlled by the brain-computer interface, although this control was less precise than with the motion sensors. Contreras-Vidal believes that as subjects learn to use the interface, the results will approach those of the sensors. The researchers reported increased activity in the posterior parietal cortex and the inferior parietal lobe, along with increased involvement of the anterior cingulate cortex.
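To make the decoding step concrete, here is a hedged sketch of the kind of linear mapping from EEG features to joint angles often used in gait decoding. The study’s actual decoder and features may differ, and the data below is synthetic.

```python
# Illustrative ridge-regression decoder: a window of 64-channel EEG
# features is mapped to joint angles that could drive the avatar.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels, n_joints = 2000, 64, 6  # hip/knee/ankle, both legs

X = rng.standard_normal((n_samples, n_channels))      # EEG features per step
true_W = rng.standard_normal((n_channels, n_joints))
Y = X @ true_W + 0.1 * rng.standard_normal((n_samples, n_joints))

# Ridge regression: W = (X^T X + lambda * I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

pred = X @ W                                          # decoded joint angles
err = np.sqrt(np.mean((pred - Y) ** 2))
print(f"RMSE of decoded joint angles: {err:.3f}")
```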
Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University. Speakers include: Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld – Bob Knight – Phillip Alvelda – Paul Nuyujukian – Peter Fischer – Tony Chahine – Shahin Farshchi – Ambar Bhattacharyya – Adam D’Augelli – Juan-Pablo Mas – Michael Eggleston – Walter Greenleaf – Jacobo Penide – David Sarno
The skin mimics the way a human finger responds to tension and compression as it slides along a surface or distinguishes among different textures. This could allow users to sense when something is slipping out of their grasp.
Tiny electrically conductive liquid metal channels are placed on both sides of a prosthetic finger. As it slides across a surface, the channels on one side compress while those on the other side stretch, similar to a natural limb. As the channel geometry changes, so does the electrical resistance of the liquid metal. Differences in resistance correlate with force and vibrations.
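A toy model makes the mechanism concrete. Assuming each channel behaves like an incompressible conductor obeying R = ρL/A (our simplification, with illustrative geometry and an eGaIn-like resistivity), the differential resistance between the stretched and compressed sides grows with strain:

```python
# Toy model of the liquid-metal channels (our assumption, not the
# published sensor equations). Sliding strains one side (+s) and
# compresses the other (-s); their resistance difference tracks force.
RHO = 29.4e-8        # approximate resistivity of eGaIn-like metal, ohm*m
L0, A0 = 0.02, 1e-8  # illustrative rest length (m) and cross-section (m^2)

def channel_resistance(strain):
    """Incompressible channel: length scales by (1+s), area by 1/(1+s)."""
    length = L0 * (1 + strain)
    area = A0 / (1 + strain)
    return RHO * length / area

def differential_signal(strain):
    """Stretched-side minus compressed-side resistance, in ohms."""
    return channel_resistance(+strain) - channel_resistance(-strain)

for s in (0.0, 0.01, 0.05, 0.10):  # strains induced by sliding contact
    print(f"strain {s:4.2f}: delta R = {differential_signal(s):7.3f} ohm")
```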
The researchers believe that the sensor skin will enable users to better open a door, use a phone, shake hands, or lift packages.
Join ApplySci at Wearable Tech + Digital Health + Neurotech Silicon Valley on February 26-27, 2018 at Stanford University, featuring: Vinod Khosla – Justin Sanchez – Brian Otis – Bryan Johnson – Zhenan Bao – Nathan Intrator – Carla Pugh – Jamshid Ghajar – Mark Kendall – Robert Greenberg – Darin Okuda – Jason Heikenfeld
The Monitoring OsseoIntegrated Prostheses project uses a limb that includes a titanium fixture surgically implanted into the femur. Bone grows at the implant’s connection point, leaving a small metallic connector protruding from the residual leg. An accompanying artificial limb can then be attached or detached. The same procedure can be performed for upper limbs.
Advantages include less pain, a more fluid walking motion, and a more stable, better-fitting limb. However, the protruding metal increases infection risk. This is meant to be addressed by electrochemical and skin sensors, including a biocompatible array embedded within the residual limb. The array tracks changes in body temperature and pH, which can indicate infection. It also monitors the fit of the bone and prosthetic limb, and the healing process, which could help doctors speed recuperation.
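As an illustration of how such monitoring might work in software (the thresholds and logic below are hypothetical, not the project’s), an alert could fire whenever temperature or pH leaves an assumed healthy band:

```python
# Hypothetical infection-monitoring logic over the two signals the
# implanted array is described as tracking: temperature and pH.
from dataclasses import dataclass

@dataclass
class Reading:
    temp_c: float  # local tissue temperature, Celsius
    ph: float      # pH at the implant interface

def infection_risk(r: Reading,
                   temp_limit: float = 38.0,
                   ph_band: tuple = (6.8, 7.6)) -> bool:
    """True if either signal leaves its assumed healthy range."""
    return r.temp_c > temp_limit or not (ph_band[0] <= r.ph <= ph_band[1])

readings = [Reading(36.9, 7.35), Reading(38.4, 7.30), Reading(37.1, 6.5)]
for r in readings:
    print(r, "-> alert" if infection_risk(r) else "-> normal")
```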
A convolutional neural network was trained with images of 500 graspable objects and taught to recognize the grip needed for each. Objects were grouped by size, shape, and orientation, and the hand was programmed to perform four grasps to accommodate them: palm wrist neutral (to pick up a cup), palm wrist pronated (to pick up a TV remote), tripod (thumb and two fingers), and pinch (thumb and first finger).
The hand’s camera takes a picture of the object in front of it, assesses its shape and size, picks the most appropriate grasp, and triggers a series of hand movements, within milliseconds.
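Here is a minimal sketch of that pipeline, assuming a small CNN that maps the camera frame directly to one of the four grasp labels. The team’s actual network, input size, and training setup are not specified here, so everything below is illustrative.

```python
# Sketch under stated assumptions (not the team's model): a small CNN
# that maps a camera frame to one of the four grasp types, then
# dispatches the matching hand motion. The network here is untrained.
import torch
import torch.nn as nn

GRASPS = ["palm_wrist_neutral", "palm_wrist_pronated", "tripod", "pinch"]

class GraspNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 5, stride=2), nn.ReLU(),   # 64x64 -> 30x30
            nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),  # 30x30 -> 13x13
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, len(GRASPS)),
        )

    def forward(self, image):   # image: (batch, 3, 64, 64)
        return self.net(image)

def select_grasp(image: torch.Tensor) -> str:
    """Pick the highest-scoring grasp for the current camera frame."""
    logits = GraspNet()(image)
    return GRASPS[int(logits.argmax(dim=1))]

frame = torch.rand(1, 3, 64, 64)  # stand-in for the hand camera frame
print("trigger grasp:", select_grasp(frame))
```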
In a small study of the technology, subjects picked up and moved objects with an 88 percent success rate.
The work is part of an effort to develop a bionic hand that senses pressure and temperature, and transmits the information to the brain.
Join ApplySci at Wearable Tech + Digital Health + NeuroTech Boston on September 19, 2017 at the MIT Media Lab. Featuring Joi Ito – Ed Boyden – Roz Picard – George Church – Nathan Intrator – Tom Insel – John Rogers – Jamshid Ghajar – Phillip Alvelda
Professor Ravinder Dahiya of the University of Glasgow has created a robotic hand with solar-powered graphene “skin” that he claims is more sensitive than a human hand. The flexible, tactile, energy-autonomous “skin” could be used in health monitoring wearables and in prosthetics, reducing the need for external chargers. (Dahiya is now developing a low-cost 3-D printed prosthetic hand incorporating the skin.)
This week at the Pentagon, Johnny Matheny unveiled his DARPA-developed prosthetic arm. The mind-controlled prosthesis has the same size, weight, shape, and grip strength as a human arm and, according to Matheny, can do anything a natural arm can do.
It is, by all accounts, the most advanced prosthetic limb created to date.
The 100-sensor arm was developed as part of the Biological Technologies Office’s Revolutionizing Prosthetics program, led by Dr. Justin Sanchez.
An implanted neural interface allows the wearer to control the arm with his thoughts. Sensors are also implanted in the fingertips, sending signals back to the brain, allowing users to feel sensations.