Edward Chang and colleagues at UCSF are developing technology that translates brain signals into synthetic speech. The system simulates the movements of the lips, jaw, tongue, and larynx that produce speech sounds, and the research team believes the resulting audio can be nearly as clear and natural as a real person’s voice.
The goal is a communication method for people who have lost the ability to speak because of disease or paralysis.
According to Chang: “For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity.”
Berkeley’s Bob Knight has developed related technology, using high-frequency band (HFB) activity to decode imagined speech, with the aim of building a brain-computer interface (BCI) to treat disabling language deficits. He described this work at the 2018 ApplySci conference at Stanford.
Join ApplySci at the 12th Wearable Tech + Digital Health + Neurotech Boston conference on November 14, 2019 at Harvard Medical School, and the 13th Wearable Tech + Neurotech + Digital Health Silicon Valley conference on February 11-12, 2020 at Stanford University.