Dr. Jaimie Henderson had a single wish throughout childhood: for his father to be able to speak with him. Now a scientist and neurosurgeon at Stanford Medicine, Henderson and his colleagues are developing brain implants that might be able to make similar wishes come true for other people with paralysis or speech impairments.
Two studies published Wednesday in the journal Nature show how the brain implants, described as neuroprostheses, can record a person’s neural activity when they attempt to speak naturally. That brain activity can then be decoded into text on a computer screen, synthesized audio speech or even the animated face of an avatar.
“When I was 5 years old, my dad was involved in a devastating car accident that left him barely able to move or speak. I remember laughing at the jokes he tried to tell, but his speech ability was so impaired that we couldn’t understand the punchline,” Henderson, an author of one of the studies and professor at Stanford University, said in a news briefing about his research.
“So I grew up wishing that I could know him and communicate with him,” he said. “And I think that early experience sparked my personal interest in understanding how the brain produces movement and speech.”
Henderson and his colleagues at Stanford and other US institutions examined the use of implanted brain sensors in 68-year-old Pat Bennett. She was diagnosed with amyotrophic lateral sclerosis, or ALS, in 2012, and the disease has affected her speech.
The researchers wrote in their study that Bennett can make some limited facial movements and vocalize sounds but is unable to produce clear speech due to ALS, a rare neurological disease that affects nerve cells in the brain and spinal cord.
The implanted sensor arrays were attached to wires that came out of the skull and were connected to a computer. Software decoded the neural activity, converting it into words that were displayed on the computer screen in real time. When Bennett finished speaking, she pressed a button to finalize the decoding.
The researchers evaluated this brain-computer interface both when Bennett attempted to speak aloud and when she only “mouthed” words without vocalizing.
With a 50-word vocabulary, the rate of errors in the decoding was 9.1% on the days Bennett vocalized and 11.2% on the silent days, the researchers found. When using a 125,000-word vocabulary, the word error rate was 23.8% across all vocalizing days and 24.7% for silent days.
“In our work, we show that we can decipher attempted speech with a word error rate of 23% when using a large set of 125,000 possible words. This means that about three in every four words are deciphered correctly,” Frank Willett, an author of the study and Howard Hughes Medical Institute staff scientist affiliated with the Neural Prosthetics Translational Lab, said in the news briefing.
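The figures above are word error rates, the standard metric for speech decoding: the number of word-level edits (substitutions, insertions and deletions) needed to turn the decoded sentence into the intended one, divided by the intended sentence's length. A 23% error rate therefore corresponds to roughly three of every four words decoded correctly. The study's own evaluation code is not shown here; the sketch below is a generic, illustrative WER computation using word-level edit distance.

```python
# Illustrative sketch (not the study's code): word error rate (WER) is the
# minimum number of word substitutions, insertions and deletions needed to
# turn the decoded hypothesis into the reference, divided by the number of
# words in the reference.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words
    # into the first j hypothesis words (Levenshtein distance)
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitute = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            delete = dp[i - 1][j] + 1
            insert = dp[i][j - 1] + 1
            dp[i][j] = min(substitute, delete, insert)
    return dp[len(ref)][len(hyp)] / len(ref)

# Two edits (one deletion, one insertion) over a five-word reference:
print(word_error_rate("i want to go home", "i want go home now"))  # 0.4
```

At a 0.23 error rate, about 77% of words come out correct, which is the "three in every four" Willett describes.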
“With these new studies, it is now possible to imagine a future where we can restore fluent conversation to someone with paralysis, enabling them to freely say whatever they want to say with an accuracy high enough to be understood reliably,” he said.
The researchers say the decoding happened at high speeds. Bennett spoke at an average pace of 62 words per minute, which “more than triples” the roughly 18 words per minute achieved by previous brain-computer interfaces that decoded attempted handwriting.
“These initial results have proven the concept, and eventually technology will catch up to make it easily accessible to people who cannot speak,” Bennett wrote in a news release. “For those who are nonverbal, this means they can stay connected to the bigger world, perhaps continue to work, maintain friends and family relationships.”
For now, the researchers wrote, their findings are a “proof of concept” that decoding speaking movements with a large vocabulary is possible, but the approach needs to be tested on more people before it can be considered for clinical use.
“These are very early studies,” Willett said. “And we don’t have a big database of data from other people.”