Brain implant enables paralysed woman to stream thoughts into voice

A brain implant has restored speech to a woman with severe paralysis, a breakthrough in enabling communication for people who have lost the ability to speak.

The implant, detailed in a study published in the journal Nature Neuroscience on Monday, addresses a long-standing challenge of latency in speech prostheses.

Researchers, including from the University of California, Berkeley, developed an artificial intelligence system to synthesise and stream brain signals into audible speech in “near-real time”.

“Our streaming approach brings the same rapid speech decoding capacity of devices like Alexa and Siri to neuroprostheses,” said Gopala Anumanchipalli, an author of the study.

Cheol Jun Cho, another author, said: “So, what we’re decoding is after a thought has happened, after we’ve decided what to say, after we’ve decided what words to use and how to move our vocal-tract muscles.”

Researchers connect Ann’s brain implant to voice synthesiser (Noah Berger via UC Berkeley)

Researchers collected data from their subject, Ann, to train their AI algorithm. She looked at prompts on a screen like the phrase “Hey, how are you?” and silently attempted to speak that sentence.

“This gave us a mapping between the chunked windows of neural activity that she generates and the target sentence that she’s trying to say, without her needing to vocalise at any point,” said Kaylo Littlejohn, another author of the study.

Scientists then used AI to fill in the details missing in Ann’s brain signals.

“We used a pre-trained text-to-speech model to generate audio and simulate a target. And we also used Ann’s pre-injury voice, so when we decode the output, it sounds more like her,” Mr Cho, a PhD candidate at UC Berkeley, said.

In previous studies on speech prostheses, researchers faced a long latency in decoding speech: a delay of about eight seconds for a single sentence.

With the new approach, they say an audible output can be generated in “near-real time as the subject is attempting to speak”.

“We can see relative to that intent signal, within 1 second, we are getting the first sound out. And the device can continuously decode speech, so Ann can keep speaking without interruption,” Dr Anumanchipalli said.
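The latency gain comes from decoding fixed-size windows of neural activity as they arrive, rather than waiting for the whole sentence. A minimal illustrative sketch, in Python, of that chunked streaming idea (the study's actual decoder is a neural network trained on Ann's cortical recordings; the function names, window size, and toy "sound unit" labels here are all hypothetical stand-ins):

```python
def decode_window(window):
    """Hypothetical stand-in for the trained decoder: maps one
    window of signal samples to a single sound-unit label."""
    return f"unit{sum(window) % 10}"

def stream_decode(signal, window_size=4):
    """Decode each window as soon as it is available, instead of
    buffering the full signal. The first unit is ready after one
    window's worth of samples, not after the whole sentence."""
    units = []
    for start in range(0, len(signal) - window_size + 1, window_size):
        window = signal[start:start + window_size]
        units.append(decode_window(window))  # audio could be emitted here immediately
    return units

# Toy "signal" of 12 samples, decoded in three chunks of 4.
print(stream_decode(list(range(12)), window_size=4))
```

With an eight-second batch approach, nothing is heard until the whole utterance is decoded; here, output begins after the first window, which is the property the researchers describe.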

Scientists also tested the AI model’s ability to synthesise words that were not part of the training dataset vocabulary. “We found that our model does this well, which shows that it is indeed learning the building blocks of sound or voice,” he added.

“This proof-of-concept framework is quite a breakthrough. We are optimistic that we can now make advances at every level,” Mr Cho added.

Source: independent.co.uk