Two brain-computer interface systems restore speech in separate UCSF, Stanford studies

Between commercial developers like Synchron and Elon Musk’s Neuralink, and ongoing research around the globe, it may be only a matter of time before people with paralysis, amyotrophic lateral sclerosis and other speech-limiting conditions have their communication abilities restored via “mind-reading” technology.

A pair of studies published Wednesday in the journal Nature describe two separate systems that each promise to do just that.

Both technologies employ brain-computer interfaces, each connecting electrode sensors placed on the brain to a computer running artificial intelligence software trained to translate the collected brain signals into words.

According to the researchers, each of the technologies has so far been successful in helping one patient communicate again—albeit in different formats.

One of the systems comes from the lab of Edward Chang, M.D., chair of neurological surgery at the University of California San Francisco. Chang’s team, which includes researchers from both UCSF and UC Berkeley, focused on recreating the entire process of speaking, instead of simply giving voice to the words in the study participant’s head.

To that end, the electrode panel implanted on the patient’s brain was designed to pick up not only her brain signals related to speech but also those that would’ve moved her mouth and jaw and created facial expressions as she spoke.

The panel comprised a total of 253 electrodes, which were connected to a bank of computers via a cable plugged into a port on the patient’s head. To train the software to translate the signals coming in through the electrodes, the participant—who has severe paralysis caused by a brainstem stroke—first spent several weeks repeating different phrases so that the AI could learn to match her brain activity to the 39 distinct sounds, or phonemes, of spoken English.
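For a sense of what that training step involves, the sketch below shows, in Python with PyTorch, a toy recurrent network that classifies windows of multichannel electrode activity into 39 phoneme classes. It is not the UCSF team's model: the 253-channel count comes from the article, while the architecture, window length and hyperparameters are placeholder assumptions.

```python
# Illustrative sketch only: a tiny recurrent decoder that maps windows of
# multichannel neural features to one of 39 English phoneme classes.
# Shapes and hyperparameters are hypothetical, not taken from the study.
import torch
import torch.nn as nn

N_CHANNELS = 253   # electrodes in the UCSF panel, per the article
N_PHONEMES = 39    # distinct English speech sounds the decoder targets

class PhonemeDecoder(nn.Module):
    def __init__(self, hidden_size=256):
        super().__init__()
        # GRU reads the time series of neural features from all channels
        self.rnn = nn.GRU(N_CHANNELS, hidden_size, batch_first=True)
        # Linear layer scores the 39 phoneme classes at each time step
        self.classifier = nn.Linear(hidden_size, N_PHONEMES)

    def forward(self, x):                 # x: (batch, time, channels)
        features, _ = self.rnn(x)
        return self.classifier(features)  # (batch, time, phoneme logits)

# Toy training step on random data, just to show the loop structure.
model = PhonemeDecoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

neural = torch.randn(8, 100, N_CHANNELS)          # 8 windows, 100 time steps
labels = torch.randint(0, N_PHONEMES, (8, 100))   # phoneme label per step

logits = model(neural)
loss = loss_fn(logits.reshape(-1, N_PHONEMES), labels.reshape(-1))
loss.backward()
optimizer.step()
print(f"toy training loss: {loss.item():.3f}")
```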

Meanwhile, a second machine-learning tool translated the signals associated with facial movements and fed them into facial animation software from U.K. software developer Speech Graphics, resulting in a digital avatar that could move its face and speak as a proxy for the participant.

Finally, the researchers used another algorithm to capture the patient’s voice—using a recording of her speaking at her wedding—and personalize the synthesized speech to sound more like her.

The result was a digital avatar that could talk and make facial expressions as the patient imagined them. The technology can perform its real-time mind-reading translations at a rate of almost 80 words per minute, according to the study, and it was initially trained on a vocabulary of just over 1,000 words.

“Our goal is to restore a full, embodied way of communicating, which is really the most natural way for us to talk with others,” Chang said in a UCSF release Wednesday. “These advancements bring us much closer to making this a real solution for patients.”

In the future, Chang’s team is hoping to create a wireless version of the technology so users have more freedom with their newly restored speech, rather than having to remain physically connected to a computer at all times.

The other study, meanwhile, took place at Stanford Medicine and recruited a patient who had lost her ability to speak to ALS.

The Stanford researchers’ intracortical brain-computer interface uses a total of four electrode arrays implanted in speech-related regions of the brain, each carrying 64 electrodes. Like the UCSF system, the Stanford technology requires a wired connection from the participant’s head to a computer, and it likewise works by parsing the 39 phonemes of spoken English out of her brain signals.
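As a deliberately simplified illustration of the phoneme-to-word step, the toy Python function below looks decoded phoneme sequences up in a tiny, made-up pronunciation dictionary. The actual Stanford decoder is far more sophisticated; this greedy lookup is only a stand-in for the idea of assembling phonemes into vocabulary words.

```python
# Simplified illustration of turning decoded phonemes into words.
# The dictionary entries below are invented for the example and do not
# reflect the study's vocabulary or decoding method.

PRONUNCIATIONS = {
    ("HH", "AH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"): "world",
    ("TH", "AE", "NG", "K", "S"): "thanks",
}

def phonemes_to_words(phoneme_stream):
    """Greedily match runs of phonemes against known pronunciations."""
    words, start = [], 0
    while start < len(phoneme_stream):
        for end in range(len(phoneme_stream), start, -1):
            chunk = tuple(phoneme_stream[start:end])
            if chunk in PRONUNCIATIONS:
                words.append(PRONUNCIATIONS[chunk])
                start = end
                break
        else:
            start += 1  # skip an unrecognized phoneme
    return words

decoded = ["HH", "AH", "L", "OW", "W", "ER", "L", "D"]
print(phonemes_to_words(decoded))  # ['hello', 'world']
```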

Unlike the UCSF model, however, the Stanford brain-computer interface shares the resulting words on a computer screen, rather than broadcasting them through a lifelike digital avatar.

After four months of twice-weekly, four-hour training sessions with the patient—which began about a month after the implantation surgery in late March 2022—the AI was able to translate her thoughts at a rate of 62 words per minute. That pace has since approached 160 words per minute, roughly the rate of natural spoken English conversation, according to a Stanford release Wednesday.

However, accuracy falls as the vocabulary grows: the error rate rose from 9.1% when the participant’s test sentences were limited to a 50-word vocabulary to nearly 24% on a 125,000-word vocabulary. The technology is still only in the proof-of-concept phase and not yet ready for its commercial debut, according to Frank Willett, Ph.D., a lead author of the study, but it still represents a “big advance” toward restoring fast-paced communication to patients who have lost the ability to speak.
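Error rates like these are typically reported as word error rate: the number of word-level mistakes (substitutions, insertions and deletions) divided by the number of words in the intended sentence. The article does not spell out the metric, so the short Python sketch below, with invented example sentences, is only meant to illustrate how such a figure is usually computed.

```python
# Minimal word-error-rate (WER) calculation: edit distance between the
# intended and decoded word sequences, divided by the number of intended
# words. The example sentences are invented for illustration.

def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table of edit distances between prefixes.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dist[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dist[i - 1][j] + 1
            insertion = dist[i][j - 1] + 1
            dist[i][j] = min(substitution, deletion, insertion)
    return dist[len(ref)][len(hyp)] / len(ref)

intended = "i would like a glass of water"
decoded = "i would like a class of water"   # one substituted word
print(f"WER: {word_error_rate(intended, decoded):.1%}")  # 14.3%
```

On that scale, a 9.1% error rate means roughly one wrong word in every 11, while 24% is closer to one in four.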

“Imagine how different conducting everyday activities like shopping, attending appointments, ordering food, going into a bank, talking on a phone, expressing love or appreciation—even arguing—will be when nonverbal people can communicate their thoughts in real time,” the participant said in the release via email.