At the age of 30, Ann suffered a brainstem stroke that left her severely paralyzed. She lost control of all the muscles in her body and was unable even to breathe. It came on suddenly one afternoon, for reasons that are still mysterious.

For the next five years, Ann went to bed each night afraid she would die in her sleep. It took years of physical therapy before she could move her facial muscles enough to laugh or cry. Still, the muscles that would have allowed her to speak remained immobile.

“Overnight, everything was taken from me,” Ann wrote, using a device that enables her to type slowly on a computer screen with small movements of her head. “I had a 13-month-old daughter, an 8-year-old stepson and 26-month-old marriage.”

Ann, the participant in the study. Photo by Noah Berger

Today, Ann is helping researchers at UC San Francisco and UC Berkeley develop new brain-computer technology that could one day allow people like her to communicate more naturally through a digital avatar that resembles a person.

It is the first time that either speech or facial expressions have been synthesized from brain signals. The system can also decode these signals into text at nearly 80 words per minute, a vast improvement over the 14 words per minute that her current communication device delivers.

Edward Chang, MD, chair of neurological surgery at UCSF, has worked on the technology, known as a brain-computer interface, or BCI, for more than a decade. He hopes this latest research breakthrough, published Aug. 23, 2023, in Nature, will lead in the near future to an FDA-approved system that enables speech from brain signals.

“Our goal is to restore a full, embodied way of communicating, which is the most natural way for us to talk with others,” said Chang, who is a member of the UCSF Weill Institute for Neurosciences and the Jeanne Robertson Distinguished Professor. “These advancements bring us much closer to making this a real solution for patients.”

Ann’s work with UCSF neurosurgeon Edward Chang, MD, and his team is helping advance the development of devices that can give a voice to people unable to speak. Video by Pete Bell


Decoding the signals of speech

Ann was a high school math teacher in Canada before her stroke in 2005. In 2020, she described her life since the stroke in a paper for a psychology class, painstakingly typing it out letter by letter.

“Locked-in syndrome, or LIS, is just like it sounds,” she wrote. “You’re fully cognizant, you have full sensation, all five senses work, but you are locked inside a body where no muscles work. I learned to breathe on my own again, I now have full neck movement, my laugh returned, I can cry and read and over the years my smile has returned, and I am able to wink and say a few words.”

As she recovered, she realized she could use her own experiences to help others, and she now aspires to become a counselor in a physical rehabilitation facility.

“I want patients there to see me and know their lives are not over now,” she wrote. “I want to show them that disabilities don’t need to stop us or slow us down.”

She learned about Chang’s study in 2021 after reading about a paralyzed man named Pancho, who helped the team translate his brain signals into text as he attempted to speak. He had also experienced a brainstem stroke many years earlier, and it wasn’t clear whether his brain could still signal the movements for speech. It’s not enough just to think about something; a person has to actually attempt to speak for the system to pick it up. Pancho became the first person living with paralysis to demonstrate that it was possible to decode brain signals for attempted speech into full words.



“Our goal is to restore a full, embodied way of communicating, which is really the most natural way for us to talk with others.”

Edward Chang, MD, chair of neurological surgery at UCSF


With Ann, Chang’s team attempted something even more ambitious: decoding her brain signals into the richness of speech, along with the movements that animate a person’s face during conversation.

To do this, the team implanted a paper-thin rectangle of 253 electrodes onto the surface of her brain over areas they previously discovered were critical for speech. The electrodes intercepted the brain signals that, if not for the stroke, would have gone to muscles in Ann’s lips, tongue, jaw and larynx, as well as her face. A cable, plugged into a port fixed to Ann’s head, connected the electrodes to a bank of computers.
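For a sense of what that bank of computers receives, here is a minimal sketch of how a continuous multichannel recording might be cut into short, overlapping windows for downstream decoding. The 253-channel count comes from the array described above; the sampling rate, window length, hop size and use of NumPy are illustrative assumptions, not details from the study.

```python
# Illustrative only: shaping a 253-channel neural stream into analysis windows.
# Sampling rate, window size, and hop are assumed values, not the study's.
import numpy as np

N_CHANNELS = 253              # electrodes in the implanted array
FS = 1000                     # assumed sampling rate in Hz
WINDOW_S, STEP_S = 0.5, 0.1   # assumed 500 ms windows with a 100 ms hop

def make_windows(stream: np.ndarray) -> np.ndarray:
    """Cut a (samples, channels) stream into overlapping (window, channels) blocks."""
    win, step = int(WINDOW_S * FS), int(STEP_S * FS)
    starts = range(0, stream.shape[0] - win + 1, step)
    return np.stack([stream[s:s + win] for s in starts])

# Ten seconds of stand-in data in place of a real recording.
stream = np.random.default_rng(0).normal(size=(10 * FS, N_CHANNELS))
windows = make_windows(stream)
print(windows.shape)  # (n_windows, 500, 253)
```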

For weeks, Ann worked with the team to train the system’s artificial intelligence algorithms to recognize her unique brain signals for speech. This involved repeating different phrases from a 1,024-word conversational vocabulary over and over again until the computer recognized the brain activity patterns associated with all the basic sounds of speech.
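At its core, this training step is supervised learning: windows of recorded neural activity are paired with the speech sounds Ann was attempting to produce, and a model learns to tell them apart. The sketch below illustrates that idea only; the simulated features, the handful of phoneme labels, and the scikit-learn classifier standing in for the study’s own models are all assumptions.

```python
# Minimal sketch of training a phoneme classifier on neural features.
# Assumptions (not from the study): features are pre-extracted per time
# window, labels are phoneme strings, and a simple linear model suffices.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

N_CHANNELS = 253                     # electrodes in the implanted array
N_WINDOWS = 5000                     # hypothetical number of labeled windows
PHONEMES = ["HH", "AH", "L", "OW"]   # toy subset of the 39 English phonemes

rng = np.random.default_rng(0)
X = rng.normal(size=(N_WINDOWS, N_CHANNELS))   # stand-in neural features
y = rng.choice(PHONEMES, size=N_WINDOWS)       # stand-in phoneme labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = LogisticRegression(max_iter=1000)  # placeholder for the study's decoder
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```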


Chang implanted a thin rectangle of electrodes on the surface of Ann’s brain to pick up signals sent to speech muscles when Ann tries to talk. Illustration by Ken Probst


The electrodes were placed over areas of the brain the team previously discovered were critical for speech. Photo by Todd Dubnicoff


Ann worked with the team on training the AI algorithm to recognize her brain signals associated with phonemes, the sub-units of speech that form spoken words. Photo by Noah Berger

“It was exciting to see her go from, ‘We’re going to just try doing this,’ and then seeing it happen quicker than probably anyone thought,” said Ann’s husband, Bill, who traveled with her from Canada to be with her during the study. “It seems like they’re pushing each other to see how far they can go with this.”

Rather than train the AI to recognize whole words, the researchers created a system that decodes words from smaller components called phonemes. These are the sub-units of speech that form spoken words in the same way that letters form written words. “Hello,” for example, contains four phonemes: “HH,” “AH,” “L” and “OW.”

Using this approach, the computer only needed to learn 39 phonemes to decipher any word in English. This both enhanced the system’s accuracy and made it three times faster.
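A toy example shows why a 39-phoneme inventory is enough: once a phoneme sequence is recognized, a pronunciation lexicon can map it back to words. The tiny lexicon and greedy matching below are purely illustrative; real systems pair a much larger dictionary with a language model to rank candidate words.

```python
# Toy illustration: decoding words from phoneme sequences via a lexicon.
# The lexicon is a made-up sample, not the study's 1,024-word vocabulary.
LEXICON = {
    ("HH", "AH", "L", "OW"): "hello",
    ("HH", "AW"): "how",
    ("AA", "R"): "are",
    ("Y", "UW"): "you",
}

def decode(phonemes):
    """Greedily match the longest lexicon entry at each position."""
    words, i = [], 0
    while i < len(phonemes):
        for length in range(len(phonemes) - i, 0, -1):
            chunk = tuple(phonemes[i:i + length])
            if chunk in LEXICON:
                words.append(LEXICON[chunk])
                i += length
                break
        else:
            i += 1  # skip an unrecognized phoneme
    return " ".join(words)

print(decode(["HH", "AH", "L", "OW", "HH", "AW", "AA", "R", "Y", "UW"]))
# -> "hello how are you"
```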

“The accuracy, speed and vocabulary are crucial,” said Sean Metzger, who developed the text decoder with Alex Silva, both graduate students in the joint Bioengineering Program at UC Berkeley and UCSF. “It’s what gives Ann the potential, in time, to communicate almost as fast as we do, and to have much more naturalistic and normal conversations.”

Giving Mom her voice back

Ann’s 18-year-old daughter knows “mom’s voice” as a computerized voice with a British accent.

The BRAVO3 team recreated Ann’s voice using language-learning AI and footage of her laugh-inducing wedding speech from 2005.


Adding a face and a voice

To synthesize Ann’s speech, the team devised an algorithm that they personalized to sound like her voice before the injury, using a recording of Ann speaking at her wedding.

“My brain feels funny when it hears my synthesized voice,” she wrote in answer to a question. “It’s like hearing an old friend.”

She looks forward to the day when her daughter – who only knows the impersonal, British-accented voice of her current communication device – can hear it too.

“My daughter was 1 when I had my injury, it’s like she doesn’t know Ann … She has no idea what Ann sounds like.”

“I want patients ... to see me and know that their lives are not over now. I want to show them that disabilities don’t need to stop us or slow us down.”

The team animated Ann’s avatar with the help of software that simulates and animates muscle movements of the face, developed by Speech Graphics, a company that makes AI-driven facial animation. The researchers created customized machine-learning processes that allowed the company’s software to mesh with the signals sent from Ann’s brain as she was trying to speak and convert them into movements on her avatar’s face: the jaw opening and closing, the lips protruding and pursing, and the tongue moving up and down, as well as the facial movements for happiness, sadness and surprise.
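Conceptually, this final step maps decoded articulatory and emotional signals onto the avatar’s facial controls frame by frame. The sketch below illustrates such a mapping in the abstract; the parameter names, ranges and blendshape targets are invented for the example and do not reflect Speech Graphics’ software or the study’s interface.

```python
# Hypothetical mapping from decoded articulator signals to avatar blendshape
# weights, evaluated once per animation frame. All names and ranges here are
# illustrative; they do not come from the study or Speech Graphics.
from dataclasses import dataclass

@dataclass
class ArticulatorFrame:
    jaw_open: float       # decoded jaw aperture, 0..1
    lip_protrude: float   # decoded lip protrusion, 0..1
    tongue_height: float  # decoded tongue height, 0..1
    valence: float        # decoded emotional valence, -1 (sad) .. 1 (happy)

def to_blendshapes(frame: ArticulatorFrame) -> dict:
    def clamp(x: float) -> float:
        return max(0.0, min(1.0, x))
    return {
        "JawOpen": clamp(frame.jaw_open),
        "LipPucker": clamp(frame.lip_protrude),
        "TongueUp": clamp(frame.tongue_height),
        "MouthSmile": clamp(frame.valence),    # positive valence -> smile
        "MouthFrown": clamp(-frame.valence),   # negative valence -> frown
    }

print(to_blendshapes(ArticulatorFrame(0.6, 0.2, 0.4, 0.7)))
```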

“We’re making up for the connections between her brain and vocal tract that have been severed by the stroke,” said Kaylo Littlejohn, a graduate student working with Chang and Gopala Anumanchipalli, PhD, a professor of electrical engineering and computer sciences at UC Berkeley. “When Ann first used this system to speak and move the avatar’s face in tandem, I knew that this was going to be something that would have a real impact.”

An important next step for the team is to create a wireless version that would not require Ann to be physically connected to the BCI.

“Giving people like Ann the ability to freely control their own computers and phones with this technology would have profound effects on their independence and social interactions,” said co-first author David Moses, PhD, an adjunct professor in neurological surgery.

For Ann, helping to develop the technology has been life changing.

“When I was at the rehab hospital, the speech therapist didn’t know what to do with me,” she wrote in answer to a question. “Being a part of this study has given me a sense of purpose, I feel like I am contributing to society. It feels like I have a job again. It’s amazing I have lived this long; this study has allowed me to really live while I’m still alive!”

Clinical research coordinator Max Dougherty connects the electrodes resting on Ann’s brain to the computer that translates her attempted speech into the spoken words and facial movements of an avatar. Photo by Noah Berger