University of California San Francisco

Unraveling the Ethics of New Neurotechnologies

By Nicholas Weiler

In those of us lucky enough to have intact speech, the boundary between thinking and speaking is almost imperceptible. So when scientists talk about designing technology to decode the brain activity underlying speech, it is easy to think they are talking about reading people’s minds, with all the serious ethical concerns that would imply.

In reality, there is a huge gap, both scientific and technological, that makes any sinister attempt to intrude on a person’s inner thoughts virtually impossible, while decoding what they are trying to say out loud – a clinically urgent need for people with paralysis – is merely very hard.

“I have no interest in developing a technology to find out what people are thinking, even if it were possible,” said Eddie Chang, MD, a professor of neurosurgery, Bowes Biomedical Investigator, and member of the Weill Institute for Neurosciences at UCSF. “But if someone wants to communicate and can’t, I think we have a responsibility as scientists and clinicians to try to restore that most fundamental human ability.”

Scientists currently know little more than the rough boundaries of the brain real estate responsible for formulating our thoughts, and virtually nothing about how patterns of electrical signals inside our skulls correspond to particular words or ideas contained in those thoughts.

But once we decide to express our thoughts to others, things become much simpler: our ineffable inner monologue must be translated into a well-defined set of muscle movements – whether that’s by moving the lips, tongue, jaw and larynx to form our breath into words, or by controlling the movement of our fingers as we type on a keyboard or touch screen – all of which are controlled by the motor cortex, one of the best-studied parts of the brain.

In many people with paralysis, the brain’s language systems are fully intact all the way to this final output stage, but the nerves that actually send the electrical signals to tell the muscles of the vocal tract to speak or the fingers to type have been severed.

[Photo: Eddie Chang, MD]

Scientists have already been successful in designing brain implants that allow paralyzed people to control robotic limbs with their minds: by hooking up the implant to the brain’s motor cortex, the patient can learn to control the robotic limb as if it were their own. Now researchers hope to take a similar approach for restoring communication abilities to people with paralysis – by intercepting the messages the brain is still trying to send to the non-functional vocal tract, and instead using them to control an external communication device.

Still, it is never too early to begin thinking about how such technologies may evolve in the future, and the ethical concerns they may raise, says UCSF neurologist and neuroethicist Winston Chiong, MD, PhD.

“Right now it will be such a major achievement to enable someone without a voice to express themselves again that it’s hard to worry about more problematic potential future applications of these techniques,” Chiong said. “But I would say this is exactly the kind of question we should be thinking about now so that we can make informed choices about how this technology should be developed in the future.”

Chiong, a behavioral neurologist at the UCSF Memory and Aging Center who also studies the neuroscience of decision making, has long studied the ethics of emerging medical technologies. He serves on the Neuroethics Working Group of the National Institutes of Health BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies) and the American Academy of Neurology’s Ethics, Law and Humanities Committee.

[Photo: Winston Chiong, MD, PhD]

“Even though some of these technologies are not here yet, it’s important to integrate these kinds of ethical concerns as devices are developed,” Chiong said. “Neuroethics is becoming a priority of the NIH and other funding agencies who recognize that we don’t want to get far along in the development of some new device, and then belatedly realize that it poses serious ethical problems.”

Since 2017, Chiong and Chang have led a collaborative neuroethics research project funded by the NIH BRAIN Initiative to explore the ethical issues that arise from the novel brain technologies whose development the initiative was created to support.

“In unearthing these ethical issues, we try as much as possible to get out of our armchairs and actually observe how people are interacting with these new technologies. We interview everyone from patients and family members to clinicians and researchers,” Chiong said. “We also work with philosophers, lawyers, and others with experience in biomedicine, as well as anthropologists, sociologists and others who can help us understand the clinical challenges people are actually facing as well as their concerns about new technologies.”

Some of the top issues on Chiong’s mind include ensuring patients understand how the data recorded from their brains are being used by researchers; protecting the privacy of those data; and determining what kind of control patients will ultimately have over their brain data.

“As with all technology, ethical questions about neurotechnology are embedded not just in the technology or science itself, but also in the social structure in which the technology is used,” Chiong added. “These questions are not just the domain of scientists, engineers, or even professional ethicists, but are part of a larger societal conversation we’re beginning to have about the appropriate applications of technology and personal data, and about when it’s important for people to be able to opt out or say no.”

Related story: Team IDs Spoken Words and Phrases in Real Time from Brain’s Speech Signals