Sun. Nov 28th, 2021

Researchers in California reported Wednesday that they had developed and successfully tested an experimental brain implant that translates brain signals into words on a computer screen.

The achievement, described in a paper published in the New England Journal of Medicine, marks a step toward technology that may one day help people speak by thinking. It also offers a glimmer of hope for the thousands of people who each year lose the ability to speak as a result of injury or illness.

Yet the limitations of the so-called speech neuroprosthesis indicate that brain-computer interface technology—in which tiny electrical signals from the brain are converted into actions in the physical world like speaking, typing or controlling a computer cursor—remains in its infancy. In recent years the technology has drawn the attention of academic scientists as well as tech companies that hope to commercialize it, including Elon Musk’s Neuralink Corp., Kernel and Facebook Inc.

Facebook is a sponsor of the new research and said in a blog post that it was eager for the development of a noninvasive, wearable device that could allow people to type by thinking.

This UCSF illustration shows the placement of the electrode on the speech motor cortex and the head stages used to connect the electrode to the computer. Illustration: Ken Probst/UCSF

To test their neuroprosthesis, the University of California, San Francisco researchers enlisted the help of a man in his 30s who had lost the ability to speak as a result of paralysis caused by a severe stroke suffered more than 15 years ago. The man, who now communicates by using a cap-worn pointer to tap out individual letters on a screen, agreed to have a small rectangular array of electrodes surgically attached to the outer surface of his brain.

Over the course of 81 weeks and in 50 separate sessions, the researchers attached a computer to the array to record the man’s brain activity as he observed individual words displayed on a screen and imagined uttering them aloud. The researchers said in the paper that they could accurately identify the word the man was saying 47% of the time.

Dr. David Moses, a postdoctoral scientist at the university and a co-author of the new paper, conducts a trial session with the participant. Photo: Todd Dubnicoff/UCSF

The accuracy rose to 76% when the scientists incorporated word-prediction algorithms similar to the auto-suggest feature of email and word-processing programs. The study was limited to a vocabulary of 50 words—a tiny fraction of the many thousands of words that make up the vocabularies of elementary-school students.
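The kind of word-prediction step described above can be illustrated with a small sketch. This is not the UCSF team's actual method; it is a hypothetical example showing how combining a noisy decoder's word probabilities with simple language-model context (here, made-up bigram statistics over a stand-in vocabulary) can flip the decoder's top guess to a more plausible word:

```python
# Hypothetical sketch: rescoring a noisy neural decoder's output with
# simple bigram language-model context. All probabilities are invented
# for illustration; the real system's models are far more sophisticated.

# Stand-in for the study's 50-word vocabulary.
VOCAB = ["i", "am", "thirsty", "good", "hello"]

# Pretend decoder output: P(word | brain signal) for the next word.
# Alone, the decoder would (wrongly) pick "am".
decoder_probs = {"i": 0.10, "am": 0.35, "thirsty": 0.30,
                 "good": 0.15, "hello": 0.10}

# Pretend bigram statistics: P(next word | previous word).
bigram_probs = {
    ("am", "thirsty"): 0.60,
    ("am", "good"): 0.30,
    ("am", "hello"): 0.01,
    ("am", "i"): 0.01,
    ("am", "am"): 0.01,
}

def rescore(prev_word, decoder_probs, bigram_probs):
    """Weight each decoder probability by language-model context,
    then renormalize so the scores form a probability distribution."""
    scores = {}
    for word, p in decoder_probs.items():
        lm = bigram_probs.get((prev_word, word), 0.05)  # smoothing for unseen pairs
        scores[word] = p * lm
    total = sum(scores.values())
    return {word: s / total for word, s in scores.items()}

posterior = rescore("am", decoder_probs, bigram_probs)
best = max(posterior, key=posterior.get)
# After "am", the context-weighted best guess is "thirsty",
# even though the raw decoder preferred "am".
```

The design point is that the decoder and the language model each contribute evidence: a word the decoder is only moderately confident about can win if it is far more likely given the preceding words, which is the same intuition behind the auto-suggest comparison above.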

“To our knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralyzed and cannot speak,” said Dr. Eddie Chang, a neurosurgeon at the university and the paper’s senior author. “It shows strong promise to restore communication by tapping into the brain’s natural speech machinery.”

In addition to demonstrating that the brain region responsible for speech continues to function even years after the ability to speak has been lost, the new research shows that computers can be taught to decode full words from brain activity and not just letters, said Amy Orsborn, an assistant professor of bioengineering at the University of Washington, who wasn’t involved in the research.

Devices capable of doing that could one day speed communication for people who have lost the ability to speak, she said. Many of these people type out words letter by letter on assistive devices—as does the man involved in the new research.

Yet Dr. Orsborn and other experts said the system’s high error rate, its limited vocabulary and the large amount of time required to train the system to recognize the imagined words are among the reasons that such technology has a long way to go before a practical device is available.

Other researchers have been able to translate brain signals to computer text, but those efforts have mostly generated individual letters rather than full words. And while Dr. Chang and his team had previously demonstrated the ability to translate brain signals into words, they did so with test subjects who could speak, making it easier to teach a computer the brain waves associated with specific words.

Researchers in Dr. Eddie Chang’s lab at UCSF’s Mission Bay campus. Photo: Noah Berger

The new research comes two months after Stanford researchers reported that they had developed and successfully demonstrated a similar system that enabled a man with a paralyzed hand to “type” 90 characters a minute with 94% accuracy, and 99% with the addition of word-prediction algorithms. The system, which used electrodes implanted within the brain rather than on its surface, was described in a paper published in May in the journal Nature.

The University of California researchers didn’t make the man or his family available for comment, saying he wished to remain anonymous. The speech neuroprosthesis he used in the study is an experimental device and not something the man can use daily.

But he continues to participate in the ongoing research—one aim of which is to expand the number of words that can be used—and seems to enjoy the sessions and to take pride in his involvement. Dr. David Moses, a postdoctoral scientist at the university and a co-author of the new paper, said that the man would giggle and tremble with apparent delight when the computer displayed his words correctly.

“He feels very fulfilled,” Dr. Moses said. “He gets a lot of joy from that, that he’s contributing in his own special way.”

Write to Rolfe Winkler at rolfe.winkler@wsj.com

Copyright ©2021 Dow Jones & Company, Inc. All Rights Reserved.
