
Tiny sensors allow this ALS patient to communicate

 
Frank Willett operates software that translates Pat Bennett's attempts at speech – recorded by sensors in her brain – into words on a screen. (photo credit: STEVE FISCH)

At 62 words per minute, drawing on a vocabulary of 125,000 words, brain sensor implants helped interpret a woman's intended speech and display it on a screen.

A year and a half ago, a neurosurgeon at Stanford University in California placed two tiny sensors, each as small as a baby aspirin, in the brain of Pat Bennett, now 68, a former human-resources director and onetime horseback rider who also jogged daily.

In 2012, she was diagnosed with amyotrophic lateral sclerosis (ALS or Lou Gehrig’s disease), a progressive neurodegenerative disorder that attacks neurons controlling movement, causing physical weakness and eventual paralysis and death.

The sensors, components of an intracortical brain-computer interface (iBCI), were implanted in two separate regions – both of them involved in speech production. Together with state-of-the-art decoding software, they are designed to translate the brain activity accompanying attempts at speech into words on a screen.

About a month after the surgery, Stanford scientists began twice-weekly research sessions to train the software that interpreted her attempted speech. After four months, Bennett's attempted utterances were being converted into words on a computer screen at 62 words per minute – more than three times as fast as the previous record for BCI-assisted communication.


“These initial results have proven the concept, and eventually technology will catch up to make it easily accessible to people who cannot speak,” said Dr. Jaimie Henderson, the surgeon who performed the implants. “Our brains remember how to formulate words even if the muscles responsible for saying them out loud are incapacitated. A brain-computer hookup is making the dream of restoring speech a reality.”

The devices transmit signals from a couple of speech-related regions in Bennett’s brain to state-of-the-art software that decodes her brain activity and converts it to text displayed on a computer screen.

ALS took Pat Bennett's ability to speak. (credit: STEVE FISCH)

How do ALS symptoms first show up?

Usually, ALS first shows up at the body's periphery – arms and legs, hands and fingers. For Bennett, the deterioration began not in her spinal cord, as is typical, but in her brain stem. She can still move around, dress herself and use her fingers to type, albeit with increasing difficulty. But she can no longer use the muscles of her lips, tongue, larynx, and jaw to clearly enunciate the phonemes – units of sound, such as “sh,” that are the building blocks of speech. Although her brain can still formulate directions for generating those phonemes, her muscles can’t carry out the commands.

“We’ve shown you can decode intended speech by recording activity from a very small area on the brain’s surface,” said Henderson, who was co-senior author of a paper describing the results published in Nature.


Dr. Frank Willett, a Howard Hughes Medical Institute scientist affiliated with the Neural Prosthetics Translational Lab, which Henderson helped establish in 2009, shares lead authorship of the study with graduate students Erin Kunz and Chaofei Fan. Electrical engineering Prof. Krishna Shenoy, Henderson’s co-senior author, died before the study was published. The team published their findings in the prestigious journal Nature under the title “A high-performance speech neuroprosthesis.”

In 2021, Henderson, Shenoy and Willett were co-authors of a study published in Nature describing their success in converting a paralyzed person’s imagined handwriting into text on a screen using an iBCI, attaining a speed of 90 characters, or 18 words, per minute – until now a world record for an iBCI-related method.


In 2021, Bennett learned about Henderson and Shenoy’s work and volunteered to participate in the clinical trial. The sensors Henderson implanted in Bennett’s cerebral cortex – the brain’s outermost layer – are square arrays of tiny silicon electrodes. Each array contains 64 electrodes, arranged in an 8-by-8 grid and spaced apart from one another by a distance of about half the thickness of a credit card. The electrodes penetrate the cerebral cortex to a depth roughly equaling that of two stacked quarters. The implanted arrays are attached to fine gold wires that exit through pedestals screwed to the skull and are then hooked up by cable to a computer.
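As back-of-the-envelope arithmetic on that setup (the time-binning numbers below are assumptions for illustration, not published specifications), two 8-by-8 arrays yield 128 recording channels, and decoders typically consume their activity as a time-binned feature matrix:

```python
import numpy as np

# Illustrative arithmetic for the implant geometry described above:
# two 8-by-8 arrays give 128 recording electrodes in total.
n_arrays = 2
grid_rows, grid_cols = 8, 8
n_channels = n_arrays * grid_rows * grid_cols
print(n_channels)  # 128

# Spike activity is typically binned over time before decoding, giving a
# (time_bins, channels) feature matrix. The 20 ms bins over a one-second
# window here are an assumed example, not the study's published setting.
bin_ms, window_ms = 20, 1000
time_bins = window_ms // bin_ms
features = np.zeros((time_bins, n_channels))
print(features.shape)  # (50, 128)
```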

An artificial-intelligence algorithm receives and decodes electronic information emanating from Bennett’s brain, eventually teaching itself to distinguish the distinct brain activity associated with her attempts to formulate each of the 39 phonemes that compose spoken English. It feeds its “best guess” about the sequence of Bennett’s attempted phonemes into a sophisticated autocorrect system that converts the streams of phonemes into the sequence of words they represent.
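To make that two-stage pipeline concrete, here is a minimal, hypothetical Python sketch. Everything in it is an assumption for illustration – a plain linear readout stands in for the trained neural-network decoder, and a greedy dictionary lookup stands in for the statistical "autocorrect" language model – not the study's actual code:

```python
import numpy as np

PHONEMES = [
    "AA", "AE", "AH", "AO", "AW", "AY", "B", "CH", "D", "DH", "EH", "ER",
    "EY", "F", "G", "HH", "IH", "IY", "JH", "K", "L", "M", "N", "NG",
    "OW", "OY", "P", "R", "S", "SH", "T", "TH", "UH", "UW", "V", "W",
    "Y", "Z", "ZH",
]  # the 39 phonemes of spoken English (ARPAbet labels)

def decode_phonemes(features, readout):
    """Stage 1: map binned neural features (time_bins x channels) to the
    most likely phoneme per time bin. Greedy argmax is used for clarity;
    the study's recurrent-network decoder emits phoneme probabilities."""
    logits = features @ readout            # (time_bins, 39)
    return [PHONEMES[i] for i in logits.argmax(axis=1)]

def phonemes_to_words(stream, lexicon):
    """Stage 2: greedily match the longest run of phonemes to a word,
    a toy stand-in for the language-model 'autocorrect' stage."""
    words, i = [], 0
    while i < len(stream):
        for j in range(len(stream), i, -1):       # longest match first
            if tuple(stream[i:j]) in lexicon:
                words.append(lexicon[tuple(stream[i:j])])
                i = j
                break
        else:
            i += 1                                # drop an unmatched phoneme
    return " ".join(words)

# Toy usage with an invented two-word lexicon: "HH AY M AA M" -> "hi mom".
lexicon = {("HH", "AY"): "hi", ("M", "AA", "M"): "mom"}
print(phonemes_to_words(["HH", "AY", "M", "AA", "M"], lexicon))
```

In the real system, the decoder emits phoneme probabilities rather than hard choices, and the language model scores whole word sequences instead of matching a fixed lexicon.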

To teach the algorithm to recognize which brain-activity patterns were connected with which phonemes, Bennett took part in 25 training sessions of four hours each during which she tried to repeat sentences chosen randomly from a large data set consisting of samples of conversations among people talking on the phone. As she tried to recite each sentence, Bennett’s brain activity – translated by the decoder into a phoneme stream and then assembled into words by the autocorrect system – would be displayed on the screen below the original. Then a new sentence would appear on the screen. She repeated 260 to 480 sentences per training session, and the system kept improving as it became familiar with Bennett’s brain activity during her attempts to speak.
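A rough sketch of that closed-loop routine, with loud caveats: `StubDecoder`, `record_attempt`, and the three-sentence `PHONE_CORPUS` below are placeholders invented for illustration, while the actual system trained a recurrent neural network on sentences drawn from telephone-conversation corpora.

```python
import random

# All names here are illustrative stand-ins, not the study's code.
PHONE_CORPUS = [  # assumed stand-in for the phone-conversation sentence set
    "well i think that's right",
    "we talked about it on the phone",
    "it depends on the weather",
]

class StubDecoder:
    """Placeholder for the trained neural-network decoder."""
    def __init__(self):
        self.examples_seen = 0

    def transcribe(self, features):
        # Real decoder: neural features -> phonemes -> autocorrected words.
        return "<decoded text>"

    def update(self, features, reference_sentence):
        # Real decoder: a gradient step on the labeled (neural, text) pair.
        self.examples_seen += 1

def record_attempt(sentence):
    """Placeholder for the binned neural features recorded while the
    participant attempts to speak `sentence` aloud."""
    return [[0.0] * 128]  # (time_bins, channels) stub

def run_training_session(decoder, n_sentences=260):
    """One session: prompt a sentence, record the attempt, display the
    decoder's guess below the prompt, then update on the labeled pair."""
    for _ in range(n_sentences):  # 260 to 480 sentences per session
        prompt = random.choice(PHONE_CORPUS)
        features = record_attempt(prompt)
        print(prompt)                        # original sentence on screen
        print(decoder.transcribe(features))  # decoded guess shown below it
        decoder.update(features, prompt)

run_training_session(StubDecoder(), n_sentences=3)
```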

When the vocabulary was expanded to 125,000 words – large enough to compose almost anything she’d want to say – the error rate was 23.8%: far from perfect, but a giant step beyond the prior state of the art.
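For readers wondering what that figure measures: the standard metric is word error rate, the word-level edit distance (substitutions, insertions, deletions) between what the system printed and what was intended, divided by the number of intended words. A minimal implementation, with an invented example sentence:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by the number
    of reference words. A 23.8% WER means roughly one word in four
    needs correction."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution in a four-word sentence -> 25% WER, close to the
# 23.8% reported for the 125,000-word vocabulary.
print(word_error_rate("i want some water", "i want some waiter"))  # 0.25
```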

“This is a scientific proof of concept, not an actual device people can use in everyday life,” Willett said. The device described in this study is licensed for investigative use only and is not yet commercially available. “But it’s a big advance toward restoring rapid communication to people with paralysis who can’t speak.”

“Imagine,” Bennett wrote, “how different conducting everyday activities like shopping, attending appointments, ordering food, going into a bank, talking on a phone, expressing love or appreciation – even arguing – will be when nonverbal people can communicate their thoughts in real time.”
