Speaking (with) your mind: How we can use brain-computer interfaces to communicate without speech

For social creatures such as humans, communication is key. However, certain conditions can make verbal communication difficult. Recent studies have used brain-computer interfaces to translate brain signals into speech, helping speech-impaired patients communicate without needing to speak.

This post is also available in Dutch.

Speech is a key characteristic that sets human communication apart from that of other animals. Our ability to speak lets us share information and binds us together, forming the essence of relationships and societies. However, certain conditions, such as the neurological disorder amyotrophic lateral sclerosis (ALS), can cause severe speech impairments and make communication extremely difficult. Recently, two research teams have taken major steps toward developing brain-computer interfaces (BCIs) that allow patients to overcome these barriers with technology that translates brain signals into speech.

Brain-Computer Interfaces: Mind-reading machines?

Scientists have been using methods like fMRI, EEG, or implanted electrodes to record brain activity for many years. A BCI is a technology that connects such electrical signals in the brain with a device like a computer. For example, a BCI can use “brain-to-text decoding”, meaning it can translate brain activity into sentences. A BCI can’t read somebody’s mind; instead, it uses clever algorithms to learn which brain activity is connected to which action.
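To make that idea concrete, here is a toy sketch in Python, using entirely made-up data and a simple off-the-shelf classifier rather than the methods from the studies below. It shows the core logic of decoding: first learn which activity pattern goes with which word, then apply that mapping to new activity.

```python
# A toy sketch of brain-to-text decoding as a supervised learning problem.
# All data here is synthetic; real BCIs decode far richer neural recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

words = ["hello", "yes", "no", "water"]
n_channels = 32          # pretend electrode channels
n_trials_per_word = 50   # repetitions of each word during calibration

# Simulate calibration data: each word evokes a slightly different pattern.
patterns = rng.normal(size=(len(words), n_channels))
X = np.vstack([
    patterns[i] + 0.5 * rng.normal(size=(n_trials_per_word, n_channels))
    for i in range(len(words))
])
y = np.repeat(np.arange(len(words)), n_trials_per_word)

# "Calibration": learn which activity pattern corresponds to which word.
decoder = LogisticRegression(max_iter=1000).fit(X, y)

# "Use": decode a new, unseen burst of activity into a word.
new_activity = patterns[2] + 0.5 * rng.normal(size=n_channels)
print("Decoded word:", words[decoder.predict(new_activity.reshape(1, -1))[0]])
```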

Video description: Meet Pat, a patient with ALS who can speak at a rate of up to 60 words per minute using a brain-computer interface. 

Connecting neural activity with meaning

This year, not one but two teams of scientists have raised the bar in BCI research, creating powerful tools that can translate brain signals into sentences at close to the rate of normal speech. As you can see in the video, one team helped a patient with ALS called Pat. Pat has electrodes implanted in her brain that directly record activity in the sensorimotor face area, a region involved in moving the face muscles when speaking. The researchers recorded her brain activity while she read sentences to decipher which activity was related to which word. This information can then be fed to a language model (the same kind of technology behind ChatGPT) to translate the brain activity into speech when Pat wants to say something. The other team of researchers used similar techniques to restore speech in a patient following a stroke, even making it possible to turn the translation into audible, artificial speech.
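For a flavor of why the language model matters, here is a minimal, hypothetical sketch, again in Python with invented numbers rather than the papers’ actual models. The neural decoder alone may hesitate between similar words; the language model’s sense of what makes a plausible sentence breaks the tie.

```python
# A hypothetical sketch of combining a neural decoder with a language model.
# Context: the patient has produced "I would like some ..." so far.
import math

# Invented decoder probabilities for the next word (neural evidence alone):
decoder_probs = {"water": 0.40, "walker": 0.45, "waiter": 0.15}

# An invented, tiny language model: how likely each word is to follow the
# context. (Real systems use large neural language models, not lookup tables.)
lm_probs = {"water": 0.30, "walker": 0.01, "waiter": 0.05}

# Combine both sources of evidence in log-space and pick the best word.
scores = {w: math.log(decoder_probs[w]) + math.log(lm_probs[w])
          for w in decoder_probs}
print(max(scores, key=scores.get))  # "water": the language model outvotes
                                    # the decoder's slight preference
```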

The road ahead: The future of BCIs

While these recent studies show the amazing potential that these new techniques have for patients with speech impairments, there is still a long way to go before BCIs become common practice. We still need to figure out how to apply them to different patients and different conditions, and how to make them even more accurate. This sophisticated technology also needs to become available for non-invasive ways of measuring brain activity (like EEG), for patients who, unlike Pat, don’t have implanted electrodes. Still, BCIs are changing how we perceive and facilitate verbal communication. They can empower patients to express themselves without the need for speech, giving them not only words but a voice.

Credits 

Author: Helena Olraun

Buddy: Viola Hollestein

Editor: Maartje Koot

Translation: Eline de Boer

Editor translation: Judith Scholing

Featured image by sudatimages on Shutterstock
