Researchers have successfully identified and translated the brain activity associated with inner speech, the silent conversation people have inside their heads. In a new study published in the journal Cell, a team of scientists demonstrated a system that can decode these silent thoughts on command with an accuracy rate as high as 74 percent, a development that could transform communication for individuals unable to speak audibly.
Brain-computer interface technologies have shown increasing promise for assisting people with severe disabilities. These systems work by interpreting brain signals and translating them into actions, such as moving a robotic arm or typing on a screen. Recent advancements have even enabled brain-computer interfaces to decode attempted speech in people with paralysis, where the system interprets brain activity generated when a person physically tries to form words, even if no sound is produced. While faster than older methods like eye-tracking, this process of attempting speech can still be slow and physically exhausting for users with limited muscle control.
Led by Erin Kunz and Benyamin Meschede-Krasa of Stanford University, the research team investigated a potentially less strenuous alternative: decoding inner speech directly. The idea was to explore if a brain-computer interface could interpret the neural signals of words that are only imagined, without any physical effort to speak them. This approach could offer a more comfortable and perhaps faster way for people with severe speech and motor impairments to communicate.
“If you just have to think about speech instead of actually trying to speak, it’s potentially easier and faster for people,” said Meschede-Krasa.
The study involved four participants who had severe paralysis resulting from either amyotrophic lateral sclerosis or a brainstem stroke. Each participant had microelectrode arrays implanted in their motor cortex, a brain region that plays a key role in controlling speech. The researchers recorded the neural activity from these sensors while asking the participants to perform different tasks. These included both physically attempting to say a set of words out loud and simply imagining saying the same words internally.
Upon analyzing the data, the team found that attempted speech and inner speech activated overlapping regions of the brain. The patterns of neural activity were quite similar for both actions. A notable difference was that the brain signals associated with inner speech were generally weaker in magnitude compared to the signals for attempted speech. This suggests that imagining speech engages many of the same neural circuits as preparing to speak aloud, but at a lower intensity.
Using the collected data from inner speech tasks, the researchers then trained artificial intelligence models to recognize and interpret the patterns for specific imagined words. They demonstrated the system’s capability in a proof-of-concept experiment. The brain-computer interface was able to decode entire sentences that participants imagined speaking. When tested with a large vocabulary of 125,000 words, the system achieved an accuracy rate of up to 74 percent.
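The article does not describe the decoder's architecture, but the basic idea of mapping recorded neural features to words can be illustrated with a deliberately toy sketch. The snippet below trains a simple classifier on synthetic "neural feature" vectors in Python; the vocabulary size, feature count, and model choice are all hypothetical stand-ins, not the authors' pipeline, which handled full sentences and a 125,000-word vocabulary.

```python
# Toy illustration only: classify imagined words from synthetic neural features.
# Nothing here is from the study itself; the real decoder is far more complex.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_WORDS = 5           # hypothetical small vocabulary
TRIALS_PER_WORD = 40  # hypothetical number of imagined-speech trials per word
N_FEATURES = 96       # e.g., one feature per electrode channel (assumed)

# Give each word a distinct mean activity pattern, then add trial-to-trial noise.
word_templates = rng.normal(size=(N_WORDS, N_FEATURES))
X = np.repeat(word_templates, TRIALS_PER_WORD, axis=0) + rng.normal(
    scale=1.0, size=(N_WORDS * TRIALS_PER_WORD, N_FEATURES)
)
y = np.repeat(np.arange(N_WORDS), TRIALS_PER_WORD)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy on the toy data: {clf.score(X_test, y_test):.2f}")
```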
The system also showed an ability to decode thoughts that were not part of the explicit instructions. For instance, when participants were asked to silently count the number of pink circles on a screen, the brain-computer interface was able to decode a sequence of increasing numbers. This finding suggests the technology could potentially interpret spontaneous, unprompted inner speech that occurs naturally during a cognitive task.
“This is the first time we’ve managed to understand what brain activity looks like when you just think about speaking,” Kunz said. “For people with severe speech and motor impairments, BCIs capable of decoding inner speech could help them communicate much more easily and more naturally.”
While the similarity between attempted and inner speech opens up new possibilities for communication, it also raises questions about mental privacy. To address this, the team examined the distinctions between the two types of neural activity more closely. They discovered that while the patterns were similar, they were different enough for a system to reliably tell them apart.
Senior author Frank Willett of Stanford University noted that this distinction could be used to train brain-computer interfaces to specifically ignore inner speech, preventing the system from accidentally outputting a user’s private thoughts.
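One way to picture such a safeguard is a binary "gate" classifier that sits in front of the word decoder and passes signals through only when they look like attempted, rather than inner, speech. The sketch below is an assumption built on the article's description that inner speech produces similar but weaker patterns; it is not the authors' implementation.

```python
# Hedged sketch: gate a word decoder so it ignores inner speech.
# The signal model (inner speech = weaker version of the same pattern) and the
# gating approach are simplifications assumed for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
N_FEATURES, N_TRIALS = 96, 200

base_pattern = rng.normal(size=N_FEATURES)
attempted = 1.0 * base_pattern + rng.normal(scale=0.5, size=(N_TRIALS, N_FEATURES))
imagined = 0.4 * base_pattern + rng.normal(scale=0.5, size=(N_TRIALS, N_FEATURES))

X = np.vstack([attempted, imagined])
y = np.array([1] * N_TRIALS + [0] * N_TRIALS)  # 1 = attempted, 0 = inner speech

gate = LogisticRegression(max_iter=1000).fit(X, y)

def decode_if_attempted(features, word_decoder):
    """Forward features to the word decoder only when the gate labels them
    as attempted speech; otherwise output nothing, keeping inner speech private."""
    if gate.predict(features.reshape(1, -1))[0] == 1:
        return word_decoder(features)
    return None
```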
Building on this, the researchers demonstrated a practical privacy control for users who might want to use inner speech for communication. They developed a password-controlled mechanism that keeps the brain-computer interface from decoding any inner speech until it is intentionally activated.
In their experiment, a user could think of a specific keyword phrase, “chitty chitty bang bang,” to unlock the system. Once that phrase was detected, the device would begin translating inner speech into text. The system recognized this silent password with more than 98 percent accuracy, offering a reliable way for a user to control when their thoughts are being translated.
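The gating logic can be thought of as a small state machine that stays locked until the decoded word stream contains the passphrase. The minimal sketch below assumes the decoder emits one recognized word at a time; the class name and streaming interface are hypothetical, and only the unlock phrase comes from the article.

```python
# Minimal sketch of a password-gated decoding loop (illustrative, not the
# authors' code). Output is suppressed until the silent passphrase appears.
class PasswordGatedDecoder:
    def __init__(self, passphrase="chitty chitty bang bang"):
        self.passphrase = passphrase.split()
        self.buffer = []       # most recent words, compared against the passphrase
        self.unlocked = False

    def feed(self, word):
        """Return the decoded word once unlocked; otherwise suppress it."""
        if self.unlocked:
            return word
        self.buffer.append(word)
        self.buffer = self.buffer[-len(self.passphrase):]
        if self.buffer == self.passphrase:
            self.unlocked = True  # begin translating inner speech from now on
        return None

# Example: nothing is output until the passphrase is completed.
decoder = PasswordGatedDecoder()
stream = "hello there chitty chitty bang bang good morning".split()
print([w for w in stream if decoder.feed(w) is not None])  # ['good', 'morning']
```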
The researchers acknowledge the limitations of their work. The current systems are not yet capable of decoding free-form, unstructured inner speech without making significant errors. The study was also conducted with a small number of participants, and future work will be needed to see how these findings apply to a broader population. The privacy mechanisms demonstrated are initial steps, and further development will be needed as the technology advances.
Despite these limitations, the team believes that more advanced devices with a greater number of sensors and improved algorithms could one day achieve more fluent decoding of inner thoughts. The findings represent a significant step toward developing communication tools that are not only effective but also comfortable and controllable for the user, giving them the power to choose when to speak and when to think privately.
“The future of BCIs is bright,” Willett said. “This work gives real hope that speech BCIs can one day restore communication that is as fluent, natural, and comfortable as conversational speech.”
The study, “Inner speech in motor cortex and implications for speech neuroprostheses,” authored by Erin M. Kunz, Benyamin Abramovich Krasa, Foram Kamdar, Donald T. Avansino, Nick Hahn, Seonghyun Yoon, Akansha Singh, Samuel R. Nason-Tomaszewski, Nicholas S. Card, Justin J. Jude, Brandon G. Jacques, Payton H. Bechefsky, Carrina Iacobacci, Leigh R. Hochberg, Daniel B. Rubin, Ziv M. Williams, David M. Brandman, Sergey D. Stavisky, Nicholas AuYong, Chethan Pandarinath, Shaul Druckmann, Jaimie M. Henderson, and Francis R. Willett.