This Brain Implant Can Read Out Your Inner Monologue


2025-08-18 · Technology
Tom Banks
Good morning, I'm Tom Banks, and this is Goose Pod for you. Today is Monday, August 18th.
Mask
I'm Mask, and we are here to discuss a brain implant that can read your inner monologue. This isn't science fiction anymore; it's reality.
Tom Banks
Let's get started. It's truly remarkable. Scientists have developed a brain-computer interface, or BCI, that can decode a person's silent, inner thoughts and display them as text on a screen in real time. This is a lifeline for people with conditions like ALS.
Mask
It's a massive leap in communication bandwidth. We're talking up to 74% accuracy with a 125,000-word dictionary. The old methods required patients to physically attempt speech, which was slow and exhausting. This bypasses that physical bottleneck entirely.
Tom Banks
Exactly, and that human element is so important. One participant was thrilled he could finally interrupt a conversation again. The lead researcher, Erin Kunz, was driven to this work because her own father had ALS. It’s deeply personal.
Mask
And they've engineered a clever control mechanism. To ensure private thoughts stay private, the user just thinks the phrase "chitty chitty bang bang" to start or stop the device. It's a simple, elegant solution to the most obvious privacy problem.
Tom Banks
This technology certainly didn't appear out of thin air. Its origins go all the way back to the 1970s at UCLA, with a researcher named Jacques Vidal, who coined the term "brain-computer interface." His first experiment was just moving a cursor through a maze.
Mask
That was the slow, foundational work. Progress was incremental for decades. Moving a cursor, then controlling a robot in 1988. The real game-changer was the shift from non-invasive EEG, which is noisy, to implanting electrodes directly into the brain in the mid-90s.
Tom Banks
To put that shift in perspective: non-invasive EEG is like listening to a concert from outside the stadium—you just get a muffled roar. These implants are like putting a microphone on every single musician. That's the level of precision we're talking about now.
Mask
And that precision is accelerated by focused investment. Organizations like DARPA and the NIH poured funding into the BRAIN initiative. That's what takes a scientific concept and turns it into a functional, life-changing technology. It's about applying massive resources to solve the engineering hurdles.
Tom Banks
And it leads to incredible results. One participant in the study, a man with ALS, used the device for just 16 hours and the system was already decoding his thoughts with about 97.5% accuracy. That's the culmination of 50 years of research.
Tom Banks
But, of course, this brings us to the monumental ethical questions. If a device can read the thoughts you want to share, what prevents it from accessing the ones you don't? This is where the discussion around "Neuro Rights" becomes critically important as a safeguard.
Mask
I see privacy as a feature to be engineered, not a barrier to stop progress. The risk of misuse is real, though. Think of workplace surveillance monitoring your attention level. The data is immensely valuable, but the solution is building better, unhackable security.
Tom Banks
But it's more than just a security issue, it's a societal one. What happens when these enhancements are only available to the wealthy? It threatens to create a new, biological divide in society. That's a conflict that technology alone can't solve.
Mask
That's a temporary phase. Every transformative technology starts out expensive and creates an advantage. The goal is to innovate and scale until it's as cheap and accessible as a smartphone. Competition will drive down the cost and level the playing field.
Tom Banks
Looking at the immediate impact, it's incredibly positive for the healthcare sector. Think of the reduction in costs for long-term disability care. It allows people to regain independence and even re-enter the workforce, which is a profound benefit for them and for society.
Mask
True, but the business reality is a huge bottleneck. The neurotech market is projected to hit nearly 40 billion dollars, but the technology is useless if no one can pay for it. Getting insurance companies to reimburse for these devices is the real mountain to climb.
Tom Banks
So, it's another case of our social and financial systems lagging far behind the pace of innovation. The science is here, but the bureaucracy stands in the way, and that, unfortunately, delays the impact for the very people who need this breakthrough right now.
Mask
The future is about integration and enhancement. AI will make these systems learn faster and become almost perfectly accurate. The next frontier is closed-loop systems—not just reading from the brain, but writing information back to it. That's the leap from restoration to augmentation.
Tom Banks
That is a staggering thought. It could provide sensory feedback for prosthetics or even regulate brain activity to treat disorders. But it also takes us right back to those deep ethical questions about what it means to be human when we can rewrite our own minds.
Tom Banks
So, this technology is a double-edged sword: it offers incredible hope for communication while opening a new chapter on privacy and what it means to be human.
Mask
That's the end of today's discussion. Thank you for listening to Goose Pod. See you tomorrow.

## New Brain Implant Reads Inner Speech in Real Time

**News Title:** This Brain Implant Can Read Out Your Inner Monologue
**Publisher:** Scientific American
**Author:** Emma R. Hasson
**Publication Date:** August 14, 2025

This report details a groundbreaking advancement in brain-computer interfaces (BCIs) that allows individuals with severe paralysis to communicate by reading out their "inner speech" – the thoughts they have when they imagine speaking. This new neural prosthetic offers a significant improvement over existing technologies, which often require users to physically attempt to speak.

### Key Findings and Technology:

* **Inner Speech Decoding:** The new system utilizes sensors implanted in the brain's motor cortex, the area responsible for sending motion commands to the vocal tract. This area is also involved in imagined speech, and the researchers have developed a machine-learning model that can interpret these neural signals to decode inner thoughts into words in real time.
* **Improved Communication for Paralysis:** This technology is particularly beneficial for individuals with conditions like amyotrophic lateral sclerosis (ALS) and brain stem stroke, who have limited or no ability to speak.
* **Contrast with Previous Methods:**
    * **Blinking/Muscle Twitches:** Older methods relied on eye movements or small muscle twitches to select words from a screen.
    * **Attempted Speech BCIs:** More recent BCIs require users to physically attempt to speak, which can be slow, tiring, and difficult for those with impaired breathing. This new "inner speech" system bypasses the need for physical speech attempts.
* **Vocabulary Size:** Previous inner speech decoders were limited to a few words. This new device allows participants to access a dictionary of **125,000 words**.
* **Communication Speed:** Participants in the study could communicate at a comfortable conversational rate of approximately **120 to 150 words per minute**, with no more effort than thinking. This is a significant improvement over attempted speech devices, which can be hampered by breathing difficulties and produce distracting noises.
* **Target Conditions:** The technology is designed for individuals whose "idea to plan" stage of speech is functional but whose "plan to movement" stage is broken, a condition known as dysarthria.

### Study Details:

* **Participants:** The research involved **three participants with ALS** and **one participant with a brain stem stroke**, all of whom already had the necessary brain sensors implanted.
* **Publication:** The results of this research were published on Thursday in the journal *Cell*.

### User Experience and Impact:

* **Comfort and Naturalism:** Lead author Erin Kunz of Stanford University highlights the goal of achieving a "naturalistic ability" and comfortable communication for users.
* **Enhanced Social Interaction:** One participant expressed particular excitement about the newfound ability to interrupt conversations, a capability lost with slower communication methods.
* **Personal Motivation:** Erin Kunz's personal experience with her father, who had ALS and lost the ability to speak, drives her research in this field.

### Privacy and Future Considerations:

* **Privacy Safeguard:** A code phrase, "chitty chitty bang bang," was implemented to allow participants to start or stop the transcription process, ensuring private thoughts remain private.
* **Ethical Oversight:** While brain-reading implants raise privacy concerns, Alexander Huth from the University of California, Berkeley, expresses confidence in the integrity of the research groups, noting their patient-focused approach and dedication to solving problems for individuals with paralysis.

### Participant Contribution:

The report emphasizes the crucial role and incredible dedication of the research participants, who volunteered to advance this technology for the benefit of others with paralysis.
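As a rough back-of-envelope (our own arithmetic, not a figure from the article), the reported vocabulary and speed imply an upper bound on information rate, under the unrealistic simplifying assumption that every one of the 125,000 words is equally likely:

```python
import math

# Upper bound on information rate from the reported figures, assuming all
# 125,000 vocabulary words are equally likely. Real English text carries
# far fewer bits per word, so the true rate is lower.
bits_per_word = math.log2(125_000)   # ~16.9 bits
for wpm in (120, 150):
    print(f"{wpm} wpm -> ~{bits_per_word * wpm / 60:.0f} bits/s upper bound")
```

Even this optimistic ceiling of roughly 34 to 42 bits per second underscores how large a jump this is over letter-by-letter selection methods.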

This Brain Implant Can Read Out Your Inner Monologue

Read original at Scientific American

August 14, 2025 | 4 min read

New Brain Device Is First to Read Out Inner Speech

A new brain prosthesis can read out inner thoughts in real time, helping people with ALS and brain stem stroke communicate fast and comfortably.

Image credit: Andrzej Wojcicki/Science Photo Library/Getty Images

After a brain stem stroke left him almost entirely paralyzed in the 1990s, French journalist Jean-Dominique Bauby wrote a book about his experiences—letter by letter, blinking his left eye in response to a helper who repeatedly recited the alphabet.

Today people with similar conditions often have far more communication options. Some devices, for example, track eye movements or other small muscle twitches to let users select words from a screen. And on the cutting edge of this field, neuroscientists have more recently developed brain implants that can turn neural signals directly into whole words.

These brain-computer interfaces (BCIs) largely require users to physically attempt to speak, however—and that can be a slow and tiring process. But now a new development in neural prosthetics changes that, allowing users to communicate by simply thinking what they want to say. The new system relies on much of the same technology as the more common “attempted speech” devices.

Both use sensors implanted in a part of the brain called the motor cortex, which sends motion commands to the vocal tract. The brain activation detected by these sensors is then fed into a machine-learning model to interpret which brain signals correspond to which sounds for an individual user. It then uses those data to predict which word the user is attempting to say.
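To make that pipeline concrete, here is a minimal toy sketch of the three stages just described: windowed neural activity, per-window phoneme scores from a per-user model, then a word prediction against a fixed vocabulary. Every name and number in it is an illustrative assumption; the published system is far more sophisticated.

```python
import numpy as np

# Toy sketch of the decoding pipeline: windowed neural activity is mapped
# to phoneme scores by a per-user model, and a word is then predicted from
# a fixed vocabulary. All names and sizes are illustrative assumptions.

rng = np.random.default_rng(0)

N_CHANNELS = 256   # hypothetical electrode channel count
N_PHONEMES = 40    # rough size of the English phoneme inventory
VOCAB = ["hello", "water", "please", "stop"]   # stand-in for 125,000 words

# Stand-in for the trained per-user model: a linear map from one window
# of neural activity to phoneme scores.
W = rng.normal(size=(N_PHONEMES, N_CHANNELS))

def phoneme_scores(window: np.ndarray) -> np.ndarray:
    """Map one short window of motor-cortex activity to phoneme scores."""
    return W @ window

def word_template(word: str) -> np.ndarray:
    """Deterministic toy phoneme template per word; a real system would
    use a pronunciation dictionary plus a language model here."""
    seed = sum(map(ord, word))
    return np.random.default_rng(seed).normal(size=N_PHONEMES)

def decode_word(windows: list[np.ndarray]) -> str:
    """Average phoneme scores over the word's windows, then pick the
    best-matching vocabulary word."""
    scores = np.stack([phoneme_scores(w) for w in windows]).mean(axis=0)
    return max(VOCAB, key=lambda word: float(scores @ word_template(word)))

# Simulate one imagined word as a few windows of neural activity.
windows = [rng.normal(size=N_CHANNELS) for _ in range(5)]
print(decode_word(windows))   # prints one of the toy vocabulary words
```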

But the motor cortex doesn’t only light up when we attempt to speak; it’s also involved, to a lesser extent, in imagined speech.

The researchers took advantage of this to develop their “inner speech” decoding device and published the results on Thursday in Cell. The team studied three people with amyotrophic lateral sclerosis (ALS) and one with a brain stem stroke, all of whom had previously had the sensors implanted. Using this new “inner speech” system, the participants needed only to think a sentence they wanted to say and it would appear on a screen in real time.

While previous inner speech decoders were limited to only a handful of words, the new device allowed participants to draw from a dictionary of 125,000 words.

[Video caption: A participant uses the inner speech neuroprosthesis. The text above is the cued sentence, and the text below is what is decoded in real time as she imagines speaking the sentence.]

“As researchers, our goal is to find a system that is comfortable [for the user] and ideally reaches a naturalistic ability,” says lead author Erin Kunz, a postdoctoral researcher who is developing neural prostheses at Stanford University. Previous research found that “physically attempting to speak was tiring and that there were inherent speed limitations with it, too,” she says.

Attempted speech devices such as the one used in the study require users to inhale as if they are actually saying the words. But because of impaired breathing, many users need multiple breaths to complete a single word with that method. Attempting to speak can also produce distracting noises and facial expressions that users find undesirable.

With the new technology, the study's participants could communicate at a comfortable conversational rate of about 120 to 150 words per minute, with no more effort than it took to think of what they wanted to say.Like most BCIs that translate brain activation into speech, the new technology only works if people are able to convert the general idea of what they want to say into a plan for how to say it.

Alexander Huth, who researches BCIs at the University of California, Berkeley, and wasn’t involved in the new study, explains that in typical speech, “you start with an idea of what you want to say. That idea gets translated into a plan for how to move your [vocal] articulators. That plan gets sent to the actual muscles, and then they carry it out.”

But in many cases, people with impaired speech aren’t able to complete that first step. “This technology only works in cases where the ‘idea to plan’ part is functional but the ‘plan to movement’ part is broken”—a collection of conditions called dysarthria—Huth says. According to Kunz, the four research participants are enthusiastic about the new technology.

“Largely, [there was] a lot of excitement about potentially being able to communicate fast again,” she says—adding that one participant was particularly thrilled by his newfound potential to interrupt a conversation—something he couldn’t do with the slower pace of an attempted speech device. To ensure private thoughts remained private, the researchers implemented a code phrase: “chitty chitty bang bang.”

When internally spoken by participants, this phrase would prompt the BCI to start or stop transcribing. Brain-reading implants inevitably raise concerns about mental privacy. For now, Huth isn’t concerned about the technology being misused or developed recklessly, speaking to the integrity of the research groups involved in neural prosthetics research.
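As a toy illustration of that gating logic: the code phrase is from the article, but everything else below is our own assumption, including the batch framing (the real device decodes a live stream) and the choice that transcription starts switched off.

```python
CODE_PHRASE = "chitty chitty bang bang"

def gated_transcript(decoded_words: list[str]) -> list[str]:
    """Keep only words between alternating occurrences of the code phrase:
    the first occurrence toggles transcription on, the next toggles it off."""
    text = " ".join(decoded_words)
    segments = text.split(CODE_PHRASE)
    # Segments at odd indices lie between a "start" and a "stop" phrase.
    return " ".join(seg for i, seg in enumerate(segments) if i % 2 == 1).split()

words = ("chitty chitty bang bang i want water "
         "chitty chitty bang bang this stays private").split()
print(gated_transcript(words))   # ['i', 'want', 'water']
```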

“I think they’re doing great work; they’re led by doctors; they’re very patient-focused. A lot of what they do is really trying to solve problems for the patients,” he says, “even when those problems aren’t necessarily things that we might think of,” such as being able to interrupt a conversation or “making a voice that sounds more like them.”

For Kunz, this research is particularly close to home. “My father actually had ALS and lost the ability to speak,” she says, adding that this is why she got into her field of research. “I kind of became his own personal speech translator toward the end of his life since I was kind of the only one that could understand him.

That’s why I personally know the importance and the impact this sort of research can have.” The contribution and willingness of the research participants are crucial in studies like this, Kunz notes. “The participants that we have are truly incredible individuals who volunteered to be in the study not necessarily to get a benefit to themselves but to help develop this technology for people with paralysis down the line.

And I think that they deserve all the credit in the world for that.”
