A real-time system decoded which speaker a listener was attending to and boosted that speaker’s voice, offering proof of concept for smarter hearing technologies that respond to attention rather than just sound.

Study: Real-time brain-controlled selective hearing enhances speech perception in multi-talker environments. Image Credit: OpenAI. Neural Harmony in a Vibrant Crowd. 2026. AI-generated illustration.
In a recent study published in the journal Nature Neuroscience, researchers developed a brain-controlled hearing system that can help people focus on a single voice in noisy environments. They recorded real-time brain activity from patients undergoing neurosurgery and tested the system’s ability to identify the attended speaker and amplify that speaker's voice.
The system consistently improved speech understanding, even in the presence of similar voices and background noise, in participants with self-reported normal hearing. A separate group of listeners with hearing loss both better understood and preferred the system-enhanced audio.
The findings open avenues for the development of brain-guided hearing aids to improve voice recognition and hearing in busy environments like restaurants or parties.
Brain-Controlled Hearing Aid Background
People, especially those with hearing difficulties, often struggle to follow conversations in crowded places, even with hearing aids. This is because regular hearing aids typically amplify all sounds, including background noise.
To address this issue, scientists have developed auditory attention decoding (AAD) technology. This approach detects the voice an individual is listening to and makes that specific voice louder, based on brain activity signals. Previous studies have tested this technology in controlled laboratory conditions; however, whether it can deliver real-time perceptual benefits has remained unclear.
Intracranial EEG Hearing Study Design
In the present study, researchers tested the brain-controlled hearing system in four adults undergoing brain monitoring as part of epilepsy treatment. They recorded from clinically implanted intracranial electroencephalography (iEEG) electrodes covering the speech- and sound-processing regions of participants’ brains. These electrodes allowed the researchers to record high-resolution brain activity while participants listened to conversations. All four participants had self-reported normal hearing.
The team first trained the system offline before testing it in real time. In the training phase, participants listened simultaneously to two spatially separated speech streams, simulating multiple people talking in a closed room. The conversations covered everyday topics. The team increased the task’s difficulty by using similar voices, such as two speakers of the same gender, and added background sounds such as street noise and crowd chatter to further mimic real-world scenarios.
The team instructed the participants to follow only one conversation and ignore the other, and verified compliance by having them press a button upon hearing repeated words in the attended conversation. While participants listened, the team recorded their brain activity. They then trained AAD models to recognize patterns in the brain signals and reconstruct the rhythm, or “speech envelope,” of the attended voice.
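The envelope-reconstruction step described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the linear decoder, and the feature shapes are assumptions made for illustration. The idea is to reconstruct the speech envelope from neural features and label as "attended" whichever candidate stream's envelope correlates best with the reconstruction.

```python
import numpy as np

def decode_attended_stream(neural, decoder, env_a, env_b):
    """Pick the attended stream via stimulus reconstruction (illustrative).

    neural  : (samples, channels) array of iEEG features
    decoder : (channels,) linear weights, assumed trained offline
    env_a, env_b : (samples,) candidate speech envelopes
    Returns the label of the better-matching stream and its correlation.
    """
    recon = neural @ decoder  # reconstructed envelope from brain signals

    def corr(x, y):
        # Pearson correlation between reconstruction and a candidate envelope
        x = x - x.mean()
        y = y - y.mean()
        return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

    r_a, r_b = corr(recon, env_a), corr(recon, env_b)
    return ("A", r_a) if r_a >= r_b else ("B", r_b)
```

In practice such decoders are fit on the offline training data (e.g. by regularized regression) and applied over short sliding windows, which is what makes the real-time phase possible.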
In the online, controlled testing phase, using realistic multi-talker and noise conditions, the technology continuously analyzed participants’ brain signals to identify which speaker they were paying attention to. Upon identifying the speaker, the technology automatically amplified that voice while keeping the overall sound level stable.
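The amplify-while-keeping-level-stable step can be sketched as a simple remixing rule. The function below is a hypothetical illustration, not the study's audio pipeline: it boosts the attended stream by a chosen gain in decibels, then rescales the mixture so its overall RMS level matches the original mix.

```python
import numpy as np

def remix(attended, masker, boost_db=12.0):
    """Boost the attended stream relative to the masker (illustrative),
    then renormalize so the output RMS matches the original mixture."""
    gain = 10 ** (boost_db / 20)          # dB -> linear amplitude gain
    original = attended + masker
    remixed = gain * attended + masker

    def rms(x):
        return np.sqrt(np.mean(x ** 2))

    # Keep the overall sound level stable after boosting
    return remixed * (rms(original) / (rms(remixed) + 1e-12))
```

Renormalizing after the boost is what keeps the overall loudness constant while still shifting the balance toward the attended voice.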
The researchers compared the listening performance before and after the brain-guided enhancements. They then checked whether the system could quickly adjust when participants shifted their focus to another speaker. Subsequently, they tested whether the system could track natural changes in speaker identity. Further, they measured pupil dilation to assess mental effort.
Lastly, they played the audio recordings to 40 individuals with hearing impairment and analyzed their responses. Generalized linear mixed models (GLMMs) enabled statistical analysis.
Selective Hearing System Performance Results
The system demonstrated consistent performance across different tests and listening situations. Electrodes placed over the superior temporal gyrus provided the most useful brain signals. During offline testing, the system correctly recognized the voice the listener paid attention to in 72.0% to 90.3% of decoding windows across participants. The technology also performed reliably in difficult situations, including similar voices and background noise.
In real time, the system automatically amplified the voice that the listener was focusing on. Researchers found a 12 dB improvement in target-to-masker ratio, indicating that the attended speaker’s voice became much clearer than unattended voices and background noise.
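To put the reported 12 dB figure in perspective, decibels translate to linear ratios as follows (standard acoustics arithmetic, not taken from the paper's methods):

```python
def db_to_amplitude_ratio(db):
    # 20 log10 convention for amplitude
    return 10 ** (db / 20)

def db_to_power_ratio(db):
    # 10 log10 convention for power
    return 10 ** (db / 10)

# A 12 dB target-to-masker improvement corresponds to roughly a
# 4x amplitude advantage (~3.98) or a ~16x power advantage (~15.85)
# for the attended voice over the competing sound.
```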
Participants strongly preferred listening with the system on and reported improved speech understanding. Researchers observed smaller pupil dilation when the system was active, suggesting lower mental strain among participants while following conversations. More accurate brain-based attention decoding was linked to stronger participant preference.
When participants were instructed to focus on another voice, the system quickly adjusted to such shifts in attention, with a mean switch time of 5.1 seconds. The technology could also naturally follow changes in a listener’s attention.
Participants with self-reported normal hearing showed improvements while using the closed-loop system, and separate listeners with hearing loss showed improvements in speech understanding and strongly preferred the enhanced audio.
Brain-Guided Hearing Technology Implications
The findings demonstrate that brain-guided hearing systems could help people better understand speech in noisy environments by recognizing and amplifying the specifically attended voice. Because the system can adjust to natural shifts in attention, it could be especially relevant to everyday situations, where attention constantly changes. However, the technology requires invasive surgery to implant electrodes and therefore may not be suitable for widespread, routine use.
Scientists can, nevertheless, use this system as a gold-standard benchmark and proof of concept to develop smarter, more personalized versions using less-invasive brain-computer interface technologies.
Journal reference:
- Choudhari, V., Nentwich, M., Johnson, S. et al. (2026). Real-time brain-controlled selective hearing enhances speech perception in multi-talker environments. Nature Neuroscience. DOI:10.1038/s41593-026-02281-5, https://www.nature.com/articles/s41593-026-02281-5