Scientists reveal intriguing new insights into how the brain processes and predicts sounds

A new study published in Advanced Science suggests that the brain uses two distinct, large-scale networks to recognize memorized musical sequences. One network appears to handle general sound processing, while the other is specifically engaged in comparing incoming information to memory and detecting prediction errors. These findings provide a more integrated view of how the brain supports complex cognitive functions through the coordinated activity of widespread neural systems.

Predictive coding is a theory suggesting that the brain continuously generates expectations about incoming sensory information. When reality deviates from these expectations, the mismatch produces a signal known as a prediction error, which the brain uses to update its model of the world.

Much of the past research on this topic has focused on either small brain regions or narrow frequency bands. These studies have helped identify some of the building blocks of prediction, such as early sensory responses to unexpected sounds. But they often overlook how multiple brain regions cooperate as a network, especially during tasks involving memory for complex sequences like music.

In the new study, a team of researchers sought to address this gap in our understanding of how predictive coding works at the level of the whole brain. The research was led by Leonardo Bonetti, an associate professor at the Center for Music in the Brain at Aarhus University and the Centre for Eudaimonia and Human Flourishing at the University of Oxford, and Mattia Rosso, a researcher affiliated with the Center for Music in the Brain at Aarhus University and the IPEM Institute for Systematic Musicology at Ghent University.

“For several years, we (Leonardo Bonetti and Mattia Rosso) have been interested in understanding how the brain organises its activity across different regions when we perceive, remember, or predict sounds. Most existing analytical tools focus on small sets of brain regions or predefined connections, which means that we often miss the broader, system-level picture. Moreover, several methods rely on fairly strong assumptions or rather complex analytical procedures which limit the interpretability of the findings,” the researchers told PsyPost.

“We wanted to overcome these limitations by creating a new method that could capture the brain’s whole dynamic activity, how multiple regions cooperate in real time. This motivation led us to develop BROAD-NESS, a framework that identifies broadband brain networks in a way that is simple, effective, fast, and highly interpretable. Our goal was to give researchers a tool that is both mathematically rigorous and accessible, allowing them to map large-scale brain interactions without imposing strong assumptions on the data.”

The study involved 83 volunteers, ranging in age from 19 to 63. Participants first listened to and memorized a short musical piece by Johann Sebastian Bach. Following this memorization phase, their brain activity was recorded using magnetoencephalography (MEG), a technique that measures the magnetic fields produced by the brain’s electrical currents with high temporal precision.

During the recording, participants listened to 135 different five-tone musical excerpts. Some of these excerpts were taken directly from the piece they had memorized, while others were novel variations. For each excerpt, participants had to indicate whether it was part of the original music (“memorized”) or a new variation (“novel”).

The core of the analysis was a novel method called BROAD-NESS, short for BROadband brain Network Estimation via Source Separation. The researchers first used the MEG data to estimate the location of neural activity across 3,559 points, or voxels, throughout the brain.

They then applied a statistical technique called principal component analysis (PCA) to this massive dataset. PCA identifies the main patterns of synchronized activity across all brain voxels, with each major pattern representing a distinct, simultaneously operating brain network. The analysis also quantifies how much of the brain’s total activity each network explains.
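To give a concrete sense of this step, here is a minimal sketch of the general idea: decompose a voxels-by-time activity matrix into spatial patterns (candidate networks) and their time courses, ranked by variance explained. The simulated data, variable names, and amplitudes below are illustrative assumptions, not the authors’ actual pipeline.

```python
# Minimal sketch of the PCA/source-separation idea behind BROAD-NESS.
# We simulate a voxels-by-time matrix driven by two latent "networks",
# then recover their spatial maps, time courses, and variance explained.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_times = 3559, 1000                # 3,559 sources, as in the study

# Simulated ground truth: two network time courses mixed into all voxels
# (different amplitudes keep the two components well separated)
t = np.linspace(0, 10, n_times)
true_ts = np.vstack([1.0 * np.sin(2 * np.pi * 1.0 * t),   # network 1 dynamics
                     0.6 * np.sin(2 * np.pi * 2.5 * t)])  # network 2 dynamics
true_maps = rng.standard_normal((n_voxels, 2))            # voxel weights per network
X = true_maps @ true_ts + 0.5 * rng.standard_normal((n_voxels, n_times))

X -= X.mean(axis=1, keepdims=True)            # center each voxel's time series

# SVD of the data matrix: columns of U are spatial maps (candidate networks),
# rows of Vt are the corresponding network time courses.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

var_explained = s**2 / np.sum(s**2)           # fraction of variance per component
print(f"Component 1 explains {var_explained[0]:.1%} of the variance")
print(f"Component 2 explains {var_explained[1]:.1%} of the variance")

network1_map, network1_ts = U[:, 0], s[0] * Vt[0]  # first network's map and dynamics
```

In this toy setup, the two leading components dominate the variance, loosely mirroring how the study’s two networks together accounted for most of the recorded activity.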

The two primary networks together explained about 88% of the variance in the broadband, source-reconstructed MEG data recorded during the task. The first network, which explained the majority of the activity (about 72%), was centered on the auditory cortices and the medial cingulate gyrus.

Activity in this network remained relatively consistent across conditions, with only modest differences between memorized and novel sequences. This stability suggests its primary role is the fundamental processing of sounds as they are heard.

The second network explained a smaller but significant portion of the activity (about 16%). This network also included the auditory cortices but extended to involve regions associated with memory and higher-order processing, such as the hippocampus, anterior cingulate, insula, and inferior temporal regions.

Unlike the first network, the activity in this second network was highly dependent on the experimental condition. Its dynamics appeared to reflect the processes of matching incoming sounds to stored memories and flagging prediction errors when the sounds deviated from what was expected.

“The key takeaway is that the brain works as a dynamic network, not as a collection of isolated regions,” Bonetti and Rosso explained. “When we remember a sound or predict what will come next, many brain areas interact simultaneously, and the quality of these interactions matters for how well we perform.”

“Using BROAD-NESS, we discovered that the auditory cortices are not just doing one job at a time. Instead, they participate in two major networks: one focused on processing the sensory details of sounds, and another that supports memory and predictive processes, linking to deeper brain structures such as the hippocampus and anterior cingulate cortex.”

To better understand the timing and organization of these networks, the researchers used additional analytical techniques. One method, called recurrence quantification analysis, examined the stability and predictability of the networks’ activity over time.
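The core intuition of recurrence quantification can be illustrated with a toy example: a signal whose states keep revisiting earlier values produces a denser recurrence matrix than an unstructured one. The sketch below computes a simple recurrence rate; the threshold and signals are illustrative, not the study’s actual RQA parameters.

```python
# Toy illustration of recurrence quantification analysis (RQA):
# build a recurrence matrix from a time course and compute the
# recurrence rate (fraction of time-point pairs with similar states).
import numpy as np

def recurrence_rate(ts, radius=0.5):
    """Fraction of time-point pairs whose states fall within `radius`."""
    ts = np.asarray(ts, dtype=float)
    # Pairwise distances between all time points (1-D state space here)
    dists = np.abs(ts[:, None] - ts[None, :])
    R = dists < radius          # recurrence matrix: True where states recur
    n = len(ts)
    # Exclude the trivial main diagonal (every point recurs with itself)
    return (R.sum() - n) / (n * (n - 1))

rng = np.random.default_rng(1)
t = np.linspace(0, 8 * np.pi, 400)
structured = np.sin(t)                   # stable, repeating dynamics
noisy = rng.standard_normal(400)         # unstructured dynamics

print(f"structured signal: RR = {recurrence_rate(structured):.3f}")
print(f"noisy signal:      RR = {recurrence_rate(noisy):.3f}")
```

The structured signal yields a markedly higher recurrence rate, which is the kind of stability measure the researchers related to task performance.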

The results indicated that when participants were listening to the correctly memorized musical sequences, the combined activity of the two networks was more structured and stable. Across all participants, this increased stability was associated with better performance on the task, including higher accuracy and faster response times. This provides evidence that organized and recurrent network dynamics are linked to successful cognitive function.

“Interestingly, participants who showed more stable and recurrent interactions between these networks also performed better in memory recognition,” Bonetti and Rosso said. “In simpler terms, when the brain’s networks work together in a stable and coordinated way, cognition becomes more efficient.”

A separate analysis focused on the spatial organization of the networks. By clustering brain voxels based on their participation in the two networks, the researchers found a nuanced pattern of engagement. Some brain regions, such as parts of the auditory cortex, were highly active in both networks, suggesting they act as hubs that contribute to both sound perception and memory-based prediction.

Other regions were more specialized, contributing strongly to one network but not the other. For example, the medial cingulate was primarily involved in the first network, while the hippocampus was a key component of the second.
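The clustering step can be sketched as follows: describe each voxel by how strongly it participates in each of the two networks, then group voxels with similar participation profiles. The article does not name the algorithm used, so k-means stands in here as an assumption, and the loadings are simulated.

```python
# Hedged sketch of voxel clustering by network participation.
# Each voxel gets a 2-D profile (loading on network 1, loading on
# network 2); k-means then separates hub-like voxels (strong in both)
# from specialized ones (strong in only one network).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_voxels = 3559
# Columns: |weight| of each voxel in network 1 and network 2 (simulated)
loadings = np.abs(rng.standard_normal((n_voxels, 2)))

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(loadings)

# Inspect each cluster's mean participation in the two networks
for k in range(4):
    m = loadings[labels == k].mean(axis=0)
    print(f"cluster {k}: net1 = {m[0]:.2f}, net2 = {m[1]:.2f}")
```

On real data, a cluster loading strongly on both columns would correspond to hub regions like the auditory cortex, while single-network clusters would correspond to specialized regions like the medial cingulate or hippocampus.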

The study also provides a new perspective on the “dual-stream” hypothesis of brain organization. Originally described for vision, this model proposes separate pathways for processing “what” an object is versus “where” it is. The second network identified in this study aligns well with the “what” pathway, or ventral stream, as it involves regions critical for recognition and memory.

However, the first network does not map cleanly onto the traditional “where” pathway. Instead, it seems to represent a distinct system involved in sustained auditory attention and processing, suggesting a more complex organization for auditory memory than previously thought.

“While we expected to see a link between auditory and memory systems, what really stood out was how the auditory cortex was simultaneously engaged in two distinct large-scale networks: one for perception and one for prediction,” Bonetti and Rosso told PsyPost. “This shows that the same brain region can flexibly contribute to different computational roles depending on context. This pattern confirms that the brain is fundamentally organised to support parallel processing, where multiple cognitive operations run at once and influence each other in real time.”

The study has some limitations. The task, while effective for research, was relatively simple and did not involve the complexity of real-world music listening. Future research could use the BROAD-NESS method to investigate brain network dynamics during more naturalistic experiences.

The researchers also plan to apply this framework to study clinical populations. Examining how these large-scale network dynamics differ in individuals with conditions that affect memory or predictive processing, such as Alzheimer’s disease or schizophrenia, could offer new insights into the neural basis of these disorders.

“Our next steps are twofold,” Bonetti and Rosso said. “First, we want to continue refining the BROAD-NESS framework, improving its accessibility and scalability so other researchers can apply it to their own data. Second, we plan to apply it across a variety of datasets, both in healthy individuals and clinical populations, to explore how large-scale brain networks differ between health and disease.”

“Ultimately, we hope this approach can help us better understand not just how the brain works when everything goes well, but also what changes occur in pathological conditions. In the long run, this could contribute to developing new biomarkers or targets for interventions based on whole-brain network dynamics.”

“One of the things we value most about BROAD-NESS is that it’s fully data-driven and transparent,” Bonetti and Rosso added. “The pipeline is built to integrate spatial, temporal, and dynamical analyses in a way that is easy to interpret, making it suitable not just for specialists in neuroscience, but for researchers across psychology, medicine, and computational science.”

“More broadly, this work aligns with a growing effort to move from studying where things happen in the brain to understanding how they unfold and interact as part of a living, dynamic system. That’s the big picture we hope to contribute to.”

The research was an international collaboration involving the Center for Music in the Brain, affiliated with Aarhus University and The Royal Academy of Music in Denmark, along with partners from the Department of Clinical Medicine at Aarhus University, the University of Oxford, and the Department of Physics at the University of Bologna. The work was supported by the Danish National Research Foundation, the Independent Research Fund Denmark, and the Lundbeck Foundation.

The study, “BROAD-NESS Uncovers Dual-Stream Mechanisms Underlying Predictive Coding in Auditory Memory Networks,” was authored by Leonardo Bonetti, Gemma Fernández-Rubio, Mathias H. Andersen, Chiara Malvaso, Francesco Carlomagno, Claudia Testa, Peter Vuust, Morten L. Kringelbach, and Mattia Rosso.
