Your eyes reveal how strongly you believe fake news before you even make a choice

A recent study published in the Proceedings of the National Academy of Sciences suggests that our preexisting beliefs deeply influence how we learn new information in our daily lives. By tracking eye movements and decision-making during a simulated news evaluation game, scientists found that people readily learn from rewards that match their existing views but struggle to adapt when rewards challenge their preconceived notions.

These findings provide evidence for the cognitive pathways that allow misinformation to persist in the modern digital landscape. This dynamic explains why simply presenting factual corrections often fails to change minds.

People increasingly rely on social media platforms for their daily news consumption, where automated algorithms tend to filter content to match users’ existing preferences. This digital environment provides a fertile ground for disinformation to spread rapidly across large populations, raising the question of why individuals continue to believe false content even when objective fact-checking is readily available.

“I began seriously considering this line of research in 2021, after witnessing firsthand the damage misinformation caused during the COVID-19 pandemic, particularly in relation to the vaccination campaign,” said study author Stefano Lasaponara, an associate professor in the department of psychology at Sapienza University of Rome. “That experience led me to wonder to what extent fake news might affect not only what people believe, but also how they learn from feedback and experience.”

Lasaponara and his colleagues sought to understand how a person’s preexisting judgments and internal confidence interact with the way they learn from external feedback. They designed the study to test whether our tendency to favor belief-consistent information might be rooted in basic, everyday learning mechanisms. By examining these fundamental learning processes, the authors hoped to uncover why people find it so difficult to update their opinions when faced with misleading news stories.

To explore these questions, the scientists recruited a final sample of 28 healthy young adults, aged between 18 and 36, to participate in a detailed three-part experiment. In the first phase, participants viewed a set of 324 news headlines that had recently circulated on popular social media platforms. Half of these selected headlines contained real news events, and the other half contained entirely false information. Participants had to read each headline on a computer screen and judge whether it was true or fake.

They also wagered a virtual amount of money, ranging from zero to 99 cents, on their provided answer. This financial bet served as a measurable indicator of their internal confidence regarding each specific news item. Based on these answers, the scientists grouped the headlines into four personalized categories for each individual participant. These customized categories included news judged as true with high confidence, true with low confidence, fake with high confidence, and fake with low confidence.

During this phase, the researchers used specialized eye-tracking glasses to measure the participants’ pupil dilation as they read. Pupil dilation is an involuntary physical response that indicates mental effort, focused attention, and physiological arousal. Measuring this subtle response allowed the team to track brain engagement in real time without interrupting the participants.

In the second phase, the researchers tested how well participants could learn new rules based on their previous judgments. Participants played a computer game where they had to choose between pairs of the headlines they had just rated in the first phase. The goal was to select the specific headline that would win them a 20-cent virtual monetary reward. Unknown to the participants, the rewards were not randomly assigned throughout the game.

In different rounds of the game, rewards were delivered with an 83 percent probability whenever participants chose headlines from a specific category established during the initial evaluation. For example, in one round, picking headlines the participant had previously judged as true provided the reward. In another round, picking headlines judged as fake gave the reward. Other rounds rewarded choices based on high or low confidence, and one single round gave rewards entirely at random to serve as a baseline comparison.
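The hidden reward rule described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' task code: the category labels, function names, and the treatment of non-matching choices are assumptions; only the 83 percent probability and the 20-cent payout come from the article.

```python
import random

REWARD_PROB = 0.83   # probability of payout when the chosen headline matches the round's rewarded category
PAYOUT_CENTS = 20    # virtual reward per winning choice

def play_trial(chosen_category: str, rewarded_category: str, rng: random.Random) -> int:
    """Return the payout (in cents) for one choice under the round's hidden rule.

    Assumes non-matching choices are rewarded at the complementary rate,
    which is a common design but not stated explicitly in the article.
    """
    p = REWARD_PROB if chosen_category == rewarded_category else 1 - REWARD_PROB
    return PAYOUT_CENTS if rng.random() < p else 0

# Simulate a round where headlines previously judged "true with high confidence"
# are the rewarded category (label is hypothetical).
rng = random.Random(0)
earnings = sum(
    play_trial("judged_true_high_conf", "judged_true_high_conf", rng)
    for _ in range(100)
)
```

Because the mapping from category to reward is probabilistic rather than deterministic, participants could only discover the rule by accumulating feedback over many trials.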

The third and final phase tested whether the learning game had changed the participants’ minds regarding the news items. The scientists showed the participants the original headlines again, along with their initial true or false judgments and their associated confidence wagers. Participants were given the option to either confirm their original judgment or change their mind completely. If their final answer matched the actual real or fake status of the news, they kept their wagered money as a final payout.

The outcomes of the learning phase showed that participants learned very differently depending on the hidden rules of the computer game. When the game rewarded participants for choosing headlines they already believed to be true, they learned the winning strategy quickly and earned high scores. On the other hand, performance dropped when the game rewarded them for picking headlines they believed were fake. Participants also struggled to figure out the game’s hidden rules when rewards were tied to their confidence levels rather than their beliefs about truth.

“One important takeaway is that our prior beliefs can begin shaping our decisions even before we explicitly express a judgment,” Lasaponara said. “In our study, these pre-existing convictions were strong enough to influence learning itself. More broadly, this suggests that we should approach new information as critically and as openly as possible, trying, when we can, to evaluate it without immediately filtering it through our preconceptions.”

To understand the underlying mental strategies at play, the scientists used computational modeling, which involves creating mathematical simulations of human decision-making processes. The models revealed that when the rewards matched a participant’s belief in the truth, they used broad, generalized rules to make their choices.

When the rewards no longer matched their sense of truth, the participants abandoned these broad generalization strategies. Instead, they reverted to simply reacting to positive and negative feedback on a trial-by-trial basis, which proved to be a much less effective way to navigate the game.
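The trial-by-trial fallback strategy corresponds to a standard delta-rule (Rescorla-Wagner style) value update, the basic building block of reinforcement learning models like those used in this kind of computational modeling. The sketch below is a generic illustration, not the authors' fitted model; the learning rate and reward sequence are made up for the example.

```python
def update_value(value: float, reward: float, learning_rate: float = 0.3) -> float:
    """Nudge the estimated value of an option toward the reward just received."""
    prediction_error = reward - value
    return value + learning_rate * prediction_error

# Each headline's value is learned independently, one feedback signal at a time:
# there is no generalization across a whole category of belief-consistent items,
# which is why this strategy learns the game's hidden rules far more slowly.
v = 0.0
for r in [1, 1, 0, 1]:   # illustrative reward history for a single headline
    v = update_value(v, r)
```

A generalization strategy, by contrast, would update an entire category (for example, "everything I judged true") from a single outcome, letting the learner exploit the hidden rule after far fewer trials.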

The eye-tracking data provided physical evidence that our beliefs engage our nervous systems before we even make a conscious choice. In the initial phase, participants’ pupils dilated more when they were looking at headlines they would later judge with high confidence. This noticeable dilation suggests that strong subjective beliefs trigger an early physical arousal response within the body. During the learning phase, pupils dilated when participants faced a mental conflict, such as having to choose between a strongly held belief and a competing reward signal.

“I expected to find pupillary effects related to the moment of decision itself, but I did not expect to observe them at an earlier stage, during the formation of a belief-consistent choice tendency,” Lasaponara noted. “That was particularly interesting because it suggests that the influence of prior beliefs may begin unfolding before an overt response is made.”

When participants received feedback that went against their established beliefs, their pupils also widened, indicating cognitive surprise and an increased mental load. In the final feedback phase, participants showed a strong tendency to stick to their original opinions about the headlines. They rarely changed their minds, especially if they had placed a high confidence wager during the very first phase of the experiment.

Interestingly, high confidence made people resistant to changing their minds regardless of whether the headline was actually true or false. Participants were slightly more willing to update their beliefs if they had initially expressed low confidence in their judgment. While the study provides detailed evidence on how subjective beliefs shape learning, there are potential misinterpretations and limitations to keep in mind.

Because the study required participants to experience all the different reward rules back to back, the learned rules from one round might have affected how they behaved in the next round. “An important caveat is that this study does not yet allow us to make strong claims about correcting misinformation, or about when and how people truly change their minds after learning,” Lasaponara explained. “Our results show that prior beliefs can bias reinforcement learning, but they do not yet tell us how to reliably undo that bias. This is something we are currently addressing in follow-up work.”

The experiment also relied exclusively on political and social news headlines, meaning these learning patterns might look different if the topics were neutral or completely unrelated to current events. Future research could expand on these physiological findings by using different types of information to see if this learning behavior applies to other areas of human life.

“Our broader goal is not only to better understand why people believe fake news, but also to identify the conditions under which misinformation becomes less effective,” Lasaponara added. “In follow-up studies, we are investigating whether different reinforcement structures can lead to varying degrees of belief updating and how computational models can help explain when people remain resistant to correction and when they become more flexible.”

Scientists could also design experiments that explicitly present participants with direct evidence contradicting their beliefs, rather than just changing a computer game’s reward rules. This alternative approach would help map out the exact conditions that might finally encourage people to update their most stubborn opinions.

“The title is also a small nod to Metallica, whom I am a big fan of,” Lasaponara added. “More importantly, this work would not have been possible without my co-authors, especially Valentina Piga and Silvana Lozito, whose contributions were fundamental to the project.”

The study, “Eye of the beholder: Pupillary response reflects how subjective prior beliefs shape reinforcement learning with fake news,” was authored by Silvana Lozito, Valentina Piga, Sara Lo Presti, Angelica Scuderi, Fabrizio Doricchi, Massimo Silvetti, and Stefano Lasaponara.
