AI conversations can reduce belief in conspiracies, whether or not the AI is recognized as AI

Talking with an artificial intelligence chatbot can reduce belief in conspiracy theories and other questionable ideas. A new study published in PNAS Nexus finds that these belief changes are not dependent on whether the AI is perceived as a machine or a person, suggesting that the persuasive power lies in the quality of its arguments, not in the identity of the messenger. The research adds to a growing body of work showing that even beliefs often seen as resistant to correction may be influenced by short, targeted dialogues.

Beliefs in conspiracy theories and pseudoscientific ideas are often thought to be deeply rooted and difficult to change. These beliefs may fulfill emotional or psychological needs, or they may be reinforced by narratives that reject contradictory evidence. For example, someone who believes in a secret government plot might interpret any denial as further proof of the conspiracy.

Previous research has shown that conversations with artificial intelligence chatbots—particularly those tailored to an individual’s specific belief—can lead to meaningful reductions in belief certainty. However, it remained unclear whether these results were due to the facts presented, the way the message was framed, or the fact that the messenger was an AI.

One possibility is that people view AI systems as more neutral or less judgmental than human experts, which could make them more open to reconsidering their beliefs. Another is that people find arguments more credible when they come from a human source, especially when the tone is natural and conversational.

“We were motivated by a central question: does the identity of the source actually matter when correcting strongly held beliefs? Many frameworks—especially motivated reasoning—suggest that people filter information based on who they think is speaking. If a message comes from a source they distrust or associate with an opposing group, they may resist updating even when the evidence is strong. Conspiracy beliefs, in particular, are often tied to identity, so this issue seemed especially relevant,” explained study author Esther Boissin, a postdoctoral associate at Cornell University.

“Our study tests whether debunking depends on the messenger or on the quality of the counterarguments. The absence of differences between the AI and human-expert framings suggests that, in this context, people respond to the content (which is the same across framings) rather than the label of the source. This also speaks to a persistent limitation in the literature: many past debunking attempts fail not because people are unpersuadable, but because the corrections they receive are incomplete, low-quality, or too generic. High-quality, tailored reasoning appears to matter far more than who delivers it.”

The study involved 955 participants drawn from a larger group of over 8,000 individuals who began the online survey. Participants were asked to report a belief they held—either a conspiracy theory or another type of idea that most experts would likely reject. Examples included beliefs in secret government collaborations or ideas drawn from pseudoscience, such as unconventional theories about disease.

Each participant first described one of their beliefs and rated how confident they were in it on a 0 to 100 scale. They then engaged in a two-round conversation with GPT-4o, a large language model developed by OpenAI. In these conversations, the AI attempted to argue against the participant’s belief using facts and evidence.

Participants were randomly assigned to different experimental conditions. Some were told they were talking to an AI, while others were told they were interacting with an “expert,” a term that many interpreted as referring to a human. Additionally, in half the cases, the AI used a tone designed to sound more conversational and human-like.

Despite these differences, all participants saw the same underlying arguments, customized to address their specific belief. After the conversation, they rated their belief confidence again.
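
To make the setup concrete, here is a minimal sketch of what such a two-round, belief-tailored exchange could look like using the OpenAI Python SDK. The prompt wording and helper names are illustrative assumptions rather than the study's actual materials; note that the "AI" versus "expert" framing would only change the label shown to participants, not the messages sent to the model, which is why it does not appear in the code.

```python
# Minimal sketch of a two-round, belief-tailored debunking dialogue with GPT-4o.
# The system prompt and helper names are illustrative assumptions; the study's
# exact prompts are not reproduced here.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def debunking_dialogue(belief: str, confidence: int, rounds: int = 2) -> list[str]:
    """Run a short dialogue in which the model argues against a stated belief."""
    messages = [
        {
            "role": "system",
            "content": (
                "The user holds the following belief with confidence "
                f"{confidence}/100: \"{belief}\". Respond with clear, factual, "
                "evidence-based counterarguments tailored to this specific belief. "
                "Be respectful and non-judgmental."
            ),
        },
        {"role": "user", "content": f"I believe that {belief}."},
    ]
    replies = []
    for _ in range(rounds):
        response = client.chat.completions.create(model="gpt-4o", messages=messages)
        reply = response.choices[0].message.content
        replies.append(reply)
        messages.append({"role": "assistant", "content": reply})
        # In the study, the participant's free-text reply would be appended here;
        # a fixed placeholder stands in for it in this sketch.
        messages.append({"role": "user", "content": "Here is why I still think this is true..."})
    return replies
```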

The results showed that belief in conspiracy theories decreased by about 12 percentage points on average, while belief in other unsupported ideas decreased by about 6 points. These changes were statistically significant and occurred regardless of whether the AI was described as a machine or a human. Whether the AI used a neutral or more human-like tone also had no measurable effect.

This pattern held even among participants who thought they were talking to a person. In the “expert” condition, 45% described the speaker as human in a follow-up question, while only 21% mentioned it might have been an AI. Still, belief reductions were similar across groups.

“The effects are modest at the individual level but meaningful in practice,” Boissin told PsyPost. “Shifting entrenched beliefs—even by a few points—after a short, two-round conversation is notable. These changes are comparable to, or larger than, what we typically observe in high-quality debunking interventions. From a public-communication perspective, even small shifts aggregated across many interactions can have substantial impact.”

“It is also worth noting that AI is becoming more present in everyday life, and concerns about its influence are rising. Our findings show that these tools can be used responsibly and for beneficial purposes, and that their effectiveness does not depend on replacing human experts. The same approach can be applied in many other contexts where people need clear, accurate, and tailored information. Of course, these same tools can also be misused, which makes it even more important to understand when and how they influence beliefs.”

To confirm that these findings were not due to differences in who completed the study, the researchers included participants who dropped out in a conservative analysis. They assumed that those who left early experienced no belief change. Even under this assumption, the results remained consistent.
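
In practice, this kind of conservative check amounts to treating every dropout as a zero-change case before averaging. The sketch below illustrates the idea with pandas; the column names and toy numbers are assumptions for illustration, not the study's data.

```python
# Sketch of the conservative dropout analysis described above: participants who
# left before the post-conversation rating are assumed to have changed not at all.
# Column names ("pre", "post", "completed") and values are illustrative assumptions.
import pandas as pd


def conservative_belief_change(df: pd.DataFrame) -> float:
    """Mean belief change (post - pre), imputing zero change for dropouts."""
    change = df["post"] - df["pre"]
    change = change.where(df["completed"], other=0.0)  # dropouts contribute 0
    return change.mean()


data = pd.DataFrame({
    "pre":       [80, 70, 90, 60],
    "post":      [65, 60, 90, 60],   # post ratings; dropouts' values are ignored
    "completed": [True, True, False, False],
})
print(conservative_belief_change(data))  # (-15 + -10 + 0 + 0) / 4 = -6.25
```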

The researchers also analyzed the language used by participants in their responses. People who thought they were talking to an AI tended to use slightly simpler and less varied language, which fits previous findings suggesting people may simplify their communication when interacting with machines. However, they did not significantly change the structure or logic of their arguments.
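
The paper's exact linguistic measures are not detailed here, but the kind of comparison described above can be illustrated with simple proxies such as type-token ratio (vocabulary variety) and mean word length, computed per response and then compared across conditions. The metrics below are common, illustrative choices, not necessarily those used in the study.

```python
# Simple lexical proxies for comparing participants' language across conditions.
# Type-token ratio and mean word length are illustrative stand-ins for whatever
# measures the researchers actually used.
import re


def lexical_stats(text: str) -> dict[str, float]:
    words = re.findall(r"[a-zA-Z']+", text.lower())
    if not words:
        return {"type_token_ratio": 0.0, "mean_word_length": 0.0}
    return {
        "type_token_ratio": len(set(words)) / len(words),       # vocabulary variety
        "mean_word_length": sum(map(len, words)) / len(words),  # rough complexity
    }


print(lexical_stats("The evidence you cite does not actually support that claim."))
```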

“A key takeaway is that people who hold conspiracy beliefs are not as resistant to change as commonly assumed,” Boissin explained. “When they receive clear, precise, and evidence-based explanations that address their belief directly, many reconsider their views—even on topics typically thought to be ‘immune’ to correction.”

“This also challenges the idea that people are fundamentally irrational. Our results suggest that, when the arguments are strong and grounded in evidence, people are willing to update. The issue is usually that such high-quality explanations are hard to produce in everyday settings. The AI helped because it could gather relevant information quickly and present it in a structured way, not because it was an AI. A human with the same amount of knowledge could likely have produced a similar belief reduction, but assembling this amount of information in real time is extremely difficult for a human.”

The study did have some limitations. The AI model used for the conversations was trained on data that is predominantly from Western, English-speaking sources. This means its argumentative style and the evidence it presents may reflect specific cultural norms, and the debunking effects might not be the same in different cultural contexts. Future research could explore the effectiveness of culturally adapted AI models.

“A common misunderstanding would be to conclude that AI is inherently more persuasive or uniquely suited for debunking,” Boissin said. “Our results do not support that. The AI was effective because it could generate high-quality, tailored explanations—not because people trusted it more or because it had some special persuasive power.”

“A remaining caveat is that, while the source label did not matter here, this does not mean that source effects never matter; it simply shows that they were not a limiting factor in this particular debunking setting.”

The researchers plan to continue this line of inquiry, aiming to build a more complete picture of the psychology of belief and of when evidence-based dialogue is most effective.

“We want to understand why some people revise their beliefs while others do not, even when they receive the same information,” Boissin explained. “This includes examining individual differences—cognitive, motivational, or dispositional—that shape how people respond to counterevidence.”

“We are also interested in the properties of the beliefs themselves. Some beliefs may be revisable because they rest on factual misunderstandings, while others may be tied more strongly to identity or group loyalty. And beyond factual beliefs, we plan to study other types of beliefs—such as political attitudes or more ambiguous beliefs that do not have a clear ‘true’ or ‘false’ answer—to see whether the same mechanisms apply. Understanding these differences can help clarify the cognitive processes that allow certain beliefs to emerge, solidify, or change.”

“More broadly, we want to map the conditions under which evidence-based dialogue works well, when it fails, and what this reveals about the psychology of belief,” Boissin continued. “As part of that effort, we plan to test more challenging scenarios—for example, situations where the AI is framed as an adversarial or low-trust source or even behaves in a way that could trigger resistance.”

“These conditions will allow us to assess the limits of the effect and evaluate how far our conclusions generalize beyond cooperative settings. In short, we want to understand whether people change their minds mainly because they process evidence or because they protect their identity.”

The study, “Dialogues with large language models reduce conspiracy beliefs even when the AI is perceived as human,” was authored by Esther Boissin, Thomas H. Costello, Daniel Spinoza-Martín, David G. Rand, and Gordon Pennycook.
