Truthful messages are more persuasive and more likely to be shared than false ones, according to new research published in the Journal of Personality and Social Psychology. The findings, drawn from four large experiments, challenge the widespread belief that misinformation naturally spreads more effectively than accurate information.
Concerns about the influence of false information have intensified in recent years, particularly as misleading claims have been linked to delayed climate action, public health issues, and a loss of trust in institutions. Earlier studies have shown that falsehoods can travel rapidly on platforms such as X (formerly Twitter), leading many to conclude that lies possess an inherent advantage in the digital environment. However, the new research suggests that this pattern may be shaped more by the design of social media platforms than by human preference.
Led by Nicolas Fay from the University of Western Australia, the researchers sought to examine how people respond to true and false information when the influence of algorithms, bots, and platform incentives is removed.
The team conducted four experiments involving a combined total of 4,607 participants (ranging from 18 to 99 years of age). Two experiments focused on a “persuasion game,” where the goal was to create short messages to convince others of a claim. The other two experiments focused on an “attention game,” where the goal was to write messages designed to capture as much attention as possible.
In the first and third experiments, human participants wrote the messages. They were randomly assigned to base their messages on information they believed to be true, information they believed to be false, or given no constraints at all. In the second and fourth experiments, the messages were generated by the artificial intelligence model GPT-3.5 using the same constraints. A separate, large group of human participants then rated all the messages on truthfulness, persuasiveness, emotional tone, and likelihood of sharing.
Across all four experiments, the results were consistent. Messages written with the intention of being truthful were rated as more persuasive and more interesting, and they produced stronger belief change in the direction of the claim. False messages, in contrast, often caused participants to believe the claim less. True messages were also more likely to be shared, both online and offline.
However, the researchers found that truth itself was not the primary reason people chose to share information. Instead, sharing was driven mainly by the positive emotions a message evoked and the degree to which it encouraged social interaction.
The experiments also revealed that messages produced by GPT‑3.5 were consistently rated as more persuasive and more shareable than those written by humans, particularly when the AI was instructed to generate truthful content.
Another notable finding was that when participants were free to write persuasive messages without constraints, they tended to default to truthfulness. Their unconstrained messages were rated as nearly as truthful as those written under explicit instructions to be accurate.
This tendency weakened slightly when participants were asked to write attention‑grabbing messages, but even then, their messages remained far more truthful than those written under falsehood instructions. Crucially, the researchers noted that bending the truth to make a message more attention-grabbing did not actually increase engagement or the intent to share it.
Fay and colleagues concluded: “Our findings suggest that people are predisposed to the truth – both as information producers and consumers. This is consistent with the finding that the majority of online misinformation is spread by a small group of supersharers.”
The study acknowledges several limitations. For example, the experiments were conducted in a controlled environment, which may not reflect the complexity of real-world information ecosystems. Participants were primarily from Western, educated backgrounds, and the role of repetition, social networks, and source credibility was not examined.
The study, “Truth Over Falsehood: Experimental Evidence on What Persuades and Spreads,” was authored by Nicolas Fay, Keith J. Ransom, Bradley Walker, Piers D. L. Howe, Andrew Perfors, and Yoshihisa Kashima.