Conversational AI can increase false memory formation by injecting slight misinformation into conversations

An experimental study in the United States found that having a conversational AI insert slight misinformation into conversations with users increased the occurrence of false memories and reduced participants’ recall of correct information. The research was published in IUI ’25: Proceedings of the 30th International Conference on Intelligent User Interfaces.

Human memory works through three main processes: encoding, storage, and retrieval, that is, the processes by which information is transformed, maintained, and later accessed in the brain, respectively. Encoding depends on attention and meaning, so information that is emotionally salient or well-organized is remembered better. Stored memories are not exact recordings of events; they can change over time. Retrieval of stored memories is a reconstructive process, meaning that memories are rebuilt each time they are recalled rather than simply replayed.

Within these processes, false memories can form by exploiting the reconstructive nature of memory. False memories are recollections of events or details that feel real but are inaccurate or entirely fabricated. They are formed through suggestion, imagination, repeated questioning, social influence, or confusion between similar experiences. During retrieval, the brain tends to fill in gaps using expectations, prior knowledge, or external information, which then becomes integrated into the memory. Over time, these altered details can feel just as vivid and real as true memories.

Study author Pat Pataranutaporn and his colleagues examined the potential for malicious generative chatbots to induce false memories by injecting subtle misinformation during interactions with users. Previous studies have pointed to a rise in AI-driven disinformation campaigns, in which AI systems using an authoritative tone, persuasive language, and targeted personalization made it harder for users to distinguish true from false information. Earlier studies also showed that AI-generated content can influence people’s beliefs and attitudes.

Study participants were 180 individuals recruited via CloudResearch. Participants’ average age was 35 years. The numbers of female and male participants were equal.

Study authors randomly assigned each participant to read one of three articles. One article was about elections in Thailand, another about drug development, and the third about shoplifting in the U.K. This was followed by a short filler task. After this, participants were again randomly allocated to one of five conditions, yielding 36 participants per condition, 12 per article.

The experimental conditions were the control condition (no intervention) and four intervention conditions that included interaction with an AI (gpt-4o-2024-08-06, a large language model). Of these four conditions, two included reading an AI-generated summary of the article, while the other two included engaging in a discussion with the AI. In each pair of conditions, the AI was honest in one (i.e., correctly presenting facts from the articles) and misleading in the other (i.e., incorporating misinformation alongside factual points).
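
As a rough illustration of this design (not the authors’ actual procedure or code), the balanced assignment described above could be generated along the following lines in Python; the article and condition labels are paraphrased for readability:

```python
import random

# Illustrative sketch only: the study does not publish its assignment code.
# It reproduces the counts reported: 180 participants across 3 articles x 5
# conditions = 15 cells, 12 participants per cell, i.e. 36 per condition.
ARTICLES = ["thai_elections", "drug_development", "uk_shoplifting"]
CONDITIONS = [
    "control",             # no intervention
    "summary_honest",      # AI-generated summary, facts only
    "summary_misleading",  # AI-generated summary with injected misinformation
    "chat_honest",         # conversation with an accurate chatbot
    "chat_misleading",     # conversation with a misinformation-injecting chatbot
]

def assign(participant_ids, seed=0):
    """Randomly assign participants to article x condition cells, 12 per cell."""
    cells = [(a, c) for a in ARTICLES for c in CONDITIONS for _ in range(12)]
    rng = random.Random(seed)
    rng.shuffle(cells)
    return dict(zip(participant_ids, cells))

assignment = assign(range(180))
assert len(assignment) == 180
assert sum(1 for _, c in assignment.values() if c == "chat_misleading") == 36
```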

After undergoing their assigned intervention, participants answered questions about whether specific points had appeared in the original article. The questionnaire consisted of 15 questions: 10 asked about key points from the article, while 5 asked about misinformation.

For each question, participants could answer with Yes, No, or Unsure, and they also rated their confidence in the answer. Participants also self-reported their familiarity with AI, evaluated their own general memory performance, and rated their ability to remember visual and verbal information. Finally, participants rated the level of distrust they felt towards official information.
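
One plausible way to score such responses (an assumption on our part; the paper’s exact scoring procedure is not reproduced here) is to count a “Yes” on a misinformation item as a false memory and a “Yes” on a genuine key point as a correct memory:

```python
from dataclasses import dataclass

# Hypothetical scoring sketch, not the authors' code. Assumption: claiming a
# planted misinformation item appeared in the article counts as a false memory;
# correctly recognizing a genuine key point counts as a non-false memory.

@dataclass
class Item:
    text: str
    is_misinformation: bool  # True for the 5 planted items, False for the 10 key points

def score(responses):
    """responses: list of (Item, answer) pairs with answer in {"yes", "no", "unsure"}."""
    false_memories = sum(1 for item, ans in responses
                         if item.is_misinformation and ans == "yes")
    correct_memories = sum(1 for item, ans in responses
                           if not item.is_misinformation and ans == "yes")
    return false_memories, correct_memories
```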

Results showed that participants who engaged in discussion with a misleading chatbot recalled the most false memories and the fewest non-false (accurate) memories of any condition. Participants who read a misleading summary recalled slightly more false memories than those in the control and honest conditions, but the difference was not large enough to rule out random variation. The pattern was similar for non-false memories.

Similarly, participants who conversed with a misleading chatbot displayed lower confidence in their non-false memories than participants in the honest conditions. Overall, they also had the lowest confidence in their recalled memories of all the treatment conditions.

“The findings revealed that LLM-driven [large language model-driven] interventions heighten false memory creation, with misleading chatbots generating the most pronounced misinformation effect. This points to a worrying capacity for language models to introduce false beliefs in their users. Moreover, these interventions not only fostered false memories but also diminished participants’ confidence in recalling accurate information,” the study authors concluded.

The study contributes to the scientific understanding of false memories. However, it should be noted that the study focused on immediate recall of memories with no particular personal relevance for study participants.

Also, participants’ information came from a single source, the article they read. This differs profoundly from real-world information acquisition, where individuals typically gather information from multiple competing sources, weigh it according to how much they trust each source and various other factors, and can verify points directly relevant to them.

The paper, “Slip Through the Chat: Subtle Injection of False Information in LLM Chatbot Conversations Increases False Memory Formation,” was authored by Pat Pataranutaporn, Chayapatr Archiwaranguprok, Samantha Chan, Elizabeth F. Loftus, and Pattie Maes.
