Artificial intelligence writing tools that predict and suggest our next words can do much more than simply speed up our typing. New research provides evidence that interacting with biased autocomplete suggestions can covertly shift a person’s underlying attitudes on important societal issues. The findings, published in the journal Science Advances, suggest that the subtle influence of these everyday programs often bypasses our conscious awareness.
Artificial intelligence programs powered by large language models are increasingly woven into human communication. These technologies power the autocomplete features found in popular email clients, messaging applications, and word processors. As these tools become a standard part of daily life, scientists have grown concerned about their potential to shape human cognition.
Previous studies have shown that artificial intelligence can persuade people during direct interactions. This happens when a program generates a persuasive essay or directly debates a user on a specific topic. However, researchers wanted to explore a more subtle pathway for influence in our digital environments.
“There were two things that led my team and I to pursue the research question of whether being exposed to biased AI autocomplete suggestions could shift users’ attitudes on societal issues,” said study author Sterling Williams-Ceci, a PhD candidate at Cornell University and Merrill Presidential Scholar & Robert S. Harrison College Scholar.
“One was that we are surrounded by AI writing assistants that generate autocomplete suggestions in multiple contexts (e.g. Gmail, Google Docs, social media), but separate studies have shown that LLM-generated text can represent politically biased viewpoints; meanwhile, older psychology research showed that shifting how people behave through their writing can shift how they think about issues, so we suspected that these biased AI suggestions could trigger attitude shift through this mechanism.”
Because millions of people use the same text prediction models every day, even a minor shift in individual opinions could have broad societal implications. To test this idea, the researchers conducted two large-scale online experiments involving a total of 2,582 participants. They built a custom writing application that functioned much like a standard word processor.
In both experiments, participants were asked to write a short essay about a debatable topic. In the first experiment, which included 1,485 participants, everyone wrote about the use of standardized testing in education. Some participants wrote without any assistance, acting as a baseline control group.
Others were provided with autocomplete suggestions generated by the artificial intelligence model GPT-3.5. These suggestions were programmed to favor standardized testing. As participants typed, suggested phrases of about 24 words would appear on the screen, and users could accept them into their essays by pressing the tab key.
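The paper does not reproduce the application's source code, but a minimal sketch of how such a biased suggestion backend could work might look like the following. It assumes the official OpenAI Python client; the model name, prompt wording, and sampling parameters are illustrative stand-ins, not the study's actual configuration.

```python
# Hypothetical sketch of a biased autocomplete backend, not the study's code.
from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def biased_suggestion(essay_so_far: str) -> str:
    """Return a short continuation that favors one side of the issue."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the study used GPT-3.5; exact variant is an assumption
        messages=[
            {"role": "system",
             "content": "Continue the user's essay. Your continuation must "
                        "favor the use of standardized testing in education."},
            {"role": "user", "content": essay_so_far},
        ],
        max_tokens=40,    # keeps each suggestion to a short phrase
        temperature=0.7,  # illustrative value, not taken from the paper
    )
    return response.choices[0].message.content
```

In a deployed writing tool, a function like this would be called as the participant pauses, with the returned phrase rendered inline and inserted only when the tab key is pressed.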
To rule out the possibility that the mere presence of new information caused any opinion changes, a third group in the first experiment did not use the autocomplete tool. Instead, they were shown a static list of the artificial intelligence program’s arguments before they began writing. After the writing task, all participants filled out a survey measuring their final opinions on the topic alongside several unrelated distraction topics.
Distractor questions are used in psychology to hide the true purpose of a study. This prevents participants from guessing what the scientists are looking for and unnaturally altering their responses.
The researchers found that participants who used the biased autocomplete tool reported attitudes that were closer to the artificial intelligence’s programmed bias. Their opinions shifted by nearly half a point on a five-point scale compared to the control group. This shift occurred even among the roughly thirty percent of participants who did not actually accept any suggested words into their essays.
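For readers unfamiliar with how such a shift is quantified, the comparison amounts to a difference in group means on the attitude scale. The numbers below are invented solely to illustrate the arithmetic behind a roughly half-point gap:

```python
# Illustrative only: made-up ratings showing how a roughly half-point
# shift on a five-point attitude scale is computed from group means.
autocomplete_group = [3.8, 4.0, 3.5, 4.2, 3.9]  # hypothetical 1-5 ratings
control_group      = [3.2, 3.5, 3.4, 3.6, 3.3]

shift = (sum(autocomplete_group) / len(autocomplete_group)
         - sum(control_group) / len(control_group))
print(f"Mean attitude shift toward the AI's bias: {shift:.2f} points")  # 0.48
```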
The scientists also noticed that the interactive autocomplete feature had a stronger effect than reading the same arguments presented as a static list. This provides evidence that co-writing with an artificial intelligence program is a distinct and potent form of influence: the act of typing alongside the program appears to shape thinking more than merely reading the same text.
“AI assistants that provide these autocomplete suggestions can make us write easier and quicker, but there are implications: they change the type of language we use and the topics we write about, and, as we show here, they can also shift how we actually think about the issues we are communicating about,” Williams-Ceci told PsyPost. “We found that attitudes shifted even among participants who did not actually accept the suggestions into their writing, so mere exposure to the suggestions may be enough even if people resist using them.”
In the second experiment, involving 1,097 participants, the researchers measured people’s baseline opinions weeks before the actual writing task. This allowed the scientists to track exactly how much an individual’s attitude shifted over time. Participants in this experiment were randomly assigned to write about one of four topics: the death penalty, felon voting rights, genetically modified organisms, or fracking.
The artificial intelligence tool, this time using the more advanced GPT-4 model, was programmed to provide suggestions leaning either conservative or liberal depending on the topic. Once again, the researchers found that participants’ attitudes shifted from their original baseline positions toward the artificial intelligence’s biased perspective. The control group experienced no such shift.
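The article does not specify the study's exact prompts or which political lean was paired with which topic, but the design can be sketched as a simple lookup that parameterizes the suggestion model's instructions. The topic-to-lean mapping below is an invented placeholder:

```python
# Hypothetical reconstruction of the second experiment's condition setup.
import random

# Placeholder mapping: the bias leaned conservative or liberal depending on
# the topic, but the actual pairings are not public; these are invented.
TOPIC_BIAS = {
    "the death penalty": "conservative",
    "felon voting rights": "liberal",
    "genetically modified organisms": "liberal",
    "fracking": "conservative",
}

def assign_condition() -> tuple[str, str]:
    """Randomly assign a participant a topic and look up its bias lean."""
    topic = random.choice(list(TOPIC_BIAS))
    return topic, TOPIC_BIAS[topic]

def bias_prompt(topic: str, lean: str) -> str:
    """Build a system prompt for the suggestion model; wording is illustrative."""
    return (f"Continue the user's essay about {topic}. "
            f"Your continuation must argue for a {lean} position on this issue.")
```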
The researchers also observed a striking lack of awareness among participants. The majority of people exposed to the biased suggestions described the artificial intelligence as reasonable and balanced, and most disagreed outright with the idea that the writing assistant had influenced their thinking or their arguments.
The researchers even attempted to mitigate this effect in the second experiment by explicitly warning participants about the tool’s bias. Some individuals were warned before they started writing, while others were debriefed immediately afterward. Neither of these interventions reduced the extent to which the participants’ attitudes shifted.
“We were very surprised to find out that warning people ahead of when they were exposed to the biased AI suggestions failed to mitigate the attitude shift they exhibited,” Williams-Ceci explained. “Our first experiment showed that people most often did not recognize the bias in the suggestions or their influence, so we hypothesized in our second experiment that simply alerting people to the fact that the suggestions had a bias would make them less likely to be influenced.”
“We also hypothesized this mitigation effect because similar interventions have shown success in the literature on misinformation prevention. However, in our second experiment, neither warning people before nor debriefing them after made any dent in the attitude shift they experienced.”
While the study provides strong evidence of this covert influence, there are some limitations to consider. The research only measured the short-term effects of using a biased writing assistant. It remains unclear if this attitude shift persists over weeks or months, or if repeated exposure over a long period might compound the effect.
“One limitation that is important to note is that our experiments were not designed to pinpoint a specific cognitive mechanism to explain why writing with the biased AI suggestions shifted people’s attitudes,” Williams-Ceci noted. “We know that it had something to do with the fact that these suggestions led people to write about their views in more biased ways — because of the psychology research showing that behavior can influence attitudes — but there have been multiple theoretical explanations for why manipulating people’s writing can shift their attitudes.”
Potential mechanisms include “a cognitive dissonance reaction where people consciously adjusted their self-reported attitude to align with what they had written, or a self-perception theory argument that people inferred their true attitudes from what they had written, or even a biased scanning argument that the biased viewpoints became more accessible in people’s working memory.”
“If future research can pinpoint the exact reason why this attitude shift is occurring, then we can hopefully find interventions that are more effective in preserving people’s autonomy,” Williams-Ceci continued.
“Our team is interested in learning more about the mechanism behind the attitude shift, as well as ways to prevent or mitigate it. It is alarming that telling people about the bias in the AI suggestions didn’t reliably reduce the extent of the influence; we wonder if people need to be confronted with interventions in the moment, alongside the biased suggestions, in order for these interventions to work.”
The study, “Biased AI writing assistants shift users’ attitudes on societal issues,” was authored by Sterling Williams-Ceci, Maurice Jakesch, Advait Bhat, Kowe Kadoma, Lior Zalmanson, and Mor Naaman.