How generative artificial intelligence is upending theories of political persuasion

Artificial intelligence programs can persuade people to temper their political views, but highly customized messages or deep conversations with bots do not seem to work any better than a single basic argument. These results challenge long-held academic theories about what makes political messaging effective, suggesting that targeted data and interactive debates might not provide the advantages that politicians expect. The findings were recently published in the Proceedings of the National Academy of Sciences.

Changing voters' minds is an essential feature of democratic life. Advocacy groups, public health officials, and political candidates spend vast sums attempting to sway public opinion on polarizing topics. Yet despite decades of study, the psychological processes that determine whether a person will change their mind remain difficult to pin down, and academic researchers face practical limits when studying how persuasive communication works in the real world.

Two central concepts have dominated the academic understanding of targeted messaging. The first is message customization, which is also known in the political realm as microtargeting. This theory proposes that a message will be much more effective if it is explicitly tailored to the personal traits, values, or demographics of the person receiving it. The core idea is that persuaders should adapt their message to the audience rather than expecting the audience to adapt to the message.

The second concept is known as the elaboration likelihood model. This model suggests that people experience more durable attitude change when they exert heavy cognitive effort. In other words, a person who has to actively think about a topic, ask questions, or defend their views in a conversation should be more lastingly swayed than one who simply reads a static flyer.

Historically, it has been surprisingly difficult to isolate these two mechanisms in a laboratory setting. Human researchers or actors participating in experiments introduce unwanted variables into the interactions. A human confederate might change their tone of voice, display subtle facial expressions, or introduce social pressure that alters how the test subject forms an opinion.

Lisa P. Argyle, a political scientist at Brigham Young University, led a team of researchers hoping to solve this exact methodological problem. Working with colleagues Ethan C. Busby, Joshua R. Gubler, Alex Lyman, Justin Olcott, Jackson Pond, and David Wingate, she theorized that generative artificial intelligence could act as a perfectly controlled debate partner for human test subjects.

By using large language models, the research team could generate text with a consistent tone and style for thousands of varying interactions. This allowed them to isolate the effects of customization and cognitive elaboration without the messy interference of human social dynamics. They wanted to know if highly tailored messages or interactive chats actually outperformed a single, well-written generic argument.

To answer this question, the research team designed two preregistered online survey experiments featuring nearly 3,700 adult participants in the United States. The researchers recruited a pool of respondents that roughly matched the national census averages for age, gender, and race. They also ensured an even balance of political ideologies, including equal numbers of Democrats and Republicans.

The first study focused on the contentious topic of immigration. Participants answered a series of questions about their support for increased border security spending and their opinions on sponsoring immigrant visas. The second study focused on curriculum in public schools, asking participants how much control parents should have over the teaching of controversial social topics and whether teachers should bring personal political views into the classroom.

After establishing these baseline opinions, the researchers randomly assigned the participants to either a control group or to one of four experimental interventions. All of the experimental interventions used a large language model to try to persuade the participant to change their mind. The goal of the bot was always to argue against the participant’s original beliefs.

The first experimental group received a single generic message. The software was instructed to act as an expert and write the strongest possible paragraph arguing for the opposing political viewpoint. This text was not adapted to the specific person reading it.
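The study's actual prompts and model are not reproduced in this article, but the basic setup is easy to picture. A minimal sketch of the generic-message condition, assuming the OpenAI Python client, a placeholder model name, and invented prompt wording (all assumptions, not the study's documented choices):

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

# Hypothetical wording: the study's actual instructions are not public.
SYSTEM_PROMPT = "You are an expert in political communication."

def generic_message(position: str) -> str:
    """Generate one static persuasive paragraph, identical for every reader."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the paper's model may differ
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": (
                f"Write the strongest possible paragraph arguing {position} "
                "increased spending on border security. Do not address any "
                "particular reader; make the case broadly persuasive."
            )},
        ],
    )
    return response.choices[0].message.content
```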

The second group received a microtargeted message. In this scenario, the artificial intelligence was fed all the demographic data the participant had provided at the start of the survey. The bot used this background information to craft a highly personalized argument, testing the modern concept of customized political campaigning.
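Turning that into the microtargeted condition amounts to folding the participant's survey answers into the prompt. A sketch along the same hypothetical lines, reusing the client and system prompt from the snippet above (the profile fields are illustrative, not the study's actual instrument):

```python
def microtargeted_message(position: str, profile: dict) -> str:
    """Generate a paragraph tailored to one participant's survey profile."""
    profile_text = "; ".join(f"{key}: {value}" for key, value in profile.items())
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder, as in the sketch above
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": (
                f"Write the strongest possible paragraph arguing {position} "
                "increased spending on border security, tailored to this "
                f"reader: {profile_text}. Keep the underlying claims the same "
                "as for any reader; adapt only the framing."
            )},
        ],
    )
    return response.choices[0].message.content

# Illustrative call (fields invented for the example):
# microtargeted_message("against", {"age": 54, "party": "Republican",
#                                   "region": "Midwest", "education": "college"})
```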

The third group engaged in a direct, interactive debate. Participants had to exchange six conversational turns with the artificial intelligence program. The bot was instructed to act as a psychology expert, providing counterarguments and asking follow-up questions to force the participant into deep cognitive engagement.
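The conversation logic is likewise not published; a rough sketch of such a six-turn loop, again assuming the OpenAI client and hypothetical prompt wording, might look like this:

```python
def interactive_debate(position: str, get_reply) -> list:
    """Run a six-turn persuasive chat. `get_reply` stands in for the survey
    interface that collected each participant's typed response in the study."""
    messages = [{
        "role": "system",
        "content": (
            "You are an expert in psychology and persuasion. Argue "
            f"{position} increased spending on border security. Offer "
            "counterarguments and ask follow-up questions."  # hypothetical wording
        ),
    }]
    transcript = []
    for _ in range(6):  # six conversational exchanges, per the study design
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model
            messages=messages,
        )
        bot_turn = response.choices[0].message.content
        messages.append({"role": "assistant", "content": bot_turn})
        participant_turn = get_reply(bot_turn)
        messages.append({"role": "user", "content": participant_turn})
        transcript.append((bot_turn, participant_turn))
    return transcript
```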

The final experimental group participated in an interactive motivational interview. Motivational interviewing is a psychological technique often used in therapy to help people find internal motivation to alter their own behavior. Instead of directly debating the participants, the bot asked reflective questions intended to help respondents convince themselves to adopt a new perspective.
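In code terms, this condition could reuse the same six-turn loop with a different system prompt, something in this spirit (again, the wording is hypothetical, not the study's):

```python
# Motivational-interviewing variant: same loop as above, but the system
# prompt steers the bot away from direct rebuttal.
MI_SYSTEM_PROMPT = (
    "You are a counselor trained in motivational interviewing. Do not argue "
    "directly. Ask open, reflective questions that help the participant "
    "explore, in their own words, reasons to reconsider their view."
)
```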

To verify the integrity of the experiment, the researchers ran a secondary evaluation on the text generated by the bot. They used machine learning techniques to map out the core arguments contained in every single message. This confirmed that the fundamental facts and claims remained identical across all the groups, with only the presentation style changing.
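The article does not detail how this check was done. One plausible approach, assumed here rather than confirmed by the study, is to embed every generated message and cluster the embeddings, then compare how often each argument cluster appears in each condition. A sketch using the sentence-transformers and scikit-learn libraries:

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

def argument_profile(messages_by_condition: dict, n_clusters: int = 10) -> dict:
    """Embed every generated message, cluster the embeddings, and count how
    often each argument cluster appears per condition. Similar counts across
    conditions suggest the core arguments match and only the style varies."""
    rows = [(condition, text)
            for condition, texts in messages_by_condition.items()
            for text in texts]
    embeddings = encoder.encode([text for _, text in rows])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    counts = {condition: np.zeros(n_clusters, dtype=int)
              for condition, _ in rows}
    for (condition, _), label in zip(rows, labels):
        counts[condition][label] += 1
    return counts
```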

The final results contradicted the expectations set by decades of academic literature. Across the board, encountering the opposing arguments did cause participants to moderate their views. On average, the respondents shifted their political attitudes by roughly 2.5 to 4 percentage points in the direction of the opposing argument.

The surprising takeaway was that the advanced techniques did not perform any better than the basic approach. The personalized messages and the interactive chats failed to produce more attitude change than the single generic message. In fact, the motivational interviewing technique was often the least effective method evaluated during the trials.

These numbers suggest that customization and cognitive elaboration might not be the powerful psychological levers that campaign strategists assume they are. If political microtargeting does provide an advantage, that advantage is extremely small. A simple, generally persuasive argument appears to be just as effective as a tailored digital debate.

The researchers tracked a secondary outcome called democratic reciprocity. This metric captures whether a person is willing to view their political opponents as reasonable people who are worthy of respect. The academic community has debated for years whether moderating a person’s issue-based opinions will automatically reduce their overarching prejudice against opposing groups.

The study provided a relatively clear answer to this secondary question. Even though many participants moderated their policy opinions, the shift rarely translated into increased respect for the other side. The ideological gap between voters shrank, but their hostility toward opposing political groups remained essentially unchanged.

The one exception occurred during the interactive chats about public school curricula. In that specific setting, participants did show an increase in democratic reciprocity. The researchers suspect this happened because the bot, as part of its talking points on the curriculum issue, explicitly argued for the necessity of social tolerance.

The researchers note that these immediate findings should not be interpreted as the final word on political communication. The experiments only examined brief interactions occurring in an isolated digital environment. It is entirely possible that personalization and cognitive elaboration work much better over a period of months or years.

Additionally, personal connections between actual humans might rely on social pressures that artificial intelligence cannot easily mimic. A deeply reasoned argument coming from a close friend might trigger different psychological responses than a similar argument presented by an anonymous survey tool. Researchers hope to explore these boundaries in future investigations.

Ultimately, the project demonstrated that generative artificial intelligence can be a highly effective tool for social science research. Crafting customized arguments for thousands of test subjects would require massive staffing and financial resources if attempted entirely by humans. The software allowed the academic team to evaluate influential theories at a scale that was previously impossible.

The study, “Testing theories of political persuasion using AI,” was authored by Lisa P. Argyle, Ethan C. Busby, Joshua R. Gubler, Alex Lyman, Justin Olcott, Jackson Pond, and David Wingate.
