A new series of studies published in the Journal of Consumer Psychology suggests that political conservatives are more likely than liberals to accept recommendations from artificial intelligence (AI) systems in everyday contexts like movies, music, and recipes. Although prior research has often found that conservatives tend to be skeptical of new technologies, these findings reveal a more complex pattern: when AI recommendations appear to reflect a person’s own previous choices, conservatives are more inclined to follow them—driven by a broader preference for consistency and resistance to change.
Consumers are regularly exposed to AI-driven recommendations, from the shows Netflix suggests to the songs played on Spotify or the recipes featured on cooking apps. While much research has focused on how to make these systems more accurate, less attention has been paid to who is more likely to accept such suggestions. Given that political ideology influences a wide range of consumer behaviors—from brand preferences to food choices—the research team aimed to explore how ideology might shape receptivity to AI-generated content.
The authors—Iman Paul (Montclair State University), Smaraki Mohanty (Elon University), Monica Wadhwa (Temple University), and Jeffrey Parker (University of Illinois at Chicago)—conducted six studies involving more than 1,500 participants and a Facebook ad campaign to test how political beliefs influence consumer responses to AI recommendations.
Across a series of controlled online experiments, participants were asked to imagine or respond to AI-generated recommendations for movies, music, or recipes. In some cases, they were told the recommendation was based on their own past preferences. In others, this detail was omitted or changed—such as when the recommendation was intentionally described as novel or different from what the user usually consumed.
Participants also rated their political ideology on a scale from liberal to conservative. The researchers then analyzed how likely each group was to accept or follow the AI-generated suggestion. Some studies also measured participants’ preference for consistency and resistance to change to better understand what psychological factors might explain ideological differences.
In the final study, the researchers ran a real-world Facebook ad campaign promoting an AI-generated music playlist. They compared click-through rates across geographic regions classified as liberal- or conservative-leaning on the basis of voting data.
In contrast to the widespread assumption that conservatives are more skeptical of new technologies, the studies consistently found that conservatives were more likely than liberals to accept AI-generated recommendations—but only under specific conditions.
The effect was strongest when participants believed that the AI recommendation was based on their own past behavior, such as previous music choices or favorite movie genres. Conservatives’ preference for consistent, familiar experiences seemed to override any hesitancy about AI technology itself. This pattern held across product types and even in the real-world Facebook ad study, where conservative-leaning regions showed higher click-through rates for AI-based playlists.
This greater acceptance was not driven by a stronger belief among conservatives that AI recommendations were based on past behavior. Across several pilot studies, participants of all political leanings were equally likely to believe that AI suggestions were generated primarily from their prior preferences.
The researchers also explored why this ideological divide emerged. In one experiment, they measured participants’ resistance to change and preference for consistency—traits that past research has linked more strongly to conservatives. These traits statistically explained why conservatives were more inclined to follow AI suggestions that aligned with previous behaviors. When recommendations were inconsistent with a user’s prior preferences or came from a new, unfamiliar service, the ideological gap in acceptance disappeared.
In a separate study, participants chose between two music apps: one that made suggestions based solely on a person’s past listening history, and another that introduced novel and unexpected options. Conservatives were significantly more likely to choose the app that promised consistency, while liberals were more evenly split.
The findings shed light on an important psychological factor influencing AI adoption, but they do not suggest that conservatives are universally more enthusiastic about AI. The studies focused on low-stakes, everyday consumption contexts, where familiarity and consistency are appealing. Other research has shown that in high-stakes settings—such as medical decisions or autonomous vehicles—conservatives may remain more cautious or skeptical toward AI.
Future research could explore how these patterns extend to other domains, including political messaging, financial advice, or public health interventions. It would also be useful to examine whether these preferences evolve as people gain more experience with AI-based systems over time. Additionally, the study did not explore how trust in technology might interact with other factors like age, education, or media exposure, which could influence acceptance of AI recommendations in complex ways.
The study, “Swipe right: When and why conservatives are more accepting of AI recommendations,” was published May 19, 2025, in the Journal of Consumer Psychology.