How personality and culture relate to our perceptions of artificial intelligence

A recent study reveals that a person’s cultural background, personality traits, and technical skills shape how they view the impact of artificial intelligence on their overall well-being. The findings suggest that feeling competent with new technology and possessing a sense of personal control lead to more positive experiences with artificial intelligence. The research was published in the Journal of Technology in Behavioral Science.

As artificial intelligence becomes a regular part of daily life, from personalized internet recommendations to healthcare planning, questions have emerged about how these tools affect mental health and happiness. Previous research tends to focus on the positive contributions of these technologies in specific fields like education or medicine. Less is known about how an individual’s unique psychological traits influence their daily interactions with these systems.

Most existing studies also focus heavily on Western populations. This leaves a gap in understanding how people from different cultural backgrounds experience artificial intelligence.

“Public reactions to artificial intelligence are highly polarized. While some people see AI as exciting and beneficial, others express concern, discomfort, or even fear about its societal and personal consequences,” explained study authors Magnus Liebherr of the University of Duisburg-Essen and Raian Ali of Hamad Bin Khalifa University.

“Much of the public debate focuses on the capabilities of the technology itself, but psychological research suggests that people’s responses to new technologies are strongly shaped by individual differences. We therefore wanted to understand which personal factors are associated with how people perceive AI’s impact on their well-being.”

In particular, the researchers aimed to see how cultural differences, personality types, and a psychological concept called “locus of control” affect whether people view these tools as helpful or harmful. Locus of control refers to how much a person believes they have the power to influence events in their own life. Sociologists generally classify Arab cultures as collectivist, meaning they emphasize group harmony, social bonds, and community cohesion.

The United Kingdom represents a more individualistic culture, where personal autonomy and individual rights tend to take precedence. The researchers suspected these foundational cultural differences might shape how societies embrace new and potentially disruptive software.

“This geographical diversity was particularly important given the different cultural contexts in which AI is adopted and used,” Liebherr and Ali told PsyPost. “As we discussed in our work ‘Who lets AI take over? Cross-national variation in willingness to delegate socially important roles to artificial intelligence’ (Yankouskaya et al., 2026), cultural factors play a significant role in how people approach and integrate AI technologies into their lives.”

The researchers conducted an online survey involving 562 participants between the ages of 18 and 60. The sample was split evenly, featuring 281 individuals from the United Kingdom and 281 individuals from Arab countries. To qualify for the Arab sample, participants had to reside in a Gulf nation, ensuring a shared cultural and political background.

The participants completed several psychological questionnaires. First, they rated their own competency in using and managing artificial intelligence on a scale from one to six. Next, they answered questions designed to measure five major personality traits: openness, conscientiousness, extraversion, agreeableness, and neuroticism. Neuroticism is a trait associated with a tendency to experience anxiety and negative emotions.

Participants also completed an assessment to determine their locus of control, indicating whether they felt their life was guided by their own actions or by outside forces. Finally, they answered a modified well-being questionnaire. This specific survey asked them to rate how often they felt positive emotions, engagement, meaning, and accomplishment when thinking about artificial intelligence and its presence in society.

The data revealed significant cultural differences in how people view these technologies. Arab participants reported that artificial intelligence contributed much more positively to their well-being than British participants did. The British group actually scored higher on measures linking artificial intelligence to negative emotions and feelings of loneliness.

The scientists found that technical skills played a substantial role in shaping user attitudes. Across both cultural groups, individuals who reported higher competency with artificial intelligence perceived the technology as having a much more positive impact on their well-being. This suggests that understanding how these systems work and knowing how to use them helps reduce uncertainty and increases the perceived benefits of the technology.

“We expected personality traits to be the dominant predictors, but AI competency emerged as equally strong in predicting positive perceptions of AI,” Liebherr and Ali said. “This was encouraging because personality traits tend to be relatively stable, whereas competency can potentially be improved.”

Personality traits also strongly predicted user experiences, though the specific traits mattered differently depending on the culture. Individuals who scored high in neuroticism tended to view artificial intelligence as less beneficial and more concerning in both the British and Arab groups. This aligns with broader psychological concepts suggesting that people who are naturally prone to anxiety are more sensitive to the potential risks of new technologies.

“Another notable finding was the consistent role of anxiety-related traits: individuals who are generally more prone to worry tended to perceive AI more negatively across both cultural groups,” the researchers explained.

Other personality traits varied by region. Extraversion and conscientiousness predicted positive perceptions of artificial intelligence in the Arab sample. Agreeableness predicted positive perceptions in the British sample.

“This suggests that the influence of personality on attitudes toward AI may vary across cultural contexts, possibly due to other culture-related variables moderating this relationship,” Liebherr and Ali told PsyPost. “These cultural variations highlight the complexity of how personal and cultural factors interact in shaping AI perceptions.”

A consistent finding across both cultures was the importance of an internal locus of control. Participants who believed they were largely in control of their own life paths viewed artificial intelligence as a positive contributor to their well-being. The scientists suggest that feeling a strong sense of personal agency helps people feel more comfortable integrating new tools into their routines.

The statistical models used by the scientists explained a substantial amount of the differences in user attitudes. The tested variables accounted for 31 percent of the variance in perceptions in the British sample and 47 percent in the Arab sample. The analysis also revealed that demographic factors like age and gender did not influence how individuals perceived the technology’s contribution to their well-being.

“A central message of the study is that people’s experiences with AI are shaped not only by how AI products are designed but also by their own characteristics and skills,” Liebherr and Ali said.

As with all research, there are a few limitations to keep in mind when interpreting these findings. The study is correlational, which means it cannot prove that specific traits directly cause positive or negative views of artificial intelligence. High technical competency might lead to more positive perceptions, but people with positive attitudes might also simply be more motivated to learn about the technology.

The researchers also point out that the survey measured people’s subjective perceptions of their well-being, rather than objective changes in their mental health. The survey did not specify which types of artificial intelligence the participants should think about when answering the questions. A person might have a very different reaction to a helpful medical tool than they would to an automated hiring system or a social media algorithm.

“Future research should examine how these psychological factors interact with specific AI applications and how perceptions change over time as people gain experience,” Liebherr and Ali said. “It is also important to investigate both potential benefits and risks of AI use. For example, conversational AI systems may provide support and information, but there is also a need to study whether heavy reliance on such systems could have unintended negative consequences for well-being.”

“As we explored in our work ‘Can ChatGPT be addictive? A call to examine the shift from support to dependence in AI conversational large language models’ (Yankouskaya et al., 2025), understanding the full spectrum of AI’s impact, from enhancement to potential problematic use, is crucial for developing responsible AI systems and usage guidelines. We are also interested in understanding how interventions aimed at improving AI competency and sense of control might positively influence well-being outcomes.”

“A practical implication of our findings is that improving people’s competency and skills in AI (not just their understanding) may help them feel more comfortable with these technologies,” the researchers added. “For developers and policymakers, this means providing transparent systems, clear explanations, and opportunities for users to build skills and maintain a sense of control. Explainable AI (XAI) is particularly important in this regard.”

“Our study found that internal locus of control (the belief that one can influence one’s own outcomes) was a significant predictor of positive AI perceptions. By helping users understand how AI makes decisions, XAI can enhance this sense of control, which in turn may lead to more positive perceptions of AI’s contribution to well-being. Supporting users in developing competency and providing them with tools to understand and control AI systems may be just as important as improving the technology itself.”

The study, “Artificial Intelligence vs. Users’ Well-Being and the Role of Personal Factors: A Study on Arab and British Samples,” was authored by Magnus Liebherr, Areej Babiker, Sameha Alshakhsi, Dena Al-Thani, Ala Yankouskaya, Christian Montag, and Raian Ali.
