As artificial intelligence becomes a staple of modern life, people are increasingly turning to chatbots for companionship and comfort. A new study suggests that while users often rely on these digital entities for stability, the resulting bond is built more on habit and trust than on deep emotional connection. These findings on the psychology of human-machine relationships were published in the journal Psychology of Popular Media.
The rise of sophisticated chatbots has created a unique social phenomenon in which humans interact with software as if it were a living being. This dynamic draws upon a concept known as social presence theory, which describes the psychological sensation that another entity is physically or emotionally present during a mediated interaction.
Designers of these systems often aim to create a sense of social presence to make the user experience more engaging. The goal is for the artificial agent to appear to have a personality and the capacity for a relationship. However, the academic community has not fully reached a consensus on what constitutes intimacy in these synthetic scenarios.
Researchers wanted to understand the mechanics of this perceived intimacy. They sought to determine if personality traits influence how a user connects with a machine. The investigation was led by Yingjia Huang from the Department of Philosophy at Peking University and Jianfeng Lan from the School of Media and Communication at Shanghai Jiao Tong University.
The team recruited 103 participants who actively use AI companion applications such as Doubao and Xingye. These apps are designed to provide emotional interaction through text and voice. The participants completed detailed surveys designed to measure their personality traits and their perceived closeness to the AI.
To measure personality, the researchers utilized the “Big Five” framework. This model assesses individuals based on neuroticism, conscientiousness, agreeableness, openness, and extraversion. The survey also evaluated intimacy through five specific dimensions: trust, attachment, self-disclosure, virtual rapport, and addiction.
In addition to the quantitative survey, the researchers conducted in-depth interviews with eight selected participants. These conversations provided qualitative data regarding why users turn to digital companions. The interview subjects were chosen because they reported higher levels of intimacy in the initial survey.
The study revealed that most users do not experience a profound sense of intimacy with their chatbots. The average scores for emotional closeness were relatively low. This suggests that current technology has not yet bridged the gap required to foster deep interpersonal connections.
When analyzing the components of the relationship, the authors identified trust and addiction as the primary drivers. Users viewed the AI as a reliable outlet that is always available. The researchers interpreted the “addiction” component not necessarily as a pathology, but as a habit formed through daily routines.
The data showed that specific personality types are more prone to bonding with algorithms. Individuals scoring high in neuroticism reported stronger feelings of intimacy. Neuroticism is a trait often associated with emotional instability and anxiety.
For these users, the predictability of the computer program offers a sense of safety. Humans can be unpredictable or judgmental, but a coded companion provides consistent responses. One participant noted in an interview, “He’s always there, no matter what mood I’m in.”
People with high openness to experience also developed stronger bonds. These users tend to be imaginative and curious about new technologies. They engage with the AI as a form of exploration.
Users with high openness are willing to suspend disbelief to enjoy the interaction. They view the exchange as a form of experimental play rather than a replacement for human contact. They do not require the AI to be “real” to find value in the conversation.
The interviews highlighted that users often engage in emotional projection. They attribute feelings to the bot even while knowing it has no consciousness. This allows them to feel understood without the complexities of reciprocal human relationships.
The researchers identified three distinct ways users engaged with these systems. The first is “objectified companionship.” These users treat the AI like a digital pet, engaging in routine check-ins without deep emotional investment.
The second category is “emotional projection.” Users in this group use the AI as a safe container for their vulnerabilities. They vent their frustrations and anxieties, finding comfort in the machine’s non-judgmental nature.
The third category is “rational support.” These users do not seek emotional warmth. Instead, they value the AI for its logic and objectivity, using it as a counselor or advisor to help regulate their thoughts.
Despite these uses, participants frequently expressed frustration with technological limitations. Many described the AI’s language as too formal or repetitive. One user compared the experience to reading a customer service script.
This lack of spontaneity hinders the development of genuine immersion. Users noted that the AI lacks the warmth and fluidity of human conversation. Consequently, the relationship remains functional rather than truly affective.
The study posits that this form of intimacy relies on a “functional-affective gap.” Users maintain a high frequency of interaction for functional reasons, such as boredom relief or anxiety management. However, this does not translate into high emotional intimacy.
Trust in this context is defined by reliability rather than emotional closeness. Users trust the AI not to leak secrets or judge them. This form of trust acts as a substitute for the intuitive understanding found in human bonds.
The authors reference philosopher Martin Buber’s distinction between “I–Thou” and “I–It” relationships. A true intimate bond is usually an “I–Thou” connection involving mutual recognition. Interactions with AI are technically “I–It” relationships because the machine lacks subjectivity.
However, the findings suggest that users psychologically approximate an “I–Thou” dynamic. They project meaning onto the AI’s output. The experience of intimacy is co-constructed by the user’s imagination and needs.
This dynamic creates a new relational paradigm. The line between simulation and reality becomes blurred. The user feels supported, which matters more to them than the ontological reality of the supporter.
The researchers argue that AI serves as a technological mediator of social affect. It functions as a mirror for the user’s emotions. The intimacy is layered and highly dependent on the context of the user’s life.
The study relied on a relatively small sample of users from a specific cultural context. Its focus on Chinese users may limit how well the results generalize to other populations. Cultural attitudes toward technology and privacy could shape these dynamics differently in other regions.
The cross-sectional nature of the survey also limits the ability to determine causality. It is unclear whether neuroticism drives users to seek out AI companions or whether these interactions simply appeal to people with that trait. Longitudinal studies would be needed to track how these relationships evolve over time.
Future investigations could examine how improved AI memory and emotional mimicry might alter these dynamics. As the technology becomes more lifelike, the distinction between functional and emotional intimacy may narrow. The authors imply that ethical design is essential as these bonds become more common.
The study, “Personality Meets the Machine: Traits and Attributes in Human–Artificial Intelligence Intimate Interactions,” was authored by Yingjia Huang and Jianfeng Lan.