A new study finds that individuals with high attachment anxiety are more prone to developing problematic usage patterns with conversational artificial intelligence. This connection appears to be strengthened when these individuals form an emotional bond with the technology and have a tendency to view it as human-like. The research was published in the journal Psychology Research and Behavior Management.
The recent rise of conversational artificial intelligence, such as chatbots and virtual assistants, has provided people with a new way to interact and find companionship. These programs use natural language to hold personalized, one-on-one conversations. During periods of increased social isolation, like the COVID-19 pandemic, millions of people turned to these technologies. This trend raised an important question for scientists: Does this innovation pose risks for specific groups of people?
Researchers led by Shupeng Heng at Henan Normal University focused on individuals with attachment anxiety. This trait is characterized by a persistent fear of rejection or abandonment in relationships, leading to a strong need for closeness and reassurance. People with high attachment anxiety are already known to be at elevated risk for other forms of problematic technology use, such as smartphone and online gaming addictions. The research team wanted to see whether this same vulnerability applied to conversational artificial intelligence and to understand the psychological processes involved.
The investigation sought to explore the direct link between attachment anxiety and what the researchers call the problematic use of conversational artificial intelligence, a pattern of addictive-like engagement that negatively impacts daily life. Beyond this direct link, the researchers examined two other factors. They explored whether forming an emotional attachment to the artificial intelligence acted as a bridge between a person’s anxiety and their problematic use. They also investigated if a person’s tendency to see the artificial intelligence as human-like, a trait called anthropomorphic tendency, amplified these effects.
To conduct their investigation, the researchers used an online platform to recruit 504 Chinese adults who had experience with conversational artificial intelligence. Participants completed a series of questionnaires designed to measure four key variables. One questionnaire assessed their level of attachment anxiety, with items related to fears of rejection and a desire for closeness. Another measured their emotional attachment to the artificial intelligence they used, asking about the strength of the emotional bond they felt.
A third questionnaire evaluated their anthropomorphic tendency, which is the inclination to attribute human characteristics, emotions, and intentions to nonhuman things. Participants rated their agreement with statements like, “I think AI is alive.” Finally, a scale was used to measure the problematic use of conversational artificial intelligence. This scale included items describing addictive behaviors, such as trying and failing to cut back on use. The researchers then used statistical analyses to examine the relationships between these four measures.
The results first showed a direct connection between attachment anxiety and problematic use. Individuals who scored higher on attachment anxiety were also more likely to report patterns of compulsive and unhealthy engagement with conversational artificial intelligence. This finding supported the researchers’ initial hypothesis that this group is particularly vulnerable.
The analysis also revealed a more complex, indirect pathway. The study found that people with higher attachment anxiety were more likely to form a strong emotional attachment to the conversational artificial intelligence. This emotional attachment was, in itself, a strong predictor of problematic use. This suggests that emotional attachment serves as a connecting step. Anxious individuals’ need for connection may lead them to form a bond with the technology, and it is this bond that in part drives the problematic usage.
The most nuanced finding involved the role of anthropomorphic tendency. The researchers discovered that this trait acted as a moderator, meaning it changed the strength of the relationship between attachment anxiety and problematic use. When they separated participants into groups based on their tendency to see the artificial intelligence as human-like, a clear pattern emerged.
For individuals with a low anthropomorphic tendency, their level of attachment anxiety was not significantly related to their problematic use of the technology. In contrast, for those with a high tendency to see the artificial intelligence as human, attachment anxiety was a powerful predictor of problematic use. Seeing the artificial intelligence as a social partner appears to make anxious individuals much more susceptible to developing an unhealthy dependency.
This moderating effect also applied to the formation of emotional bonds. Attachment anxiety predicted emotional attachment to the artificial intelligence at both low and high levels of anthropomorphic tendency, but the link was much stronger among those with a high tendency to see the technology as human. In other words, the combination of high attachment anxiety and a strong tendency to anthropomorphize produced the strongest emotional bonds with the artificial intelligence, which in turn increased the risk of problematic use.
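For readers curious how a moderated mediation pattern like this is typically examined, the sketch below illustrates the general logic with simulated data and ordinary least squares regression, where moderation appears as an interaction term. Every variable name, coefficient, and library choice here is an assumption made for illustration; this is not the authors' data or analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: names and effect sizes are illustrative assumptions.
rng = np.random.default_rng(42)
n = 504  # matches the study's sample size
anxiety = rng.normal(size=n)   # attachment anxiety score
anthro = rng.normal(size=n)    # anthropomorphic tendency score
# Emotional attachment depends on anxiety, more strongly when anthro is high.
attachment = 0.3 * anxiety + 0.3 * anxiety * anthro + rng.normal(size=n)
# Problematic use depends on anxiety directly and on emotional attachment.
use = 0.2 * anxiety + 0.5 * attachment + rng.normal(size=n)

df = pd.DataFrame(
    {"anxiety": anxiety, "anthro": anthro, "attachment": attachment, "use": use}
)

# Mediator model: does anxiety predict emotional attachment, and does
# anthropomorphic tendency moderate that path (the anxiety:anthro interaction)?
mediator_model = smf.ols("attachment ~ anxiety * anthro", data=df).fit()

# Outcome model: does emotional attachment predict problematic use
# over and above anxiety and its interaction with anthropomorphic tendency?
outcome_model = smf.ols("use ~ attachment + anxiety * anthro", data=df).fit()

print(mediator_model.params)
print(outcome_model.params)
```

In this kind of analysis, a significant interaction term in the mediator model would correspond to the pattern described above, with the anxiety-to-attachment path growing stronger as anthropomorphic tendency increases.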
The study has some limitations that the authors acknowledge. Because the data were collected at a single point in time, the study can show that these traits are related but cannot prove that attachment anxiety causes problematic use. Future research could follow individuals over time to better establish a causal link. Another area for future exploration is the design of the technology itself. Different types of conversational artificial intelligence, such as a simple text chatbot versus a virtual assistant with a human-like avatar, may have different effects on users.
The researchers suggest that their findings have practical implications for the design of these technologies. For instance, developers could consider creating versions with less human-like features for users who may be at higher risk. They could also embed features into the software that monitor for excessive use or provide educational content about healthy technology engagement. For individuals identified as being at high risk, the study suggests that interventions aimed at reducing anxiety, such as mindfulness practices, could help decrease their dependency on these virtual companions.
The study, “Attachment Anxiety and Problematic Use of Conversational Artificial Intelligence: Mediation of Emotional Attachment and Moderation of Anthropomorphic Tendencies,” was authored by Shupeng Heng and Ziwan Zhang.