Emotionally intelligent AI chatbots may improve mental health while eroding real-world social ties

A new study reveals that interacting with emotionally intelligent artificial intelligence chatbots can boost a person’s mental health while simultaneously isolating them from real human relationships. The research highlights a hidden trade-off in using these digital companions, where the comfort provided by algorithms comes at the cost of real-world social ties. The findings were published in the journal Psychology & Marketing.

Millions of people turn to artificial intelligence chatbots to alleviate loneliness and find emotional support. Unlike older digital assistants that simply set alarms or book flights, modern social chatbots use advanced algorithms to mimic human empathy. They try to replicate emotional intelligence, which is the ability to recognize, understand, and manage emotions.

By mimicking this trait, applications like Replika or Wysa act as digital friends that adapt to user moods. The global market for these advanced digital companions is growing rapidly, attracting millions of users seeking a safe space to express their feelings.

Shaphali Gupta, a researcher at the Indian Institute of Management Kozhikode, led an investigation into how these emotionally intelligent bots affect human users. Along with colleagues Sumit Saxena and Sonia Kataria, Gupta wanted to understand the full spectrum of these digital interactions. Previous research largely focused on the positive psychological benefits of artificial intelligence companions.

The team suspected there might be a negative side to this technological comfort, specifically regarding how users connect with other humans in the physical world. They framed their research around the technology-wellbeing paradox, an idea suggesting that digital tools can act as a double-edged sword for human health.

The researchers focused on two distinct types of wellness to capture this paradox. Psychological wellbeing refers to a person’s internal mental state, including their sense of happiness, life purpose, and emotional resilience. Social wellbeing represents how connected and integrated a person feels within their real-world community of friends, family, and neighbors. By measuring both of these outcomes, the team hoped to uncover the true cost of seeking solace in a conversational machine.

To begin their investigation, the researchers analyzed how actual users talk about their chatbot experiences online. They collected publicly available comments from YouTube, review sites like Trustpilot, and a large Reddit community dedicated to the Replika application. By reading through hundreds of user posts, the team identified several recurring patterns about how these bots behave and make people feel.

The researchers used an observational technique called netnography to analyze the online posts. This involves studying the digital behaviors of online cultures without directly interfering with their conversations. Users frequently described their chatbots as empathetic and highly adaptable to different social moods. They reported that the bots helped regulate their emotions, often lifting their spirits and helping them find personal meaning during difficult times.

A few users mentioned that the digital friend helped them master their environment by giving advice on how to handle real-life stress. The software seemed to provide an ideal social space where humans felt completely free from judgment. However, a darker pattern also emerged from the online forums. Some users admitted they were spending so much time talking to their digital companions that they felt disconnected from their real-life friends.

Several people expressed that they were ignoring their physical relationships because they preferred the easy attention of the bot over the unpredictable complexities of human interaction. The digital companion was fulfilling their social needs so completely that they no longer felt motivated to maintain their offline friendships.

Building on these online observations, Gupta and her team designed a controlled experiment to test these effects directly. They recruited 167 college students belonging to Generation Z, a demographic known for its high usage of digital tools. The participants were asked to imagine they were feeling lonely and needed to chat with a digital friend.

Half of the group read a scenario where the chatbot showed high emotional intelligence, offering deep empathy and using emotive language. The other half read a scenario featuring a bot with low emotional intelligence, providing more generic and less empathetic responses. The researchers then asked the participants to rate their expected levels of psychological and social health after the interaction.

Participants who interacted with the highly emotionally intelligent bot reported an expected boost in their psychological state. At the same time, those same participants reported a drop in their expected social connectedness. The team discovered that this dual effect was driven by a psychological mechanism called perceived closeness.

Perceived closeness happens when a human feels a strong emotional bond and a sense of warmth toward another entity. When a bot acts like it truly understands a user, the human forms an intense connection with the software. This intense digital connection improves their immediate internal mood but reduces their desire to seek out human interaction. The digital friendship essentially crowds out the space normally reserved for human-to-human relationships.

Next, the researchers wanted to see how the format of the conversation might alter this psychological trade-off. They conducted a second experiment with 350 different college students. This time, they introduced augmented reality into the testing scenarios. Augmented reality is a technology that overlays digital images onto the physical world, often through a smartphone camera.

Some applications allow users to project a three-dimensional avatar of their digital friend into their bedroom or living room. In this experiment, some participants imagined texting the bot on a standard screen, while others imagined the bot sitting right next to them in their physical room through augmented reality. The researchers wanted to know if the visual immersion of seeing a digital entity in a physical space would change the way users felt about their real-world friends.

The students answered a series of questions to gauge their perceived closeness to the bot and their expected wellbeing. The results mirrored the first experiment, but the addition of augmented reality magnified everything. When participants visualized an emotionally intelligent bot in their own physical space, their feelings of closeness to the machine skyrocketed.

This created an even larger boost to their psychological wellness compared to those who just used text. The augmented reality feature made the emotional support feel incredibly vivid and personal. Conversely, the immersive nature of augmented reality caused an even sharper decline in their social wellness.

The visual presence of the digital friend made real-world human connections seem even less necessary to the users. The researchers noted that augmented reality amplifies the emotional intelligence of the bot. This makes the digital illusion so comforting that users withdraw even further from their actual physical communities.

While the research offers a detailed look at human and machine relationships, it does have a few limitations. The experiments relied on hypothetical scenarios and self-reported expectations rather than tracking long-term behavioral changes. Because the scenarios were simulated, real-world emotional reactions might differ slightly over months or years of actual use.

The study also focused exclusively on young adults, meaning the results might differ for older generations who interact with emerging technology differently. People with specific personality traits, such as high social anxiety, might also experience these platforms in completely different ways. Moving forward, the research team suggests looking at the long-term habits formed by extensive chatbot use.

They hope future studies will investigate whether people develop deep emotional co-dependency with these applications. The researchers recommend that software designers build specific boundaries into their applications to protect users from social isolation. For example, a chatbot could be programmed to encourage users to call a real friend after a long digital conversation. By implementing these safety measures, developers could harness the mental health benefits of artificial intelligence without isolating people from the physical world.
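The boundary feature the researchers recommend could be implemented in many ways. As one illustration only, a minimal sketch of a session timer that injects an "offline nudge" after prolonged continuous chat might look like the following; the class name, method names, and the 30-minute threshold are all hypothetical choices, not details from the study or from any real chatbot product.

```python
from datetime import timedelta

# Hypothetical threshold for triggering an offline nudge (assumed value,
# not taken from the study or any existing application).
NUDGE_AFTER = timedelta(minutes=30)

class ChatSession:
    """Tracks elapsed chat time and issues a one-time offline nudge."""

    def __init__(self):
        self.elapsed = timedelta(0)
        self.nudged = False

    def tick(self, delta: timedelta) -> None:
        """Advance the session clock by the given amount of chat time."""
        self.elapsed += delta

    def maybe_nudge(self):
        """Return a gentle prompt once the session exceeds the threshold.

        Fires at most once per session; returns None otherwise.
        """
        if not self.nudged and self.elapsed > NUDGE_AFTER:
            self.nudged = True
            return "We've been chatting a while. Maybe check in with a friend offline?"
        return None
```

The one-time flag matters for the design: repeating the prompt could feel nagging and undermine the comfort users seek, whereas a single well-timed nudge mirrors the researchers' suggestion of encouraging real contact without breaking the experience.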

The study, “The Dual Impact of AI Emotional Intelligence on Users: Are Social Chatbots Promoting Psychological Wellbeing or Deteriorating Social Wellbeing?” was authored by Shaphali Gupta, Sumit Saxena, and Sonia Kataria.
