Psychologists pinpoint the conversational mechanisms that help humans bond with AI

New research published in the Journal of Social and Personal Relationships suggests that people can form meaningful social connections with artificial intelligence chatbots when the programs respond in a warm and empathetic way. The findings indicate that the feeling of being understood and validated by a chatbot tends to drive this sense of closeness.

Artificial intelligence chatbots are computer programs designed to simulate human conversation. Originally, people used these tools mostly for customer service or answering basic queries. Now, modern text generators are increasingly serving as companions, offering emotional support and mental health interventions.

Because people are beginning to treat these programs as social partners, scientists wanted to understand what exactly creates a sense of connection between a human and a machine. Historically, psychologists have observed that people tend to treat computers as social actors, applying human rules to interactions with machines. With the rise of highly advanced language models, this tendency has only grown stronger.

“AI chatbots are increasingly used not just to get information or complete tasks, but also in a social and relational way. People often share personal experiences and ask for advice about their lives, engaging with these systems almost as if they were interacting with another person,” said Alessia Telari, a postdoctoral researcher at the Catholic University of the Sacred Heart in Milan, who conducted the research as part of her PhD at the University of Milano-Bicocca.

“This shift made us curious about what drives that sense of connection. Drawing on theories of human relationships, we wondered whether the same dynamics might apply here, whether the way a chatbot responds to users’ self-disclosure plays a key role in making the interaction feel meaningful.”

The scientists wanted to know which plays a bigger role in building rapport: the specific topics people discuss, or the way the chatbot replies. In human interactions, intimacy usually develops when one person shares personal information and the other responds with understanding, validation, and care. This concept is known in psychology as perceived partner responsiveness.

Testing the impact of a warm and empathetic chatbot

The researchers designed two distinct experiments to see whether this same psychological mechanism applies when the partner is artificial. In the first study, 163 participants from Italy engaged in an eight-minute, unstructured text conversation with a chatbot powered by a popular language model.

The scientists manipulated the software through specific background instructions, directing it to respond in one of three ways. The first version used a relational style, designed to be warm, empathetic, and human-like. The second version used a non-relational style, acting factual and task-oriented while avoiding emotional language. The third version was a standard default setting that served as the control condition.
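To make the manipulation concrete, here is a minimal sketch of how three such conditions might be set up as system prompts for a chat-based language model. The study did not publish its exact instructions, so the prompt wording, the `SYSTEM_PROMPTS` dictionary, and the `build_messages` helper below are all illustrative placeholders, not the authors' materials.

```python
# Hypothetical reconstruction of the three experimental conditions.
# Prompt text is invented for illustration; the study's actual
# background instructions were not reproduced in this article.

SYSTEM_PROMPTS = {
    "relational": (
        "Respond in a warm, empathetic, human-like way. Acknowledge the "
        "user's feelings and show genuine interest in their experiences."
    ),
    "non_relational": (
        "Respond in a factual, task-oriented way. Avoid emotional "
        "language and keep answers concise and informative."
    ),
    "default": None,  # control condition: no extra background instructions
}

def build_messages(condition: str, user_text: str) -> list[dict]:
    """Assemble a chat request for the given experimental condition."""
    messages = []
    prompt = SYSTEM_PROMPTS[condition]
    if prompt is not None:
        messages.append({"role": "system", "content": prompt})
    messages.append({"role": "user", "content": user_text})
    return messages

# Example: the relational condition prepends a warm system prompt,
# while the default condition sends the user's message alone.
relational_msgs = build_messages("relational", "I had a rough day at work.")
default_msgs = build_messages("default", "I had a rough day at work.")
```

The key design point is that only the background instruction differs between conditions; the underlying model and the participant's freedom to choose topics stay constant.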

Participants were free to talk about any topic they chose during the eight-minute window. After the chat ended, they filled out a detailed questionnaire evaluating the program on various social metrics. These metrics included mind attribution, which measures how much agency and emotional capacity a person believes an entity possesses.

The researchers also measured perceived empathy, interaction satisfaction, and the participants’ own sense of interpersonal closeness. The relational chatbot produced significantly higher ratings across almost all of these categories compared to both the default and non-relational versions. People who interacted with the warm chatbot felt it possessed a greater capacity to experience emotions.

They also reported higher satisfaction of basic psychological needs. Specifically, participants felt a greater sense of belonging and meaningful existence after talking to the empathetic chatbot. The researchers noted that the default setting performed very similarly to the factual, non-relational setting.

The role of deep conversations and perceived responsiveness

The second experiment included 158 Italian participants and introduced a more structured conversation to test the impact of conversational depth. The researchers wanted to see if deep conversations prompted different reactions than casual ones. They programmed the chatbot to ask either superficial small talk questions or deep, personal questions designed to build closeness.

These deeper prompts were adapted from a well-known psychological exercise used to generate intimacy between human strangers. The researchers also kept the relational and non-relational response styles from the first experiment, dropping the default setting to focus on the two extremes. Participants interacted with the chatbot until the program signaled the end of the conversation.

The scientists found that people were quite willing to open up and share personal details when the chatbot asked deeper questions. This self-disclosure, in turn, led participants to perceive the chatbot as more responsive to their individual needs. Even with the deeper questions, the specific tone of the chatbot remained the dominant factor in building a bond.

When the program used a warm, relational response style, participants reported the highest levels of satisfaction and closeness. The scientists noted that the depth of the topic only increased closeness indirectly. By sharing more personal details, users gave the chatbot more opportunities to be supportive.

When the chatbot replied supportively to these personal disclosures, the users felt a stronger connection. Perceived responsiveness acted as the primary bridge linking the user’s personal sharing to their feeling of social connection.

“When chatbots respond in a warm and empathetic way, people tend to experience the interaction very differently: the chatbot feels more human-like, the conversation is more enjoyable, and most importantly, people feel more socially connected to it,” Telari told PsyPost.

“What seems to matter is a very familiar human process: when we share something personal and feel understood, validated, and cared for, we develop a sense of connection. Our findings suggest that mechanisms similar to those observed in human relationships may also emerge when the interaction partner is an AI.”

Designing emotionally supportive technology and future directions

These findings offer practical insights for the people who design and program interactive technology. In settings like peer support, education, or companionship for older adults, a relational response style may help users feel acknowledged. The researchers caution, however, that they are not suggesting these programs should replace human support networks.

Instead, the research highlights how small design choices can shape a user’s emotional experience. When a program validates a user’s feelings, the user is much more likely to want to interact with the software again in the future.

“Over time, many publicly available chatbots have shifted toward a more relational and human-like way of communicating, potentially leading users to feel socially connected to them,” Telari said. “Thus, as these technologies become more integrated into daily life, understanding these psychological mechanisms becomes increasingly important.”

While the research provides evidence that humans can feel connected to machines, there are some limitations to keep in mind. The experiments relied on brief, single interactions. A single eight-minute chat might not reflect how a relationship with an artificial intelligence develops over a longer period. The participants were mostly young adults from Italy, which limits how well these findings apply to other age groups or cultural backgrounds.

“We also focused on text-based interactions, which are common but only one way people engage with these chatbots,” Telari noted. “Future research should look at more naturalistic, long-term, and diverse interactions to better understand how these processes unfold in everyday life.”

“A key next step is to understand how these dynamics evolve over time and what their psychological consequences might be,” Telari added. “Ultimately, my long-term goal is to better understand when, how, and for whom interacting with these systems can be beneficial in supporting our social needs and when it might instead have unintended negative effects that risk undermining them.”

The study, “Can humans feel connected to AI? Perceived responsiveness drives social connection with AI chatbots,” was authored by Alessia Telari, Alessandro Gabbiadini, and Paolo Riva.
