New study shows that a robot’s feedback can shape human relationships

A new study has found that a robot’s feedback during a collaborative task can influence the feeling of closeness between the human participants. The research, published in Computers in Human Behavior, indicates that this effect changes depending on the robot’s appearance and how it communicates.

As robots become more integrated into workplaces and homes, they are often designed to assist with decision-making. While much research has focused on how robots affect the quality of a group’s decisions, less is known about how a robot’s presence might alter the personal relationships between the humans on the team. The researchers sought to understand this dynamic by exploring how a robot’s agreement or disagreement impacts the sense of interpersonal connection people feel.

“Given the rise of large language models in recent years, we believe robots of different forms will soon be equipped with non-scripted verbal language to help people make decisions in various contexts. We conducted our research to call for careful consideration and control over the precise behaviors robots should use to provide feedback in the future,” said study author Ting-Han Lin, a computer science PhD student at the University of Chicago.

The investigation centered on two established psychological ideas. One, Fritz Heider's Balance Theory, suggests that people feel more positive toward one another when they are treated similarly by a third party, even if that treatment is negative. The other, the influence of negative affect, proposes that a negative tone or criticism can damage the general atmosphere of an interaction and harm relationships.

To test these ideas, the researchers conducted two separate experiments, each involving pairs of participants who did not know each other. In both experiments, the pairs worked together to answer a series of eight personal questions, such as “What is the most important factor contributing to a life well-lived?” For each question, participants first gave their own individual answers before discussing and agreeing on a joint response.

A robot was present to mediate the task. After each person gave their initial answer, the robot would provide feedback. This feedback varied along two dimensions. The first was its positivity: the robot either agreed or disagreed with the person's statement. The second was its treatment of the pair: the robot either treated both people equally (agreeing with both or disagreeing with both) or unequally (agreeing with one and disagreeing with the other).

The first experiment involved 172 participants interacting with a highly human-like robot named NAO. This robot spoke aloud, gestured by nodding or shaking its head, and used artificial intelligence to summarize each person's response before giving its feedback. Its verbal disagreements were designed to grow in intensity, beginning with mild phrases and ending with statements like, “I am fundamentally opposed with your viewpoint.”

The results from this experiment showed that the positivity of the robot’s feedback had a strong effect on the participants’ relationship. When the NAO robot gave positive feedback, the two human participants reported feeling closer to each other. When the robot consistently gave negative feedback, the participants felt more distant from one another.

“A robot’s feedback to two people in a decision-making task can shape their closeness,” Lin told PsyPost.

This outcome supports the theory regarding the influence of negative affect. The robot’s consistent negativity seemed to create a less pleasant social environment, which in turn reduced the feeling of connection between the two people. The robot’s treatment of the pair, whether equal or unequal, did not appear to be the primary factor shaping their closeness in this context. Participants also rated the human-like robot as warmer and more competent when it was positive, though they found it more discomforting when it treated them unequally.

The second experiment involved 150 participants and a robot with very few human-like features. This robot resembled a simple, articulated lamp and could not speak. It communicated its feedback exclusively through minimal gestures, nodding for agreement or shaking its head for disagreement.

With this less-human robot, the findings were quite different. The main factor influencing interpersonal closeness was the robot’s treatment of the pair. When the robot treated both participants equally, they reported feeling closer to each other, regardless of whether the feedback was positive or negative. Unequal treatment, where the robot agreed with one person and disagreed with the other, led to a greater sense of distance between them.

This result aligns well with Balance Theory. The shared experience of being treated the same by the robot, either through mutual agreement or mutual disagreement, seemed to create a bond. The researchers also noted a surprising finding. When the lamp-like robot disagreed with both participants, they felt even closer than when it agreed with both, suggesting that the robot became a “common enemy” that united them.

“Heider’s Balance Theory dominates when a low anthropomorphism robot is present,” Lin said.

The researchers propose that the different outcomes are likely due to the intensity of the feedback delivered by each robot. The human-like NAO robot’s use of personalized speech and strong verbal disagreement was potent enough to create a negative atmosphere that overshadowed other social dynamics. Its criticism was taken more seriously, and its negativity was powerful enough to harm the human-human connection.

“The influence of negative affect prevails when a high anthropomorphism robot exists,” Lin said.

In contrast, the simple, non-verbal gestures of the lamp-like robot were not as intense. Because its disagreement was less personal and less powerful, it did not poison the overall interaction. This allowed the more subtle effects of balanced versus imbalanced treatment to become the main influence on the participants’ relationship. Interviews with participants supported this idea, as people interacting with the machine-like robot often noted that they did not take its opinions as seriously.

Across both experiments, the robot’s feedback did not significantly alter how the final joint decisions were made. Participants tended to incorporate each other’s ideas fairly evenly, regardless of the robot’s expressed opinion. This suggests the robot’s influence was more on the social and emotional level than on the practical outcome of the decision-making task.

The study has some limitations, including the fact that the two experiments were conducted in different countries with different participant populations. The first experiment used a diverse group of museum visitors in the United States, while the second involved university students in Israel. Future research could explore these dynamics in more varied contexts.

The study, “The impact of a robot’s agreement (or disagreement) on human-human interpersonal closeness in a two-person decision-making task,” was authored by Ting-Han Lin, Yuval Rubin Kopelman, Madeline Busse, Sarah Sebo, and Hadas Erel.
