A recent study published in the journal Consciousness and Cognition has found that interacting with an artificial intelligence partner alters our sense of control in unexpected ways. When people work on a task alongside a virtual agent capable of taking action, they consciously feel less responsible for the outcome, yet their unconscious brain activity shows a heightened tracking of their own actions. This suggests that the human mind adapts to the presence of digital partners much like it does to other people.
The scientific concept of the “sense of agency” refers to the feeling that a person is the direct cause of events happening around them. For example, when you flip a light switch and the room illuminates, you naturally feel a sense of ownership over that action and its result. Past studies have shown that this feeling tends to weaken when other people are present and capable of acting.
This weakening is similar to the bystander effect, where individuals in a crowd feel less responsible for helping in an emergency because they assume someone else will step in. This creates a diffusion of responsibility, meaning the mental burden of taking action is spread out among the group. Researchers at the University of East Anglia wanted to know if this same psychological diffusion happens in online environments when the bystander is a virtual artificial agent.
“The study is based on the ‘Bystander effect’ phenomenon where one is less likely to take action when there are other people around who can also act,” said study author Anh H. Le. “This creates a diffusion of responsibility as one feels less responsible for taking action in such social context.”
In addition to measuring direct feelings of control, the scientists wanted to measure unconscious feelings of control. They did this by looking at the temporal binding effect, a perceptual illusion in which people judge the time between their voluntary action and its outcome as shorter than it actually is. The researchers sought to understand if working with a computer program would change both this hidden timing perception and a person’s direct judgment of their own control.
To test these ideas, the researchers set up two online experiments. In the first experiment, 123 participants engaged in a computer task where a shape on the screen gradually expanded. The participants had to press a key to stop the shape before it turned red, which would result in losing a large number of points.
Participants completed this task under different conditions. In one scenario, they worked entirely alone. In another scenario, they were introduced to a virtual partner named Bobby, represented by a smiling digital face on the computer screen.
The participants were told that Bobby was an artificial partner who could also press a button to stop the shape from expanding. Bobby was programmed to intervene only if the shape grew dangerously large. This mimicked a shared situation where either the human or the machine could take responsibility for finishing the task.
After the shape stopped, the participants heard a tone and saw the shape change color. They were then asked to estimate the amount of time that passed between the tone and the color change by holding down the spacebar. Finally, they used a digital slider to rate how much control they explicitly felt over the outcome on a scale from zero to one hundred.
“We adapted a paradigm where participants had to stop the circle from enlarging by pressing a button to prevent losing points. They either worked alone or with Bobby the artificial partner. When they worked with Bobby, Bobby could also act to stop the circle from enlarging and, importantly, if no one acted the participants would lose most points, thus mimicking the diffusion of responsibility scenario,” Le explained.
The data showed that when working with the virtual partner, participants rated their direct feeling of control lower than when they worked alone. They consciously felt less responsible for the outcome. This suggests that the presence of the artificial agent caused a diffusion of responsibility in their conscious minds.
At the same time, the implicit measure revealed the exact opposite pattern. When Bobby was present, participants perceived the time between their action and the outcome as being noticeably shorter than when they worked alone. This increased temporal binding provides evidence that their unconscious sense of agency actually grew stronger when competing with the artificial partner.
The scientists conducted a second online experiment with 102 new participants to see if the mere visual presence of the digital partner caused these psychological shifts. They used the exact same shape-stopping task but added a new condition called “Being Observed.” In this new setup, the avatar for Bobby was visible on the screen, but the participants were informed that the artificial agent could only watch and was unable to take any action.
The rest of the procedure remained identical, with participants estimating the time intervals and rating their conscious feelings of control. The findings from the second experiment replicated the first, showing a decrease in conscious control and an increase in unconscious temporal binding when Bobby was allowed to act. However, in the condition where Bobby was merely observing, participants’ sense of agency matched the levels observed when they worked completely alone.
This indicates that simply looking at a digital face does not change human psychology. Instead, the artificial agent must have the actual ability to interfere with the task for the human brain to adjust its sense of agency. The researchers propose that the brain subconsciously heightens its tracking of actions to clearly distinguish between what the human did and what the machine might do.
The findings demonstrate that “virtual artificial agents can indeed influence our sense of agency in human-machine interaction in two ways,” Le told PsyPost. “When the artificial agent can also take action, we explicitly feel a reduced sense of agency because we think about the possible actions that such an artificial agent could take, and this interferes with our own decision as to whether or not to also act.
“At the same time, we have an implicit system (temporal binding) that is enhanced to help us distinguish ourselves and our actions from those made by others, in this case, the artificial partner. Because of this, the temporal binding effect, or implicit agency, increases when the artificial partner can also act, signifying a self-other distinction without us being consciously aware of it. As a result, the sense of agency is malleable and adaptive to social contexts, even those that involve an online artificial partner.”
The research counters the assumption that humans view software and robots merely as tools that do not affect our inner psychology. This study provides evidence that people actually process the actions of artificial agents in ways that closely resemble human social interactions. Even when participants knew Bobby was just code, their minds still distributed responsibility to the program.
“In terms of practical significance, this shows that even a ‘made-up’ online partner that was clearly artificial (but who would feel ‘sad’ if the circle enlarged too much and no one stopped it, as if Bobby had ‘feelings’) could interfere with our sense of agency,” the researchers noted. “However, this is conditional on whether the artificial partner could also take independent action.”
“When the artificial partner is merely present and cannot take action, they do not influence the sense of agency,” they explained. “We interact with online artificial systems every day, more so than ever before (Siri, Alexa, etc). This points toward the possibility that during interactions with such systems, our sense of agency could be moderated in similar ways as if we were interacting with other humans (although we did not test working with other human partners in the current study).”
One limitation of the study is that it only tested human interactions with a digital avatar, without a direct comparison group working alongside another real human. The scenarios were also relatively simple and confined to an online environment. It remains unclear exactly how these psychological shifts might play out in physical spaces with advanced robotic partners.
For future research, the scientists plan to explore these dynamics in larger group settings. They hope to investigate what happens to our sense of control when a task involves multiple human participants and several artificial agents all working together. Adding more individuals to the mix tends to complicate how the brain tracks actions and assigns responsibility.
“What happens if there is more than just one artificial partner – or another person, so say a triad (or more) includes the participant and two (or more) other agents, either human, artificial partner or both and anyone could act?” Le said.
“It would be interesting to see how such group dynamics influence the sense of agency and perhaps complicate the findings even more!”
“Special thanks to Dr. Tom Burke who is the main driving force of this research and Prof. Andrew Bayliss for his supervision,” she added.
The study, “Working with an Online Artificial Partner Enhances Implicit and Reduces Explicit Sense of Agency,” was authored by Anh H. Le, Thomas Burke, and Andrew P. Bayliss.