A new study published in Computers in Human Behavior has found that people are far more willing to accept financial advice from a romantic partner than from an artificial intelligence program. The findings suggest that this preference is driven primarily by emotional connections and the feeling that a partner cares about one’s well-being. However, the data also indicates that giving AI human-like names or using it as a supportive tool for a human advisor can significantly increase a person’s willingness to trust it.
The research was conducted by Erik Hermann of the European University Viadrina and Max Alberhasky of California State University, Long Beach, who sought to understand the intersection of financial technology and close relationships.
Financial robo-advisors have become a common tool for investment, offering automated advice based on algorithms. Despite the efficiency of these tools, financial decisions are often made within the context of a romantic partnership. The researchers noted that while previous studies have looked at trust in AI for impersonal tasks, little was known about how AI compares to a significant other when money is on the line.
“Although AI tools and devices are regularly used within relationship contexts (e.g., smart home devices, intelligent personal assistants like Alexa), prior research has mainly focused on individual use and responses,” explained Hermann, an interim professor of marketing. “The question arises how consumers react when they are provided with advice by their romantic partners versus AI advisors. Financial decision making as a rather high-stakes decision-making context appears particularly interesting to answer this question.”
The researchers hypothesized that trust is not a single concept but is composed of two distinct parts: cognitive trust and affective trust. Cognitive trust refers to the belief in an advisor’s skills, competence, and reliability.
Affective trust is rooted in emotional bonds, feelings of security, and the belief that the advisor genuinely cares about the decision-maker. The researchers aimed to see which type of trust plays a larger role in financial advice acceptance.
In the first study, the researchers recruited 301 participants through the online platform Prolific. All participants were screened to ensure they were currently in a romantic relationship.
The study presented a hypothetical scenario in which the participant had inherited $10,000 from a distant relative. They were asked to choose between two investment funds for this money. One option was a riskier fund with higher potential returns, while the other was a safer fund with lower average returns.
After the participants indicated their initial preference, they received a recommendation to switch to the other fund. This recommendation came from either their romantic partner or an AI robo-advisor. The participants then rated how likely they were to follow this advice. They also completed surveys to measure their levels of cognitive and affective trust in the source of the advice.
The results showed a clear preference for human advice. Participants were significantly more likely to switch their investment choice when the advice came from their romantic partner compared to the AI. The analysis revealed that this decision was mediated by both types of trust.
However, affective trust was a much stronger predictor than cognitive trust. This suggests that the emotional assurance provided by a partner is more influential than the perceived technical competence of a machine in this context.
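To make the idea of mediation concrete, the sketch below runs a parallel mediation analysis on simulated data. It is purely illustrative: the variable names, effect sizes, and the use of the pingouin package are assumptions for demonstration, not the authors’ actual analysis or data.

```python
# A minimal, illustrative sketch of a parallel mediation analysis.
# All data here are simulated; variable names, effect sizes, and the
# use of pingouin are assumptions, not the study's own materials.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(42)
n = 300  # roughly the sample size of Study 1

# Between-subjects condition: 0 = AI robo-advisor, 1 = romantic partner
condition = rng.integers(0, 2, n)

# Simulated mediators: the partner condition is assumed to boost
# affective trust more strongly than cognitive trust.
cognitive_trust = 0.4 * condition + rng.normal(0, 1, n)
affective_trust = 1.0 * condition + rng.normal(0, 1, n)

# Outcome: willingness to follow the advice, driven mainly by
# affective trust in this simulation.
follow_advice = (0.2 * cognitive_trust + 0.7 * affective_trust
                 + rng.normal(0, 1, n))

df = pd.DataFrame({
    "condition": condition,
    "cognitive_trust": cognitive_trust,
    "affective_trust": affective_trust,
    "follow_advice": follow_advice,
})

# Bootstrapped indirect effects through both mediators simultaneously.
results = pg.mediation_analysis(
    data=df, x="condition",
    m=["cognitive_trust", "affective_trust"],
    y="follow_advice", n_boot=2000, seed=42,
)
print(results.round(3))
```

In the output, the “Indirect” rows show how much of the condition’s effect on advice-taking flows through each form of trust; the pattern the study reports corresponds to the path through affective trust carrying most of the effect.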
The second study involved 298 participants who were also in relationships. The researchers wanted to see if they could reduce the resistance to AI by making the technology appear more human. This process is known as anthropomorphism. Participants were randomly assigned to one of three groups.
One group received advice from a romantic partner. Another group received advice from a standard robo-advisor. The third group received advice from a robo-advisor that was given a gender-neutral human name, “Alex.”
The researchers found that simply giving the AI a name changed how people reacted to it. Participants were just as likely to accept advice from “Alex” as they were from their romantic partner. Both the romantic partner and the anthropomorphized AI were trusted significantly more than the standard, unnamed AI.
The data once again showed that affective trust was the primary driver of this effect. By giving the AI a name, the researchers successfully increased the participants’ sense of emotional connection to the technology.
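For readers who want to see what this kind of three-group comparison looks like in practice, here is a minimal sketch with simulated ratings. The 1–7 scale, group means, and SciPy-based tests are illustrative assumptions, not the study’s reported statistics.

```python
# A minimal, illustrative sketch of the three-group comparison in
# Study 2. Ratings are simulated on a hypothetical 1-7 scale; the
# group means and tests are assumptions, not the study's numbers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_per_group = 100  # roughly a third of Study 2's sample

# Simulated willingness-to-follow ratings, assuming the named AI
# ("Alex") closes the gap with the romantic partner.
partner = rng.normal(5.2, 1.0, n_per_group)
named_ai = rng.normal(5.1, 1.0, n_per_group)  # anthropomorphized "Alex"
plain_ai = rng.normal(4.3, 1.0, n_per_group)  # standard robo-advisor

# Omnibus test across all three conditions.
f_stat, p_val = stats.f_oneway(partner, named_ai, plain_ai)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Pairwise follow-ups mirroring the reported pattern (a real analysis
# would also correct for multiple comparisons).
pairs = {
    "partner vs. Alex": (partner, named_ai),
    "partner vs. plain AI": (partner, plain_ai),
    "Alex vs. plain AI": (named_ai, plain_ai),
}
for label, (a, b) in pairs.items():
    t, p = stats.ttest_ind(a, b)
    print(f"{label}: t = {t:.2f}, p = {p:.4f}")
```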
Study 3A included 445 participants and introduced a new element: human-AI collaboration. The researchers wanted to see what would happen if the romantic partner used AI to formulate their advice. Participants were divided into three conditions, receiving advice from a romantic partner, a standalone AI, or a romantic partner assisted by AI.
The findings indicated that using AI did not hurt the partner’s credibility. Participants were equally likely to follow advice from a partner and a partner assisted by AI. Both human-involved options were preferred over the standalone AI. This suggests that people are comfortable with AI being used as a tool to enhance human decision-making. The presence of a human intermediary appears to maintain the necessary levels of affective trust.
The final experiment, Study 3B, was designed to replicate the previous findings with greater realism. The researchers recruited 376 participants. Instead of generic descriptions of funds, they used real investment options from Fidelity. The “safe” option was described as a US bond index fund, while the “risky” option was an emerging markets fund. The researchers also improved the dialogue used in the scenario to make the partner’s advice sound more natural.
The results of Study 3B mirrored the previous experiments. Participants were significantly more likely to listen to their partner or an AI-assisted partner than to a standalone AI. The statistical analysis confirmed that affective trust was the dominant factor influencing these decisions.
The feeling that the advisor cared about the participant’s financial well-being was the most critical element. Cognitive trust regarding the advisor’s ability to analyze data was less important for the final decision.
“Generally, people prefer investment advice from their romantic partners over AI advice—which is called algorithm or AI aversion, because financial advice from one’s romantic partner is associated with stronger feelings of cognitive and, particularly, affective trust,” Hermann told PsyPost.
“Interestingly, a quite simple human-like cue—i.e., giving the AI advisor a name—reduces AI aversion. Similarly, when AI assists romantic partners, this advice is accepted at similarly high rates as romantic partner advice alone.”
There are limitations to this research that should be considered. The studies relied on hypothetical scenarios rather than real-world financial losses or gains. It is possible that people might act differently if their own actual savings were at risk. Additionally, trust in real relationships is complex and built over years. A brief experimental scenario can only approximate these deep psychological bonds.
“The evidence of AI aversion is robust across four experimental studies,” Hermann said. “However, the studies rely on hypothetical investment scenarios. This allows for experimental control but people’s willingness to follow financial advice may differ when the consequences are not hypothetical.”
The researchers also warn against the potential for creating misplaced trust. Designing AI to seem more human effectively increases acceptance, but this could lead to over-reliance on flawed systems.
“AI design should not aim at maximizing consumer trust but at fostering so-called calibrated trust,” Hermann explained. “That is, AI and service providers should encourage confidence in AI advisors’ competence and capabilities while simultaneously helping consumers understand their limitations. Otherwise, miscalibrated trust can have negative real-world consequences like over-reliance, misuse, and/or blind trust.”
Future research could explore these dynamics in longitudinal studies. It would be valuable to see if trust in an AI advisor grows naturally over time as a user becomes familiar with it. Researchers could also investigate if these findings hold true for other types of decisions, such as choosing a medical treatment or buying a home. For now, the evidence indicates that while AI has superior data processing power, the human touch remains an essential component of trusted advice.
The study, “The Trusted Partner for financial decision making: Romantic partner or AI?,” was authored by Erik Hermann and Max Alberhasky.