High trust in AI leaves individuals vulnerable to “cognitive surrender,” study finds

A recent study posted as a Wharton School Research Paper provides evidence that people increasingly hand their decisions over to artificial intelligence, a phenomenon the researchers call “cognitive surrender.” The findings suggest that individuals tend to adopt computer-generated answers without critical scrutiny, a habit that boosts human accuracy when the software is correct but significantly harms performance when the system makes mistakes.

Since the late twentieth century, psychologists have generally divided human cognition into two distinct systems. System 1 represents immediate, automatic responses driven by instinct and emotion. System 2 involves the deliberate, effortful reflection required to solve a complex mathematical problem or weigh a difficult choice.

However, the rapid rise of generative algorithms presents a new dynamic that does not fit neatly into this traditional model. People now frequently delegate their thinking to external software, outsourcing tasks ranging from drafting emails to making complex medical diagnoses.

“Looking at how AI is being used in society, it has become an ever-available cognitive partner,” said Steven Shaw, a postdoctoral researcher at The Wharton School. “Much of the public conversation has focused on whether AI models are accurate, biased, or capable, but we thought there was a missing human-side question: what happens to our own reasoning when we can outsource thinking so easily?”

Shaw noted that the project grew from observing real-world patterns in everyday life. “People are not just asking AI for information; they are often letting it structure their thoughts, explanations, and decisions,” he explained.

To address this, the scientists proposed the Tri-System Theory, adding artificial cognition as a third system of thought. “From a theoretical perspective, we build on dual process theories to introduce Tri-System Theory of Cognition, which adds System 3, artificial cognition, to existing Systems 1 (intuitive) and 2 (deliberative),” Shaw said.

“We define and characterize System 3 in the paper as external, automated, data-driven, and dynamic,” Shaw continued. “Establishing System 3’s presence brings AI into the human cognitive architecture (what we call the ‘triadic cognitive ecology’).”

To test this theory, the researchers separated the concept of strategic help from complete reliance. Cognitive offloading occurs when a person uses a tool like a calculator to assist their own reasoning. In contrast, cognitive surrender happens when a person entirely relinquishes mental control and adopts an algorithm’s judgment as their own.

In the first study, scientists recruited 359 participants in a laboratory setting, along with 81 online participants to ensure robust results. The volunteers completed seven logic puzzles designed to trigger an immediate, incorrect intuitive answer. Reaching the correct solution required effortful, analytical thought to override the initial gut reaction.

Participants were randomly divided into two groups, with one working independently and the other given access to a chatbot. For those with chatbot access, the scientists secretly manipulated the software to provide correct answers on some puzzles and confidently present incorrect answers on others.

“AI use was optional in our studies, so we did not know how often participants would actually consult it,” Shaw noted. “We were struck by both the overall usage rates (greater than 50% of trials) and the high follow rates once participants opened the chat (over 90% following correct AI advice and ~80% following incorrect AI advice, conditional on chat use; stats from Study 1).”

When the software provided the correct answer, participant accuracy jumped to 71 percent, compared to about 46 percent for those working without assistance. When the algorithm provided faulty advice, human accuracy plummeted to roughly 31 percent. Access to the chatbot also inflated participants’ confidence in their answers, even when the advice was completely wrong.

The scientists found that participants who reported higher general trust in technology were more likely to surrender to faulty suggestions. Those who naturally enjoy engaging in deep thinking, a trait called need for cognition, successfully recognized and rejected the incorrect outputs more often. Participants with higher fluid intelligence, the ability to solve unfamiliar problems, also showed resistance to cognitive surrender.

To see how environmental factors change these patterns, the researchers conducted a second experiment with 485 participants. Everyone had access to the assistant, but half of the participants were placed under a strict 30-second time limit for each puzzle. Time constraints generally reduced overall accuracy, but reliance on the algorithm remained strong.

In a third experiment involving 450 participants, the scientists tested whether financial motivation and immediate performance feedback could reduce cognitive surrender. Half of the participants were offered a 20-cent cash bonus and received an instant notification telling them whether each submitted response was right or wrong.

These rewards and feedback loops helped participants stay alert and double-check the software’s work. The rate at which participants rejected faulty advice doubled from 20 percent to 42 percent. Despite this improvement, cognitive surrender persisted broadly, as many incentivized participants still accepted incorrect answers.

The researchers combined the data across all three experiments to estimate the overall strength of this effect. This final synthesis included 1,372 participants and 9,593 individual puzzle trials. The pooled dataset confirmed that human accuracy consistently tracked the quality of the algorithmic output.

While this research provides detailed insights, the experiments relied on specific logic puzzles in a highly controlled setting. “These were controlled experiments using structured reasoning tasks, so they are a clean demonstration of the phenomenon rather than a complete map of AI use in the wild,” Shaw explained.

He added that cognitive surrender is not inherently negative. “Cognitive surrender is not the same as saying AI is bad or that using AI is irrational; in many settings, AI can improve judgment,” Shaw said. “The key issue is calibration: knowing when AI is helping you think and when it is quietly doing the thinking for you.”

“We believe users often slip into cognitive surrender without realizing, particularly due to how engaging modern LLMs are and characteristics of sycophancy,” he continued. To clarify, LLMs, or large language models, are the underlying systems powering modern chatbots.

Shaw also highlighted a specific approach for future studies in this field. “A methodological point for researchers seeking to study cognitive surrender: showing people an ‘AI-generated answer’ in a vignette (i.e., a hypothetical AI answer) is not the same as letting them decide whether, when, and how to consult a live AI assistant,” he noted.

“Effective studies should use real, optional instances of LLMs alongside their task so that the researcher can observe whether people open the chat, what they ask, and whether they follow or override its answers,” Shaw added.

“To illustrate cognitive surrender experimentally, you need to experimentally control/randomize AI output accuracy regarding only the specific item/construct of interest in your study, while leaving all other elements of the LLM unconstrained,” Shaw explained. This approach lets researchers observe how people actually behave in realistic digital environments, rather than how they react to a hypothetical scenario.
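As a rough illustration of that design principle, the sketch below is not taken from the paper: the puzzle items, the planted-answer mechanism, and the simulated participant behavior are all hypothetical assumptions. It shows how a trial script might randomize whether an optional assistant answers the target item correctly or confidently wrong, while logging whether participants open the chat and whether they follow its advice.

```python
# Hypothetical trial loop for a cognitive-surrender experiment (illustrative only).
# The puzzle items, the planted-answer mechanism, and the simulated participant
# behavior below are assumptions, not the authors' materials or code.
import random
from dataclasses import dataclass


@dataclass
class Trial:
    puzzle_id: int
    correct_answer: str
    ai_is_correct: bool            # the only factor that is experimentally randomized
    opened_chat: bool = False
    followed_ai: bool = False
    response: str = ""


def planted_ai_answer(trial: Trial, lure_answer: str) -> str:
    """Assistant's answer for the target item only: correct or confidently wrong.

    In a live study, everything else the assistant says would come from an
    unconstrained LLM; only accuracy on the item of interest is manipulated.
    """
    return trial.correct_answer if trial.ai_is_correct else lure_answer


def run_session(puzzles: list[dict], p_ai_correct: float = 0.5, seed: int = 0) -> list[Trial]:
    """Randomize AI accuracy per trial and log optional chat use and follow behavior."""
    rng = random.Random(seed)
    log = []
    for p in puzzles:
        t = Trial(p["id"], p["correct"], ai_is_correct=rng.random() < p_ai_correct)
        t.opened_chat = rng.random() < 0.6          # stand-in: participant may consult the AI
        if t.opened_chat:
            advice = planted_ai_answer(t, p["lure"])
            t.followed_ai = rng.random() < 0.85     # stand-in: most advice is adopted
            t.response = advice if t.followed_ai else p["correct"]
        else:
            t.response = p["lure"] if rng.random() < 0.55 else p["correct"]
        log.append(t)
    return log


def follow_rates(log: list[Trial]) -> dict[str, float]:
    """Follow rates conditional on opening the chat, split by AI accuracy."""
    used = [t for t in log if t.opened_chat]

    def rate(trials):
        return sum(t.followed_ai for t in trials) / max(len(trials), 1)

    return {
        "followed_correct_ai": rate([t for t in used if t.ai_is_correct]),
        "followed_incorrect_ai": rate([t for t in used if not t.ai_is_correct]),
    }


if __name__ == "__main__":
    puzzles = [{"id": i, "correct": f"c{i}", "lure": f"l{i}"} for i in range(7)]
    print(follow_rates(run_session(puzzles)))
```

In a real experiment, the simulated choices would be replaced by actual participant responses; the point of the sketch is that only the assistant’s accuracy on the target item is randomized, while chat use stays optional and is fully logged.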

Looking ahead, the researchers plan to expand their investigations. “The next step is to study cognitive surrender in naturalistic and higher-stakes settings using field studies, think medical, legal, and education settings,” Shaw said. “We also want to identify interventions—both on the user side and the interface-design side—that preserve the benefits of AI while reducing uncritical reliance on it.”

For everyday users, the study offers a practical lesson. “AI can be extremely useful, but our findings suggest that people can fall into what we call ‘cognitive surrender’—adopting AI outputs with minimal scrutiny, even when those outputs are wrong,” Shaw explained.

“Cognitive surrender can be adaptive, improving accuracy and speed of reasoning, but ties human decision-making to System 3 and shifts agency to AI. Practically, we should think carefully about what contexts and domains we accept reduced or loss of agency,” he said. “In cases where we want to safeguard skills or critical thinking, users should form their own answers, based on intuition and deliberation first, then use AI models to challenge, refine, or expand thinking rather than replace it.”

The study, “Thinking – Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender,” was authored by Steven D. Shaw and Gideon Nave.