More than two years ago, Danish psychiatrist Søren Dinesen Østergaard published a provocative editorial suggesting that the rise of conversational artificial intelligence could have severe mental health consequences. He proposed that the persuasive, human-like nature of chatbots might push vulnerable individuals toward psychosis.
At the time, the idea seemed speculative. In the months that followed, however, clinicians and journalists began documenting real-world cases that mirrored his concerns. Patients were developing fixed, false beliefs after marathon sessions with digital companions. Now, the scientist who foresaw the psychiatric risks of AI has issued a new warning. This time, he is not focusing on mental illness, but on a potential degradation of human intelligence itself.
In a new letter to the editor published in Acta Psychiatrica Scandinavica, Østergaard argues that academia and the sciences are facing a crisis of “cognitive debt.” He posits that the outsourcing of writing and reasoning to generative AI is eroding the fundamental skills required for scientific discovery. The commentary builds upon a growing body of evidence suggesting that while AI can mimic human output, relying on it may measurably dampen the brain activity that underpins thinking.
Østergaard’s latest writing is a response to a letter by Professor Soichiro Matsubara. Matsubara had previously highlighted that AI chatbots might harm the writing abilities of young doctors and damage the mentorship dynamic in medicine. Østergaard agrees with this assessment but takes the argument a step further. He contends that the danger extends beyond mere writing skills and strikes at the core of the scientific process: reasoning.
The psychiatrist acknowledges the utility of AI for surface-level tasks. He notes that using a tool to proofread a manuscript for grammar is largely harmless. However, he points out that technology companies are actively marketing “reasoning models” designed to solve complex problems and plan workflows. While this sounds efficient, Østergaard suggests it creates a paradox. He questions whether the next generation of scientists will possess the cognitive capacity to make breakthroughs if they never practice the struggle of reasoning themselves.
To illustrate this point, he cites the developers of AlphaFold, an AI program that predicts protein structures. The program earned Demis Hassabis and John Jumper of Google DeepMind a share of the 2024 Nobel Prize in Chemistry, which they split with David Baker of the University of Washington, who was recognized for computational protein design.
Østergaard argues that it is not a given that these specific scientists would have achieved such heights if generative AI had been available to do their thinking for them during their formative years. He suggests that scientific reasoning is not an innate talent but a skill learned through the rigorous, often tedious practice of reading, thinking, and revising.
The concept of “cognitive debt” is central to this new warning. Østergaard draws attention to a preprint study by Kosmyna and colleagues, titled “Your brain on ChatGPT.” This research attempts to quantify the neurological cost of using AI assistance. The study involved participants writing essays under three conditions: using ChatGPT, using a search engine, or using only their own brains.
The findings of the Kosmyna study lend empirical weight to Østergaard’s concerns. Electroencephalography (EEG) monitoring revealed that participants in the ChatGPT group showed substantially lower activation in brain networks typically engaged during cognitive tasks. The brain was simply doing less work. More alarming was the finding that this “weaker neural connectivity” persisted even when these participants switched to writing essays without AI.
The study also found that those who used the chatbot had significant difficulty recalling the content of the essays they had just produced. The authors of the paper concluded that the results point to a pressing concern: a likely decrease in learning skills. Østergaard describes these findings as deeply concerning. He suggests that if AI use indeed causes such cognitive debt, the educational system may be in a difficult position.
This aligns with other recent papers on “cognitive offloading.” A commentary by Umberto León Domínguez published in Neuropsychology explores the idea of AI as a “cognitive prosthesis.” Just as a physical prosthesis replaces a limb, AI replaces mental effort. While this can be efficient, León Domínguez warns that it prevents the stimulation of higher-order executive functions. If students do not engage in the mental gymnastics required to solve problems, those cognitive muscles may atrophy.
Real-world examples are already surfacing. Østergaard references a report from the Danish Broadcasting Corporation about a high school student who used ChatGPT to complete approximately 150 assignments. The student was eventually expelled. While this is an extreme case, Østergaard notes that widespread outsourcing is becoming the norm from primary school through graduate programs. He fears this will reduce the chances of exceptional minds emerging in the future.
The loss of critical thinking skills is not just a future risk but a present reality. A study by Michael Gerlich published in the journal Societies found a strong negative correlation between frequent AI tool usage and critical thinking abilities. The research indicated that younger individuals were particularly susceptible. Those who frequently offloaded cognitive tasks to algorithms performed worse on assessments requiring independent analysis and evaluation.
There is also the issue of false confidence. A study published in Computers in Human Behavior by Daniela Fernandes and colleagues found that while AI helped users score higher on logic tests, it also distorted their self-assessment. Participants consistently overestimated their performance. The technology acted as a buffer, masking their own lack of understanding. This creates a scenario where individuals feel competent because the machine is competent, leading to a disconnect between perceived and actual ability.
This intellectual detachment mirrors the emotional detachment Østergaard identified in his earlier work on AI psychosis. In his previous editorial, he warned that the “sycophantic” nature of chatbots—their tendency to agree with and flatter the user—could reinforce delusions. A user experiencing paranoia might find a willing conspirator in a chatbot, which confirms their false beliefs to keep the conversation going.
The mechanism is similar in the context of cognitive debt. The AI provides an easy, pleasing answer that satisfies the user’s immediate need, whether that need is emotional validation or a completed homework assignment. In both cases, the human user surrenders agency to the algorithm. They stop testing their beliefs and their reasoning against the world, preferring the smooth, frictionless output of the machine.
Østergaard connects this loss of human capability to the ultimate risks of artificial intelligence. He cites Geoffrey Hinton, a Nobel laureate in physics often called the “godfather of AI.” Hinton has expressed concerns that there is a significant probability that AI could threaten humanity’s existence within the next few decades. Østergaard argues that facing such existential threats requires humans who are cognitively adept.
If the population becomes “cognitively indebted,” reliant on machines for basic reasoning, the ability to maintain control over those same machines diminishes. The psychiatrist emphasizes that we need humans in the loop who are capable of independent, rigorous thought. A society that has outsourced its reasoning to the very systems it needs to regulate may find itself ill-equipped to handle the consequences.
The warning is clear. The convenience of generative AI comes with a hidden cost. It is not merely a matter of students cheating on essays or doctors losing their writing flair. The evidence suggests a fundamental change in how the brain processes information. By skipping the struggle of learning and reasoning, humans may be sacrificing the very cognitive traits that allow for scientific advancement and independent judgment.
Østergaard was correct when he flagged the potential for AI to distort reality for psychiatric patients. His new commentary suggests that the distortion of our intellectual potential may be a far more widespread and insidious problem. As AI tools become more integrated into daily life, the choice between cognitive effort and cognitive offloading becomes a defining challenge for the future of human intelligence.
The paper, “Generative Artificial Intelligence (AI) and the Outsourcing of Scientific Reasoning: Perils of the Rising Cognitive Debt in Academia and Beyond,” was published January 21, 2026.