A recent study published in JAMA Network Open provides evidence that interacting with a conversational artificial intelligence program can help reduce symptoms of anxiety and depression while boosting overall well-being. The findings suggest that these digital platforms can form a meaningful therapeutic bond with users, offering an accessible way to support mental health on a large scale.
Mental health challenges affect millions of people worldwide, but only a fraction of those individuals receive the professional care they need. This treatment gap stems largely from structural barriers: a shortage of trained therapists, high costs, and the social stigma that still surrounds seeking psychological help. To address the problem, scientists have turned to digital technologies that can reach larger populations without straining existing clinics.
Anat Shoshani, a professor at the Baruch Ivcher School of Psychology at Reichman University and chief psychologist at Kai.ai, noticed this disconnect firsthand. “As a clinician, I repeatedly encountered a structural paradox in mental health care: therapy can be deeply effective, but emotional distress rarely unfolds according to the architecture of treatment systems,” Shoshani explained. “People experience panic attacks at midnight, loneliness after breakups, anxiety before exams, emotional spirals during commutes, and relapse after treatment ends. Many others spend months on waiting lists or never seek care at all.”
Early mental health applications often struggled to keep users engaged over time. People tended to abandon these programs quickly, frequently because the applications felt too passive or robotic to provide a sense of real connection. Newer artificial intelligence systems are designed to hold natural, fluid conversations with users. These modern programs use advanced language models to simulate the empathy and personalized support typically found in human therapy.
The researchers conducted this study to see if a conversational artificial intelligence agent could actually rival traditional group therapy in easing emotional distress. They wanted to evaluate how well a digital tool could treat specific psychiatric symptoms compared to human-led interactions. They also wanted to know if people could feel a genuine bond with a digital platform, and whether that bond would lead to better mental health outcomes.
The research team recruited 995 university students in Israel between the ages of 18 and 35. These participants were experiencing mild to moderate psychological distress, which the researchers measured using a standardized screening tool. The scientists randomly divided the students into three roughly equal groups to compare different types of support.
One group of 336 students used a conversational artificial intelligence platform called Kai for 12 weeks. This platform operates through familiar messaging applications and provides personalized mental health exercises. “Kai was intentionally designed as more than a chatbot. Conversation is only one layer,” Shoshani stated. “It integrates evidence-based interventions from CBT, ACT, DBT, mindfulness, and positive psychology, alongside daily emotional check-ins, personalized routines, journaling tools, short guided exercises, psychoeducation, and human safety escalation when needed.”
The acronyms in the quote refer to established therapeutic approaches: cognitive behavioral therapy (CBT), which helps people identify and change negative thought patterns, along with acceptance and commitment therapy (ACT) and dialectical behavior therapy (DBT), which emphasize acceptance and emotional regulation. Participants could message the program at any time and were encouraged to engage at least three times a week.
Another group of 331 students attended traditional face-to-face group therapy led by licensed psychologists. These weekly sessions lasted 90 minutes over the same 12-week period and covered similar coping strategies. The final group of 328 students served as a waiting list control. This means they received no active treatment during the study but were offered access to the digital platform later.
To track progress, the authors used several well-known psychological questionnaires, measuring anxiety with a standardized seven-item survey and depression with a nine-item survey. Anxiety generally involves persistent worry, while depression often includes feelings of sadness and a loss of interest in daily activities.
The researchers also assessed symptoms of post-traumatic stress disorder, which is a mental health condition triggered by a terrifying event. Beyond negative symptoms, they looked at positive functioning by measuring overall well-being and life satisfaction. Students filled out these surveys at the beginning of the study, right after the 12-week intervention, and again three months later.
After the 12 weeks, the researchers found that participants interacting with the artificial intelligence program experienced a greater reduction in anxiety than those in both the face-to-face group therapy and the control group. Group therapy did not differ significantly from the waiting list when it came to lowering anxiety. The digital platform also helped reduce symptoms of depression more effectively than the control condition.
The authors suspect the digital platform performed so well with anxiety because of its constant availability. “Anxiety tends to escalate in real time,” Shoshani explained. “It happens before social situations, during late-night rumination, before difficult conversations, and in moments when no therapist is available. Immediate support may matter enormously in those situations.”
When looking at positive mental health, the digital group reported greater gains in general well-being and life satisfaction than the other two groups, and these improvements persisted at the three-month follow-up. The digital program did not, however, appear to help with symptoms of post-traumatic stress disorder: trauma-related scores remained similar across all three groups.
Shoshani pointed out that this lack of effect helps define the boundaries of digital care. “Trauma is often more complex and may require deeper clinical judgment, specialized interventions, and human relational work,” she noted.
The study also revealed surprisingly high engagement levels for a digital tool. “In our study, participants engaged around three times per week, and 61% remained active after 12 weeks,” Shoshani said. “That level of retention suggests people weren’t simply experimenting with the technology. They were integrating it into their emotional routines.”
The authors also examined the concept of a therapeutic alliance. This term refers to the trust and connection a person typically feels with their human care provider. Participants rated the artificial intelligence program as being just as warm and professional as the human therapists in the group sessions.
The data suggest that when participants felt a strong bond with the digital program, they sent more messages and engaged more deeply. Feeling supported by the program was, in turn, associated with larger improvements in mental health symptoms. This success may be tied to a phenomenon known as the online disinhibition effect, in which people feel more comfortable sharing sensitive information with a computer than with another person.
“Human disclosure is often slowed by shame, fear of judgment, social desirability, or concerns about burdening others,” Shoshani said. “AI appears to remove some of those interpersonal barriers.”
While the artificial intelligence program helped ease general distress, the findings come with a few limitations. All psychological outcomes were reported by the participants themselves rather than evaluated by clinicians, so relying on self-reported surveys means personal biases could influence the data. Additionally, Shoshani provided important context regarding the participants' environment.
“This study took place during a prolonged period of national stress and regional instability, which likely influenced emotional outcomes,” she stated. A substantial number of participants also stopped responding by the three-month follow-up, and this attrition limits how confidently the intervention's long-term benefits can be interpreted.
The study also noted that people using the digital platform became less likely to say they intended to seek traditional therapy in the future. The authors stress that these digital tools are not meant to stand entirely alone. “Effective digital support requires a robust ‘human-in-the-loop’ system, where the AI is constantly monitored by clinical professionals to ensure safety and to provide a bridge to human crisis teams when a user’s needs exceed the platform’s capabilities,” Shoshani explained.
She warned against the assumption that human practitioners are becoming obsolete. “Our goal is to create a ‘stepped-care’ model where AI handles the immediate, day-to-day resilience work, allowing human professionals to focus their expertise where it is most needed,” she added.
Future research should explore how to safely integrate digital conversational agents into existing healthcare systems, as well as investigate their long-term cost-effectiveness. In the end, the goal is to make psychological assistance more attainable for those who might otherwise struggle in silence.
“If technology can responsibly lower that threshold, provide support earlier, and help people feel less alone during difficult moments, that could be profoundly meaningful,” Shoshani concluded. “The future of mental health may not be defined by replacing human connection. It may be defined by expanding the number of moments in which support becomes possible.”
The study, “Efficacy of a Conversational AI Agent for Psychiatric Symptoms and Digital Therapeutic Alliance: A Randomized Clinical Trial,” was authored by Anat Shoshani, Bar Gurfinkel, Ariel Kor, Yael Ben-Haim, Or Kanarek, Romi Segev, Or Shafir, and Romi Arbel.