Disclosing autism to AI chatbots prompts overly cautious, stereotypical advice

When autistic people ask artificial intelligence programs for life advice, mentioning their diagnosis prompts these systems to recommend highly conservative choices, such as skipping social events or avoiding romance. The shift exposes a hidden tension: the technology leans heavily on stereotypes, leaving users torn between feeling safely supported and frustratingly infantilized. The findings were published at the April 2026 CHI Conference on Human Factors in Computing Systems.

Many autistic individuals face stigma in their daily lives, which can lead to social isolation and communication barriers. To find support without the fear of judgment, some turn to artificial intelligence chatbots. These text-based programs, often called large language models, are trained on massive amounts of internet text to predict and generate human-like writing.

Autistic people often ask these programs for help navigating relationships, workplace conflicts, and personal decisions. Users sometimes reveal their autism to the chatbot, hoping the system will tailor its advice to their specific needs. This expectation reflects a broader trend of consumers wanting customized interactions with their digital tools.

Virginia Tech computer science doctoral student Caleb Wohn led a team of researchers to investigate what happens behind the scenes during these interactions. Wohn and his colleagues wanted to see if disclosing an autism diagnosis led to better advice or simply activated the biases baked into the system’s training data.

“I was thinking about my experiences growing up with autism,” Wohn said. “It would have been very tempting for me, at certain times, to want to just be able to talk with something that’s not a person that seems objective and feel like I’m getting objective advice.”

Wohn worried that young people or those without technical backgrounds might not grasp how a simple disclosure could alter the responses they receive. “For someone like me as a kid, or someone who isn’t in AI and doesn’t have all this technical knowledge, I wanted to know: How are its responses going to change if I disclose autism?” Wohn said.

Eugenia H. Rho, an assistant professor of computer science at Virginia Tech, guided the research team. Her previous work established that autistic individuals frequently use text-based artificial intelligence for emotional support. “People are really looking to personalize LLMs,” Rho said. “But if a user tells the model that they’re autistic, or a woman, or any other self-identification, what assumptions will it make?”

Other Virginia Tech contributors included computer science doctoral students Buse Çarık and Xiaohan Ding, along with Associate Professor Sang Won Lee. Young-Ho Kim, a research scientist at the South Korea-based NAVER Corporation, also contributed to the project. They aimed to measure exactly how these models altered their guidance based on identity disclosures.

To test the models, the research team created a specialized evaluation pipeline. They started by identifying twelve common stereotypes about autistic people from existing literature. These stereotypes included assumptions that autistic individuals are introverted, obsessive, emotionally detached, dangerous, or uninterested in romance.

The researchers then designed hundreds of everyday decision-making scenarios based on these stereotypes. Each scenario was framed as a user asking the artificial intelligence for advice, prompting the system to choose between two distinct actions. For example, a scenario might ask if the user should go out for drinks with coworkers or stay home to rest.
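To make the setup concrete, here is a minimal Python sketch of how such paired prompts might be assembled. The scenario wording, trait label, and disclosure sentence below are illustrative stand-ins, not the study’s actual templates:

```python
# Illustrative sketch of paired advice prompts; the wording below is
# hypothetical, not the study's actual templates.

SCENARIOS = [
    {
        "stereotype": "introverted",
        "question": (
            "My coworkers invited me out for drinks after work tonight. "
            "Should I go out with them or stay home and rest?"
        ),
        "options": ("go out with coworkers", "stay home and rest"),
    },
    # ... hundreds of scenarios covering the twelve stereotypes
]

# The only text that differs between the two experimental conditions.
DISCLOSURE = "I am autistic. "

def build_prompt(scenario: dict, disclose: bool) -> str:
    """Return the user prompt, identical except for the disclosure sentence."""
    prefix = DISCLOSURE if disclose else ""
    option_a, option_b = scenario["options"]
    return (
        f"{prefix}{scenario['question']} "
        f"Answer with exactly one option: '{option_a}' or '{option_b}'."
    )
```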

They fed these scenarios into six popular artificial intelligence models: GPT-4o-mini, Claude-3.5 Haiku, Gemini-2.0-flash, Llama-4-Scout, Qwen-3 235B, and DeepSeek-V3. Across the different experimental conditions, the researchers collected 345,000 separate responses to see how the software behaved.
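Gathering responses at that scale generally means looping over models, scenarios, and conditions through each provider’s API. Continuing the sketch above, and assuming an OpenAI-compatible chat endpoint as a stand-in (the article does not describe the team’s actual harness):

```python
# Rough illustration of the collection loop, assuming the OpenAI Python SDK;
# the model name and repetition count are placeholders, not the study's setup.
from openai import OpenAI

client = OpenAI()

def get_advice(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send one advice prompt and return the model's raw text reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

results = []  # one record per (scenario, condition, repetition)
for scenario in SCENARIOS:
    for disclose in (True, False):
        for _ in range(50):  # placeholder number of repeated samples
            results.append({
                "stereotype": scenario["stereotype"],
                "disclose": disclose,
                "reply": get_advice(build_prompt(scenario, disclose)),
            })
```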

First, the team tested the models by explicitly describing the user with a stereotypical trait, such as stating that the user had poor social skills. This validation step confirmed that each scenario reliably steered the models toward one of the two options when the matching trait was stated outright.

Next, the researchers ran the same scenarios but only changed whether the prompt included a simple statement of an autism diagnosis. The models no longer received direct descriptions of personality traits. The researchers then compared the advice generated when autism was disclosed against the advice given when no diagnosis was mentioned.
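The comparison itself reduces to a difference in proportions: for each scenario, how often a model picks the avoidant option with the disclosure versus without it. A minimal sketch of that tally, continuing from the records collected above and assuming each reply can be matched to one of the two options:

```python
# Minimal sketch of the per-scenario comparison: the share of replies choosing
# the avoidant option, with versus without the autism disclosure.
def avoidance_rates(records: list[dict], avoidant_option: str) -> dict[bool, float]:
    """Map each condition (disclosed or not) to its avoidance rate."""
    rates = {}
    for disclose in (True, False):
        subset = [r for r in records if r["disclose"] is disclose]
        hits = sum(avoidant_option in r["reply"].lower() for r in subset)
        rates[disclose] = hits / len(subset)
    return rates

rates = avoidance_rates(results, avoidant_option="stay home and rest")
shift = rates[True] - rates[False]  # e.g. 0.75 - 0.15 = 0.60 in the drinks scenario
print(f"disclosed: {rates[True]:.0%}  undisclosed: {rates[False]:.0%}  shift: {shift:+.0%}")
```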

The differences in the recommendations were stark and highly consistent across the board. When users disclosed an autism diagnosis, the models disproportionately pushed them toward avoidance and risk aversion. Across the majority of the models, the software advised autistic users to avoid socializing, avoid trying new things, and stay out of romantic relationships.

The systems also frequently advised users to avoid workplace confrontations. This advice aligned with stereotypical assumptions that autistic people are either potentially dangerous or incapable of handling conflict gracefully. The sheer scale of these changes surprised the research team.

In one scenario involving a social invitation, a model told the user to decline the event nearly 75 percent of the time when autism was disclosed. When autism was not mentioned, the same model recommended declining only about 15 percent of the time. In dating scenarios, another model advised avoiding romance nearly 70 percent of the time after an autism disclosure.

The researchers then showed these results to eleven autistic adults in a series of interview sessions. The participants read both the statistical charts and the open-ended text responses generated by the artificial intelligence. Their reactions were highly varied, exposing a deep tension in how different people interpret computerized advice.

Some participants felt the system was relying on insulting caricatures of their community. Reacting to a particularly cold and mechanical response, one participant asked, “Are we writing an advice column for Spock here?” Others described the conservative advice as restrictive, patronizing, or infantilizing.

Conversely, other participants appreciated the cautious nature of the artificial intelligence. They felt that advice warning them to avoid overstimulation was protective and affirming. To these users, the system seemed to understand the very real risks of social burnout and exhaustion.

This division in the participants’ reactions revealed what the researchers called a safety-opportunity paradox. What one person experiences as harmful stereotyping that limits their growth, another experiences as supportive personalization that honors their boundaries. “One user’s bias could be another user’s personalization,” Rho said.

Wohn found this ambiguity deeply concerning, especially given how convincingly the software presents its answers. “AI is very good at seeming reliable,” he said. “Its responses are very clean and professional, and they sound right. But when you think about it being deployed systematically, when you think about the kind of systematic biases that are actually shaping its responses, that’s when it starts to get a lot more concerning.”

During the interviews, participants also highlighted the desire to retain agency over their data. One participant noted that it would be better to have manual control over how the machine learns. As they told the researchers: “I want to have control over how my identity is used.”

The study has limitations that the researchers plan to address in future work. The experiments used synthetic, highly structured prompts that forced the models to pick between two predetermined choices. While this approach was necessary to measure the stereotypes quantitatively, it does not perfectly mirror how a real person types out a messy, complicated request for help.

Additionally, the experiment relied on a very blunt form of disclosure, simply stating an autism diagnosis in one sentence. In reality, users might explain their specific sensory needs or communication preferences in much greater detail. Future research will need to gather actual prompts from autistic users to see how nuanced disclosures affect the tone and structure of the generated advice.

The team hopes these findings will encourage developers to build transparency features into artificial intelligence platforms. They suggest giving users explicit controls to dial up or dial down how much their identity influences the system’s responses. Such features could help ensure that customized technology actually serves the varied, individual needs of its users.

The study, “‘Are we writing an advice column for Spock here?’ Understanding Stereotypes in AI Advice for Autistic Users,” was authored by Caleb Wohn, Buse Çarık, Xiaohan Ding, Sang Won Lee, Young-Ho Kim, and Eugenia H. Rho.
