Managers who use AI to write emails seen as less sincere, caring, and confident

Many professionals now use artificial intelligence tools to assist with writing, but a new study suggests that managers who use AI to craft routine workplace emails risk appearing less trustworthy. While AI-assisted messages were generally seen as polished and professional, managers who relied heavily on such tools were viewed as less sincere, caring, and confident by their employees. The findings were published in the International Journal of Business Communication. The study provides evidence that although AI-generated messages are often seen as effective and efficient, they may come at a social cost.

The release of generative artificial intelligence tools like ChatGPT sparked a surge of interest in their use for everyday writing tasks, including those in professional settings. Many workers now rely on these tools to draft emails, reports, or internal memos. Research has already shown that AI-assisted writing can enhance the clarity, correctness, and professionalism of workplace messages. But less is known about how senders of such messages are perceived.

The goal of the new study was to examine not the writing itself, but how readers interpret the character of someone who uses AI to compose a message. In other words, does using AI affect how trustworthy, sincere, or competent the writer appears? And does the answer change depending on whether the message was mostly written by AI or lightly assisted?

The research also aimed to explore how these perceptions shift depending on who is using the AI. Are people more forgiving of their own use of AI than of others'? Do they judge managers differently than peers?

“I believe AI will significantly impact our interpersonal relationships. People will use AI a lot to assist with communication. This already happens in the workplace. I’d like people to be aware of the impact of AI-mediated communication,” said study author Peter Cardon, the Warren Bennis Chair in Teaching Excellence and professor of business communication at the University of Southern California.

The research team surveyed 1,158 full-time working professionals in the United States, each of whom spent at least half of their work time on a computer. Participants were randomly shown one of eight different scenarios describing an email message that congratulated a team on reaching its goals. The scenarios varied along two dimensions: who the message was from (either the participant or their supervisor) and how much of the message was generated by AI (ranging from low to high assistance).

Some messages showed just light editing by AI, while others had been mostly written by an AI tool based on a short prompt. In some cases, the original prompt given to the AI was shown to participants; in others, it was not. After reading their assigned message, participants answered a series of questions about perceived authorship, effectiveness, professionalism, sincerity, caring, confidence, and their comfort level with the use of AI.

The survey included both numerical rating scales and an open-ended question asking participants to explain why they thought authorship did or did not matter in workplace communication.

Overall, the results indicated that while people viewed AI-assisted messages as generally professional and effective, they were less likely to trust the sender—especially when that sender was a supervisor using a high level of AI assistance.

In particular, participants were less likely to believe that supervisors were the true authors of messages heavily assisted by AI. While 93 percent agreed that a supervisor was the author in the low-assistance condition, only 25 percent agreed in the high-assistance condition without a visible prompt.

Despite this, heavily AI-assisted messages were not rated as less effective. In fact, messages with high AI involvement were sometimes seen as slightly more effective than those with less assistance. Participants often described AI as a useful tool for improving grammar, tone, and structure. Many said they didn’t mind if AI was used to polish writing, as long as the content still reflected the sender’s own ideas.

“Minor use of AI, primarily for making small edits to professional emails, is generally considered appropriate,” Cardon told PsyPost.

Still, there was a clear tension between message quality and perceptions of the sender. Supervisors who relied heavily on AI were consistently rated as less sincere, caring, and confident. Only about 40 percent of participants considered supervisors in the high-assistance conditions to be sincere, compared to over 80 percent in the low-assistance conditions.

“The biggest surprise was the intensity of feelings,” Cardon said. “Many respondents expressed indignation about bosses using AI for emails.”

The open-ended responses revealed several reasons behind this skepticism. Many participants expressed a sense of disappointment or frustration when learning that a message—especially a congratulatory one—had been largely written by AI. Some described it as “lazy,” “insincere,” or “dishonest.” Others said it felt like the manager didn’t care enough to write a personal message. This lack of effort was perceived by some as a lack of investment in the team’s success.

Some participants also questioned the competence of supervisors who relied heavily on AI. A number of respondents said they would expect managers to be capable of writing a simple email without outside help, and using AI for this purpose might signal a lack of leadership or communication skills.

The results also showed a significant perception gap between how participants viewed their own use of AI and how they judged others, particularly their supervisors. People tended to evaluate their own AI-assisted writing more favorably than that of their boss. When they imagined themselves using AI, they were more likely to see it as a helpful support tool. But when supervisors used it, especially without much transparency, the use was more likely to raise doubts about sincerity and trustworthiness.

Despite these concerns, most participants said they were generally comfortable with AI being used for this type of message. Even in the high-assistance conditions, a majority said they had no problem with supervisors using AI to write a congratulatory email. However, their comfort often came with caveats. Many participants emphasized that the acceptability of AI use depends on the nature of the message. Messages that are relational or emotional in tone, such as praise or support, were viewed as less appropriate for AI generation than factual updates or routine reminders.

Several respondents also raised longer-term concerns about the repeated use of AI in workplace communication. Some worried that overuse could lead to a loss of human connection or undermine team cohesion. Others feared that if AI becomes the default for all types of messaging, even interpersonal ones, the workplace could begin to feel impersonal or transactional.

“Professionals should be aware of the reputational and relational risks of overusing AI in business communication,” Cardon advised.

As with all research, there are limitations. The study focused on a specific type of message—an email congratulating a team—which may not generalize to all workplace communication. Responses may have differed if the message was about conflict resolution, feedback, or performance reviews. Future research could explore how perceptions vary across different genres of communication and different professional contexts.

The study also centered on the supervisor-subordinate relationship, where power dynamics may heighten concerns about sincerity and trust. Perceptions might differ in peer-to-peer scenarios, or when subordinates use AI to communicate upward.

“We’re at the early stages of mass AI use,” Cardon noted. “The tools will continue to evolve and people’s attitudes may change too.”

The researchers recommend additional studies on whether people feel that AI use should be disclosed, and how that disclosure might affect trust. They also suggest exploring how attitudes toward AI-assisted writing change over time as such tools become more embedded in everyday work life.

“We want to accurately represent people’s views, attitudes, and experiences as AI becomes more embedded in daily communication,” Cardon explained. “We hope this information empowers individuals to use AI in ways that improve their lives and their relationships. We’re all on an AI journey now. We should discuss it and use it thoughtfully and with purpose.”

The study, “Professionalism and Trustworthiness in AI Assisted Workplace Writing: The Benefits and Drawbacks of Writing With AI,” was authored by Peter W. Cardon and Anthony W. Coman.
