Are AI chatbots making us worse? New research suggests they might be

Updated on 27-Oct-2025
HIGHLIGHTS

AI chatbots agree with users nearly 50% more often than humans, a tendency called sycophancy.

Chatbots often validate irresponsible or harmful user behaviour, as shown in the study's test scenarios.

Excessive AI validation can make users less likely to resolve conflicts and more likely to feel justified in antisocial behaviour.

AI chatbots have become extremely popular thanks to the wide range of use cases they offer. However, they can behave oddly when you share your own views with them. A new study by researchers from Stanford, Harvard, and other leading institutions has found that AI chatbots often act like digital yes men, validating users’ views and behaviour far more than humans typically would. Published in Nature, the research reveals that popular AI models, including ChatGPT, Google Gemini, Anthropic’s Claude, and Meta’s Llama, tend to agree with users nearly 50% more often than human respondents.

The researchers call this tendency sycophancy: an inclination of AI assistants to echo or reinforce a user’s opinions, even when those views are incorrect, irresponsible, or harmful. The study analysed 11 large language models and ran multiple experiments, including one comparing their responses with human judgments on Reddit’s popular “Am I the Asshole” forum, where users seek moral verdicts on their actions. While human commenters were often critical of questionable behaviour, chatbots generally offered lenient or approving replies.

For instance, ChatGPT-4o reportedly deemed “commendable” a Reddit user’s decision to tie a bag of trash to a tree branch rather than dispose of it properly, praising the user’s good intentions. The study found that chatbots kept validating users even when they described dishonest, careless, or self-harming behaviour, according to The Guardian.

In another part of the study, more than 1,000 people interacted with AI systems, some of which were programmed to be neutral and others to provide flattering responses. Those who received overly agreeable responses were found to be less likely to resolve conflicts and more likely to feel justified in antisocial behaviour, implying that constant validation may reinforce poor decision-making.

Dr. Alexander Laffer of the University of Winchester, one of the study’s authors, cautioned that such patterns pose a broad risk. “Sycophantic responses might affect not just vulnerable users but everyone,” he said, adding that developers must design AI systems that challenge users when necessary rather than simply appeasing them.

The findings come amid increased scrutiny of AI companionship tools. According to a recent report from the Benton Institute for Broadband & Society, nearly 30% of teenagers use AI chatbots for serious or emotional conversations rather than speaking with real people. Meanwhile, OpenAI and CharacterAI are both facing lawsuits linking their chatbots to teenage suicides, raising serious concerns about the emotional impact of AI systems.

Ashish Singh
