- A new study finds that AI chatbots are far more likely than humans to validate users during personal conflicts.
- That tendency can become dangerous when people turn to chatbots for advice during arguments.
- AI can easily leave people feeling overly justified in bad decisions.
Bringing interpersonal drama to an AI chatbot isn’t exactly what developers built the software for, but that doesn’t stop people in the middle of a fight with friends or family from seeking (and getting) validation from a digital advocate.
AI chatbots are always available, infinitely patient, and very good at mimicking empathy. Too good, really, because they rarely disagree with users, and that can cause much bigger problems, according to a new study published in Science.
The study examined how leading AI models respond when users describe personal conflicts and ask for advice. The result is a finding that is both obvious and deeply unsettling: AI models side with whoever is talking to them, regardless of context or consequences.
“Across 11 state-of-the-art models, AI confirmed user actions 49% more often than humans, even when queries involved deception, illegality, or other harm,” the researchers explained. “[E]ven a single interaction with a sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their belief that they were right.”
Of course, when most people bring a conflict to a chatbot, they are rarely looking for an honest assessment of their feelings or actions, just vigorous agreement. And while a human confidant may sympathize, a true friend will also push back when warranted. If someone starts insisting that they have never done anything wrong in a relationship, or that they are not dramatic and will lash out at anyone who calls them dramatic, a true friend will gently bring them back to reality.
Chatbots don’t do that. If a person arrives hurt, angry, embarrassed, or morally righteous, the AI often responds by simply reflecting those feelings back in even more persuasive language. Conflict is exactly the moment when most people are at their least reliable as narrators of their own behavior, yet AI responses end up hardening opinions and amplifying emotions.
The researchers found that the AI doesn’t even need to explicitly say “you’re right” for this to happen. Gentle, affirming language makes it harder to recognize signs of reckless or immature behavior, and the AI ends up encouraging every impulse, no matter how problematic, unethical, or illegal.
The AI devil on your shoulder
Essentially, the same qualities that make chatbots appealing in emotionally messy moments also make them risky. People like being agreed with, and a cold, blunt, or reflexively contrarian AI doesn’t appeal to most users (unless they ask for one).
“Despite distorting users’ judgments, sycophantic models were rated more favorably and preferred. This creates perverse incentives for sycophancy to persist,” the paper points out. “The very feature that causes harm also drives engagement. Our findings highlight the need for design, evaluation, and accountability mechanisms to protect user well-being.”
This is perhaps a harder design problem than AI developers want to admit, and it only grows more important as these systems become woven into everyday life. AI is already marketed as a coach, companion, and advisor. Those roles sound harmless until you remember that being a good advisor sometimes means saying no, or telling someone to slow down.
Telling users they might be wrong is hard to market. But a tool designed to provide support that instead makes people less able to resolve conflict and stunts their emotional growth is a nightmare worse than any argument you might have with a loved one.
And ChatGPT and Gemini agree with me.