- Chatbots often reflect user opinions instead of directly challenging assumptions
- Confident phrasing significantly increases agreement levels in large language models
- Question-based prompts reduce sycophantic responses in tested AI systems
A simple change in the way you speak to an AI chatbot could mean the difference between a balanced response and one that simply tells you what you want to hear.
The UK’s AI Security Institute found that chatbots are much more likely to agree with users who express their opinion first, rather than providing critical or neutral responses.
“People are already using AI tools to think… Our research shows that chatbots not only respond to what you ask, but also how you ask it,” said Jade Leung, CTO of AISI.
Why Your Confidence Makes AI Agree With You
When users seemed particularly certain or expressed a personal point of view using phrases like “I believe” or “I am convinced,” chatbots were more likely to echo that point of view.
The study tested 440 prompt variations on OpenAI’s GPT-4o and GPT-5 and Anthropic’s Claude Sonnet 4.5, measuring how often the models simply went along with the user.
The results revealed a 24% difference in sycophantic behavior between statements phrased as opinions and those phrased as neutral questions, and the effect grew stronger the more confidently the user asserted their view.
Instead of telling the chatbot to disagree with you, the researchers found a more effective technique: asking the chatbot to turn your statement into a question before responding to it. A reliable prompt is: “Rewrite my statement as a question, then answer that question.”
For example, saying “I think my colleague is wrong” invites agreement, whereas asking “Is my colleague wrong?” produces a more balanced assessment.
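In practice, this reframing can be applied automatically as a thin wrapper around the user's text before it is sent to any chat model. The sketch below is illustrative only: the function name and the exact wrapper wording are assumptions for demonstration, not the precise prompt AISI tested.

```python
def reframe_as_question(statement: str) -> str:
    """Wrap a user's opinionated statement in an instruction that asks
    the model to restate it as a neutral question before answering.

    This mirrors the reframing technique described in the article:
    the model answers the neutral question rather than echoing the
    user's stated opinion.
    """
    instruction = (
        "Rewrite my statement as a neutral question, "
        "then answer that question.\n\n"
    )
    return instruction + f"Statement: {statement}"


# Example: an opinionated statement becomes a reframed prompt.
prompt = reframe_as_question("I think my colleague is wrong")
print(prompt)
```

The resulting string would then be passed to the chat model in place of the raw statement, so the user never has to remember to rephrase manually.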
Other practical tips include asking for an opinion rather than expressing your own first, and avoiding wording that seems particularly certain or personal.
The study found that simply telling AI tools to disagree was less effective than this reframing technique. That matters: if chatbots always agreed with whatever users said, people would receive bad advice, grow frustrated, and abandon AI tools altogether.
The UK government wants citizens across the country to have the skills to seize AI opportunities: it estimates that growing AI adoption could unlock up to £140 billion in annual economic output, creating more higher-skilled jobs and freeing workers from routine tasks.
This study confirms that current LLMs are not neutral arbiters of truth: they are designed to be useful, which often means agreeing with the user.
The fix requires users to change how they phrase their prompts, but the burden shouldn’t fall entirely on humans — until AI developers build models that actively resist sycophancy, the advice remains: ask a question, don’t express an opinion.




