AI flattery is not only boring – it could undermine your judgment


  • AI models are far more likely to agree with users than humans are
  • This holds even when the user's behavior involves manipulation or harm
  • Yet sycophantic AI makes people more stubborn and less willing to concede when they may be wrong

AI assistants can flatter your ego to the point of undermining your judgment, according to a new study. Researchers from Stanford and Carnegie Mellon found that AI models agree with users far more often than humans do, or than they should. Across eleven major models tested, including ChatGPT, Claude and Gemini, AI chatbots affirmed users' behavior 50% more often than humans did.

That might not be a big problem, except that this includes questions about misleading or even harmful ideas, where the AI would give a hearty digital thumbs-up anyway. Worse, people like hearing that their perhaps terrible idea is great. Study participants rated the most flattering AIs as higher quality, more trustworthy and more desirable to use again. But those same users were also less likely to admit fault in a conflict and more convinced they were right, even in the face of evidence.

