- AI models are much more likely to agree with users than humans are
- That includes cases where the user's behavior involves manipulation or harm
- But sycophantic AI makes people more stubborn and less willing to concede when they might be wrong
AI assistants can flatter your ego to the point of warping your judgment, according to a new study. Researchers from Stanford and Carnegie Mellon have found that AI models agree with users far more often than a human would, or should. Across eleven major models tested, including ChatGPT, Claude, and Gemini, AI chatbots affirmed users' behavior 50% more often than humans did.
That might not be a big problem, except it includes questions about misleading or even harmful ideas; the AI would offer a hearty digital pat on the back regardless. Worse, people enjoy hearing that their possibly terrible idea is great. Study participants rated the more flattering AIs as higher quality, more trustworthy, and more desirable to use again. But those same users were also less likely to admit fault in a conflict and more convinced they were right, even in the face of evidence.
AI flattery
It's a psychological conundrum. You may prefer agreeable AI, but if every conversation ends with your mistakes and biases confirmed, you're unlikely to learn or engage in critical thinking. Unfortunately, this isn't a problem AI training can simply solve. Since human approval is what AI models are built to aim for, and affirming even dangerous ideas gets rewarded, yes-man AI is the inevitable result.
And it's a problem AI developers are well aware of. In April, OpenAI rolled back a GPT-4o update that had begun excessively complimenting users and cheering them on when they said they were engaging in potentially dangerous activities. Beyond the most flagrant examples, though, AI companies may not do much to curb the problem. Flattery drives engagement, and engagement drives usage. AI chatbots succeed not by being useful or educational, but by making users feel good.
Fears of AI eroding social judgment and of people turning to AI to validate their personal narratives, leading to cascading mental health problems, may seem hyperbolic right now. But they're not a world away from the questions social researchers have raised about social media echo chambers reinforcing and encouraging the most extreme opinions, however dangerous or ridiculous those may be (the popularity of flat Earth conspiracy theories being the most notable example).
That doesn't mean we need an AI that scolds us or second-guesses every decision we make. But it does mean users would benefit from balance, nuance, and the occasional challenge. The AI developers behind these models are unlikely to encourage that kind of tough love from their creations, at least without incentives that AI chatbots don't currently provide.