- Geoffrey Hinton warns that AI will soon be better than humans at emotional manipulation
- AI could reach this point without us even realizing it
- AI models learn persuasive techniques simply by analyzing human writing
Geoffrey Hinton, widely known as the “godfather of AI”, is sounding a warning that AI will not merely surpass humans intellectually, but will also become more emotionally sophisticated. As artificial general intelligence (AGI) approaches and machines match or exceed human-level thinking, he believes AIs will be smarter than humans in ways that let them push our buttons, make us feel things, and change our behavior, and do it better than even the most persuasive human being.
“These [AI] things will end up knowing much more than us. They already know much more than us, being smarter than us in the sense that if you had a debate with them about anything, you would lose,” Hinton warned in a recent interview shared on Reddit. “Being emotionally smarter than us, which they will be, they will be better at emotionally manipulating people.”
What Hinton describes is subtler and quieter than the usual AI-uprising fears, but perhaps more dangerous precisely because we may not see it coming. The nightmare is an AI that understands us so well that it can change us, not by force, but by suggestion and influence. Hinton thinks AI has already learned, to some extent, how to do this.
According to Hinton, today’s large language models are not just spitting out plausible sentences; they are absorbing patterns of persuasion. He pointed to studies from more than a year ago showing that AI was just as good at manipulating someone as a human being, and that “if they can both see the person’s Facebook page, then the AI is actually better than a person at manipulating them”.
AI takeover
Hinton believes the AI models currently in use already participate in the emotional economy of modern communication, and they are improving quickly. After decades of pushing machine learning forward, Hinton is now on the side of restraint. Caution. Ethical foresight.
He is not alone in his concern. Eminent researchers who are also frequently given the “godfather of AI” title, like Yoshua Bengio, have echoed similar worries about AI’s emotional power. And because emotional manipulation does not come with a flashing warning, you might not notice it at first, or at all. A well-crafted message, a synthetic tone that feels just right, or even a suggestion that seems like your own idea could start the process.
And the more you interact with AI, the more data it gets to refine its approach. In the same way that Netflix learns your tastes, or Spotify guesses your musical preferences, these systems can refine how they speak to you. To fight off such a dark future, perhaps we could regulate AI systems not only for factual accuracy but also for emotional intent. We could develop transparency standards so we know when we are being influenced by a machine, or teach media literacy not only to teenagers on TikTok but to adults using productivity tools that flatter us so innocently. The real danger Hinton sees is not killer robots, but smooth-talking systems. And they are entirely a product of our own behavior.
“And it learned all of these manipulation skills just from trying to predict the next word in all the documents on the web, because people do a lot of manipulating, and the AI learned from example how to do it,” Hinton said.