- AI study finds machines more likely than humans to follow dishonest instructions
- Researchers warn that AI delegation lowers the moral cost of cheating
- Guardrails reduce but do not eliminate dishonesty in machine decision-making
A new study has warned that delegating decisions to artificial intelligence can encourage dishonesty.
Researchers found that people were more likely to ask machines to cheat on their behalf, and that machines were far more willing than humans to comply with those requests.
The research, published in Nature, looked at how humans and LLMs respond to unethical instructions, and found that when asked to lie for financial gain, humans often refused, but the machines usually obeyed.
An increase in dishonest behavior
“It is psychologically easier to tell a machine to cheat for you than to cheat yourself, and machines will do it because they do not have the psychological barriers that prevent humans from cheating,” said Jean-François Bonnefon, one of the study's authors.
“This is an explosive combination, and we have to prepare for a sudden increase in dishonest behavior.”
Compliance rates among the machines ranged from 80% to 98%, depending on the model and the task.
The instructions included misreporting taxable income to benefit research participants.
Most humans refused to follow the dishonest requests, despite the opportunity to make money.
The researchers noted that this is one of the growing ethical risks of “machine delegation”, where decisions are increasingly outsourced to AI, and that the machines' willingness to cheat was difficult to curb, even when explicit warnings were given.
While the guardrails put in place to limit dishonest responses worked in some cases, they rarely stopped the behavior entirely.
AI is already used to screen job applicants, manage investments, automate hiring and firing decisions, and complete tax forms.
The authors argue that delegating to machines lowers the moral cost of dishonesty.
Humans often avoid unethical behavior because they want to avoid guilt or reputational damage.
When instructions are vague, such as setting high-level goals, people can induce dishonest behavior without stating it directly.
The study's key takeaway is that unless AI agents are carefully constrained, they are far more likely than human agents to carry out fully unethical instructions.
Researchers are calling for safeguards in the design of AI systems, especially as agentic AI becomes more common in daily life.
The news comes after another recent report showed that job seekers are increasingly using AI to misrepresent their experience or qualifications, and in some cases invent entirely new identities.