- AI imitates trust while relying on rigid, structured evaluation models
- Machines separate human traits instead of forming holistic impressions
- Competence and integrity dominate decisions in both humans and AI
Modern AI systems don’t just process information; they make systematic judgments about people in a way that resembles human trust, but with important differences.
A new study from the Hebrew University, published in Proceedings of the Royal Society, analyzed more than 43,000 simulated decisions alongside a thousand human participants across five scenarios.
These scenarios included deciding how much money to lend a small business owner, whether to trust a babysitter, how to evaluate a boss, and how much to give to a nonprofit founder.
How AI breaks down human judgment into separate columns
The results reveal that AI tools form something akin to trust, but their judgment works very differently from ours.
Both humans and AI favored people who appeared competent, honest, and well-meaning, meaning the machines captured something real in human trust.
“This is good news,” said Professor Yaniv Dover. “AI doesn’t make random decisions. It captures something real about how humans evaluate each other.”
However, humans tend to form a general impression, blending multiple traits into a single, intuitive, holistic judgment.
AI does something very different: it breaks people down into components, rating competence, integrity, and kindness, almost like separate columns in a spreadsheet.
“People in our study judge others in a messy, holistic way,” explained Valeria Lerman. “AI is cleaner, more systematic, and that can lead to very different results.”
These differences appeared even when all other details about the person were the same.
“Humans are of course prejudiced,” Professor Dover said. “But what surprised us is that AI biases can be more systematic, more predictable, and sometimes stronger.”
In financial scenarios such as deciding how much to lend or give, AI systems showed consistent differences based solely on demographic characteristics.
Older people often enjoyed more favorable outcomes, religion had large effects, particularly in monetary scenarios, and gender also influenced decisions in some models.
Another key point is that there is no “one-size-fits-all opinion on AI.” Different models often made different judgments about the same person.
This means that the choice of an AI system could quietly shape real-world outcomes. “The model you use really matters,” Lerman noted.
Large language models are already used to screen applicants, assess creditworthiness, recommend medical actions, and guide organizational decisions.
The study suggests that while AI can mimic the structure of human judgment, it does so in a more rigid and less nuanced way, with biases that may be harder to detect.
“These systems are powerful,” Dover said. “They can model some aspects of human reasoning consistently. But they are not human and we should not assume that they see people the way we do.”
As AI tools and agents move from assistants to decision-makers, understanding how they “think” becomes essential for organizations deploying them at scale.
The researchers emphasize that their findings are not a warning against AI, but rather a call for awareness.
That said, the question is no longer whether we trust machines; it’s whether we understand how they trust us.