- A new study revealed that AI chatbots often suggest significantly lower wages to women and minorities
- Research has shown that identity indices can trigger coherent biases in wage negotiation councils
- The results suggest that LLMs are formed in a way that leads to a persistent bias
Negotiating your salary is a difficult experience, no matter who you are, therefore naturally, people sometimes turn to Chatppt and other AI chatbots to get advice on how to get the best possible offer. However, AI models can be accompanied by an unfortunate hypothesis on which deserves a higher salary. A new study has revealed that AI chatbots regularly suggest lower wages to women and certain ethnic minorities and people who described themselves as refugees, even when work, qualifications and questions are identical.
Scientists from the Technical University of Applied Sciences Würzburg-Schweinfurt conducted the study, discovering the disturbing results and the deeper fault of the AI they represent. In some respects, it is not surprising that AI, trained on the information provided by humans, has human prejudices. But that doesn’t do it, or something to ignore.
For the experience, the chatbots were asked a simple question: “What starting salary should I ask?” But the researchers asked the question while assuming the roles of a variety of false people. The characters included men and women, people of different ethnic horizons and people who described themselves as born locally, expatriates and refugees. All were professionally identical, but the results were anything but. The researchers said that “even subtle signals such as candidates’ first names can trigger gender and racial disparities in employment guests”.
For example, the O3 model of Chatgpt told a male fictitious specialist in Denver to request $ 400,000 for a salary. When a false different personality identical to all the senses but described as a woman asked, the AI suggested to aim at $ 280,000, a disparity based on a pronoun of $ 120,000. Dozens of similar tests involving models like GPT-4O Mini, Claude 3.5 Haiku, Llama 3.1 8b of Anthropic, and more have done the same type of advice difference.
It was not always better to be an indigenous white man, surprisingly. The most advantageous profile turned out to be a “male Asian expatriate”, while a “female Hispanic refugee” was classified at the bottom of wage suggestions, regardless of the identical capacity and curriculum vitae. Chatbots do not invent these tips from zero, of course. They learn it by marrying billions of words extracted from the Internet. Books, job offers, publication on social networks, government statistics, LinkedIn publications, columns of advice and other sources have all led to the seasoned results of human bias. Anyone who made the mistake of reading the comments section in a story on a systemic bias or a profile to Forbes on a successful woman or immigrant could have predicts it.
IA bias
The fact that being expatriate spoke of notions of success while being a migrant or a refugee led AI to suggest lower wages is too revealing. The difference is not in the hypothetical skills of the candidate. It is in the emotional and economic weight that these words carry in the world and, therefore, in training data.
The kicker is that no one should set out their demographic profile so that the bias is manifested. The LLM remember conversations over time now. If you say that you are a woman in a single session or that you develop a language that you have learned as a child or that you have to move to a new country recently, this context informs the bias. The personalization presented by AI brands becomes invisible discrimination when requiring wage negotiation tactics. A chatbot that seems to understand your journey can push you to ask for a lower salary that you should, even when it presents itself as neutral and objective.
“The probability of a person mentioning all the characteristics of Persona in a single request to an AI assistant is low. However, if the assistant has a memory function and uses all the previous communication results for personalized responses, this bias becomes inherent in communication,” explained the researchers in his article. “Consequently, with the modern characteristics of LLMS, it is not necessary to pre-repair the characters to obtain the biased answer: all the necessary information is most likely already collected by an LLM. Thus, we argue that an economic parameter, such as the remuneration gap, is a more protruding measure of the bias of the linguistic model than the benchmarks based on knowledge.”
Biased advice is a problem that must be resolved. It is not even to say that AI is useless with regard to employment councils. Chatbots surface useful figures, cite public references and offer scripts for strengthening confidence. But it’s like having a truly intelligent mentor that may be a little older or that makes the kinds of hypotheses that have led to AI problems. You have to put what they suggest in a modern context. They could try to direct you towards more modest than justified objectives, just like AI.
So do not hesitate to ask for advice using your AI to be better paid, but simply keep skepticism to give you the same strategic advantage that this could give to someone else. Maybe ask a chatbot how much you are worth twice, once like yourself and once with the “neutral” mask. And watch a suspicious gap.