- Study finds professionals feel disrespected when clients compare their expertise to AI-generated answers
- Advisors become less motivated after losing clients to AI-powered online recommendations
- Customers using AI fact-checking may subsequently appear less trustworthy to professionals
New research from Monash Business School finds that professional advisors feel offended when clients use AI to get a second opinion on their recommendations.
The research, published in Computers in Human Behavior, found that professionals become less motivated to work with clients who consult AI tools.
This effect persists even when the customer uses AI only for basic information, or as a complementary resource rather than a replacement.
Human experts feel insulted by AI fact-checking
“Advisors view AI as significantly inferior to themselves; therefore, being placed in the same category as an AI system is an insult and a lack of respect, undermining advisors’ willingness to engage,” said Associate Professor Gerri Spassova, the lead author.
Imagine spending an hour helping a customer plan a complex trip, carefully mapping out flights, hotels, and itineraries, only for that customer to take your recommendations and book everything through an AI chatbot.
Researchers found that professionals who lost business to an AI were significantly less willing to work with that client again in the future.
Clients who consult AI may be perceived as less competent and less friendly by the advisors they turn to for help.
When clients rely on AI, it makes advisors question the value of their own human contribution, and this could get worse as AI improves.
Many advisors take offense at this, and it is the main reason they decline to work with clients who consult AI.
“One can only speculate,” said Associate Professor Spassova. “My hunch is that the situation will not improve much, firstly because the jobs of professional advisors are at stake.
“Furthermore, as AI improves, it may threaten advisors’ self-esteem. So when clients rely on AI, it would make advisors question the value of their human contribution.”
The study suggests that for new client-advisor relationships, people should not reveal that they consulted AI before the meeting.
A long history of collaboration may soften the negative reaction, but even then the advisor may still feel cheated.
This applies to doctors, lawyers, and other professionals whose expert advice clients can fact-check with AI tools.
A doctor who has spent years training doesn’t want to be questioned by a patient who spent five minutes on ChatGPT.
AI tools generally provide only a broad overview of a situation and remain prone to errors.
Their judgment depends heavily on how much information you provide; if you are not detailed enough, their responses can be misleading.
Additionally, AI answers questions based on how they are phrased, and users can easily steer an AI tool toward telling them what they want to hear.
Given these nuances, it would be unfair to judge a professional with years of education and experience based on an uncertain tool.
There is absolutely no need to tell a professional that you have consulted AI, as this creates a feeling of a “lack of trust”.
Until professional standards adapt to the presence of AI, clients would do well to keep their AI fact-checking private or risk harming their professional relationships.
Follow TechRadar on Google News and add us as a favorite source to get our news, reviews and expert opinions in your feeds.