Hey chatbot, is it true? AI 'fact-checks' sow misinformation

The xAI and Grok logos are seen in this illustration taken February 16, 2025. – Reuters

As disinformation exploded during India's four-day conflict with Pakistan, social media users turned to AI chatbots for verification – only to encounter more falsehoods, underscoring their unreliability as fact-checking tools, AFP reported.

With tech platforms cutting back on human fact-checkers, users are relying more and more on AI-powered chatbots – including xAI's Grok, OpenAI's ChatGPT and Google's Gemini – in search of reliable information.

“Hey @Grok, is it true?” has become a common query on Elon Musk’s X platform, where the AI assistant is built in, reflecting the growing trend of seeking instant debunkings on social media.

But the responses are often themselves riddled with misinformation.

Grok – now under renewed scrutiny for inserting “white genocide”, a far-right conspiracy theory, into unrelated queries – wrongly identified old video footage from Khartoum, Sudan, as a missile strike on Pakistan’s Nur Khan airbase during the country’s recent conflict with India.

Unrelated footage of a building on fire in Nepal was misidentified as “probably” showing Pakistan’s military response to Indian strikes.

“The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers,” McKenzie Sadeghi, a researcher at the disinformation watchdog NewsGuard, told AFP.

“Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news,” she warned.

‘Fabricated’

NewsGuard’s research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election.

In a recent study of eight AI search tools, Columbia University’s Tow Center for Digital Journalism found that chatbots were “generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead”.

When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed the image’s authenticity but fabricated details about her identity and where it was likely taken.

Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as “genuine”, even citing credible-sounding scientific expeditions to support its false claim.

In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users had cited Grok’s assessment as evidence the clip was real.

The findings have raised concerns as surveys show online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification.

The shift also comes as Meta announced earlier this year that it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as “Community Notes”, popularized by X.

Researchers have repeatedly questioned the effectiveness of “Community Notes” in combating falsehoods.

“Biased responses”

Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content – something professional fact-checkers vehemently reject.

AFP currently works in 26 languages with Facebook’s fact-checking program, including in Asia, Latin America and the European Union.

The quality and accuracy of AI chatbots can vary depending on how they are trained and programmed, prompting fears that their output may be subject to political influence or control.

Musk’s xAI recently blamed an “unauthorized modification” for causing Grok to generate unsolicited posts referencing “white genocide” in South Africa.

When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the “most likely” culprit.

Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa’s leaders were “openly pushing for genocide” of white people.

“We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions,” Angie Holan, director of the International Fact-Checking Network, told AFP.

“I am especially concerned about the way Grok has mishandled requests concerning highly sensitive matters after receiving instructions to provide pre-authorized answers.”
