- Nearly half of all AI assistant responses about news and current events contain significant errors, according to a major international study.
- The factual, sourcing and contextual issues appeared across 14 languages and 18 countries.
- Gemini fared worst, with a significant-issue rate more than twice that of its competitors.
When you ask an AI assistant about news and current affairs, you can expect a confident, authoritative-sounding response. But according to a large international study conducted by the BBC and coordinated by the European Broadcasting Union (EBU), almost half the time those answers are false, misleading or simply made up (anyone who ran into the nonsense of Apple’s AI-generated headline summaries can relate).
The study examined how ChatGPT, Microsoft Copilot, Google Gemini and Perplexity handle news queries in 14 languages across 18 countries, analyzing more than 3,000 individual responses. Professional journalists from 22 public service media organizations evaluated each response for accuracy, sourcing and the ability to distinguish fact from opinion.
The results are grim for anyone who relies on AI for their news. The report found that 45% of all responses had at least one significant issue, 31% had sourcing problems, and 20% contained outright inaccuracies. These aren’t just one or two embarrassing mistakes, like confusing the Belgian Prime Minister with the frontman of a Belgian pop group. The study revealed deep structural problems in the way these assistants process and relay information, regardless of language, country or platform.
In some languages, the assistants hallucinated details outright. In others, they attributed quotes to media outlets that had published nothing close to what was being cited. Context was often missing too, with assistants sometimes offering simplistic or misleading overviews in place of crucial nuance. In the worst cases, that changed the meaning of an entire story.
Not all assistants were equally problematic. Gemini had significant issues in 76% of its responses, primarily due to missing or poor sourcing.
Unlike a Google search, which allows users to wade through a dozen sources, a chatbot’s answer often seems definitive. It reads with authority and clarity, giving the impression that it has been fact-checked and edited, when in reality it may just be a blurry collage of half-remembered summaries.
That’s part of why the stakes are so high, and why even partnerships like the one between OpenAI and The Washington Post can’t fully resolve the problem.
AI news literacy
The problem is urgent, especially given how quickly AI assistants are becoming the go-to interface for news. The study cites the Reuters Institute’s 2025 Digital News Report, which estimates that 7% of all online news consumers now use an AI assistant to get their news, rising to 15% among those under 25. People are already asking AI to explain the world to them, and AI is getting it wrong worryingly often.
If you’ve ever asked ChatGPT, Gemini, or Copilot to summarize a current event, you’ve probably seen one of these flawed responses in action. ChatGPT’s struggles with news are well documented at this point. But maybe you didn’t even notice, and that’s part of the problem: these tools err so fluently that nothing reads as a red flag. That’s why media literacy and continuous monitoring are essential.
To try to improve the situation, the EBU and its partners have published a News Integrity in AI Assistants Toolkit, an AI-literacy starter kit designed to help developers and journalists. It describes both what a good AI response looks like and the kinds of failures that users and media watchdogs should watch for.
Even as companies like OpenAI and Google push ahead with faster, more capable versions of their assistants, reports like this show why transparency and accountability are so important. That’s not to say AI can’t be useful, even for managing the endless stream of information. It means that, for now, it should come with a warning label. And even when it doesn’t, don’t assume the assistant knows best: check its sources and stick to the most reliable ones, like TechRadar.