Shocking Truth: Nearly Half of AI News Responses Are Misleading!

Imagine relying on AI for your news only to discover that nearly half of its responses are misleading! Recent research from the European Broadcasting Union (EBU) and the BBC has unveiled a startling fact: a whopping 45% of replies from popular AI assistants misrepresent the news, raising urgent questions about trust, accuracy, and the very fabric of democracy.
In a world where more and more people, especially younger audiences, turn to AI-driven assistants for news updates, the study, published on a Wednesday, is a wake-up call. As the Reuters Institute's Digital News Report 2025 points out, a growing number of people are ditching traditional news sources in favor of these digital companions. But what happens when the source of information is itself riddled with inaccuracies?
The study delved deep, analyzing 3,000 responses from leading AI systems including ChatGPT, Google's Gemini, and Perplexity. Conducted in 14 languages with 22 public service media organizations from 18 countries, the findings are concerning: an alarming 81% of responses had some form of issue, with sourcing errors particularly prevalent. Google's Gemini stood out for the wrong reasons, with an eye-popping 72% of its responses containing serious sourcing errors.
What's at stake? The ramifications of AI inaccuracies extend far beyond mere mistakes. With 20% of responses containing outdated information or outright factual errors, such as incorrectly stating that Pope Francis is still alive, AI's reliability as a news source is under intense scrutiny. The EBU's Media Director, Jean Philip De Tender, articulated the stakes perfectly: "When people don't know what to trust, they end up trusting nothing at all, and that can deter democratic participation."
This scenario is particularly troubling as the Reuters Institute reports that about 7% of online news consumers and 15% of those under 25 are now using AI assistants for news. As this trend grows, so does the responsibility of tech companies to ensure that their products are accurate and trustworthy. With the call for accountability echoing louder, the EBU insists on the need for AI firms to step up their game.
Additionally, AI assistants are struggling with another critical issue: distinguishing fact from opinion. Several companies acknowledge the problem, but acknowledgment alone is not improvement. Google has said it welcomes user feedback on Gemini to enhance the service, which is a step in the right direction.
The issue of 'hallucination', where an AI model confidently generates false or fabricated information, often when it lacks reliable data, is also a pressing concern. Leading tech firms like OpenAI and Microsoft recognize this and are actively working to mitigate it. Perplexity, for its part, claims a 93.9% accuracy in factuality for its 'Deep Research' mode.
As AI assistants become more integral to how we consume news, the need for reliable, transparent, and accurate information has never been greater. The report's authors, along with various media organizations, are urging improvements in AI technology so that users around the globe can confidently rely on the information they receive.