AI assistants such as ChatGPT, Copilot, Gemini and Perplexity are taking up more and more space in our lives, helping us understand topics or search for information. So much so that 15% of people under 25 now use them to get their news, according to the Reuters Institute's Digital News Report 2025.
However, a study coordinated by the European Broadcasting Union (EBU) and carried out with the BBC shows that these assistants “regularly distort or misrepresent” the news produced by 22 public broadcasters across 18 European countries. Launched after a first phase conducted in 2024 by the BBC alone, the investigation covered the first half of 2025.
Major factual errors, sourcing problems…
The study “identified multiple systemic problems” across these four popular platforms, the EBU press release explains, denouncing “a systemic distortion of information by AI.” To establish this, professional journalists tested ChatGPT, Copilot, Gemini and Perplexity with 30 news questions, generating almost 3,000 responses. These responses were then assessed against criteria such as accuracy, sourcing, editorial quality and context.
It found that “45% of all AI responses had at least one major flaw,” while 31% of the assistants' responses “had serious sourcing issues: missing, misleading, or incorrect attributions.” Gemini, for example, contained inaccuracies in its statements or quotes in 28% of its responses; Microsoft's Copilot did so in 7% of cases, and ChatGPT in 4%.
And that is not all: 20% of all the assistants' responses contained major accuracy issues, whether outdated information or far-fetched details.
Gemini, runaway “winner” for problematic answers
In this exercise, Google's assistant Gemini posted the worst results, with “major problems” in 76% of its responses, more than double the rate of the other assistants.
At the national level, the results for France are even more critical: 93% of Gemini's answers contained a significant flaw. Radio France, an EBU member, reports that Gemini used a satirical column from the France Inter program “Charline explose les faits” as a source of serious information… Its inability to identify the type of source led it to present erroneous facts as coming from an otherwise reliable outlet, France Inter.
This concern is no doubt linked to a drastic drop in the “refusal rate”: “Assistants are now more likely to respond, even when they cannot give a reliable answer,” the press release explains.
The risk of a loss of trust
“We found that AI assistants mimic the authority of journalism, but fall short of its rigour,” said Peter Archer, director of generative AI programmes at the BBC, quoted in the EBU press release. He goes on to note that “this study demonstrates the urgency for AI companies to correct these flaws” in order to “better reflect the values of trusted journalism.”
Liz Corbin, the EBU's director of news, goes further. For her, “this investigation conclusively demonstrates that these failures are not isolated incidents. (…) They are systemic, global and multilingual.” Such errors risk eroding public trust: “when people no longer know what to trust, they end up not believing anything at all, which leads to a democratic retreat.”
Constant vigilance, at every level
To help the AI giants improve their assistants' results, the BBC and the EBU are currently working on a “practical guide” that will establish a classification, a “taxonomy of common AI failure modes”, enriched with examples, and will list “practical recommendations for AI developers.” Guidance will also be published to help the media raise audience awareness of how to detect these distortions.
Finally, the EBU calls on “European and national regulators to enforce existing laws on information integrity, digital services and media pluralism,” and stresses the importance of “independent and continuous monitoring of AI assistants.”
Source: BFM TV
