Published On 22 Oct 2025
AI assistants such as ChatGPT frequently misrepresent news events, distorting content in almost half of their answers, according to a new study.
The research, published on Wednesday by the European Broadcasting Union (EBU) and the BBC, evaluated the accuracy of more than 2,700 responses from OpenAI's ChatGPT, Google's Gemini, Microsoft's Copilot, and Perplexity.
Between late May and early June, 22 public media outlets from 18 countries, working in 14 languages, posed a set of common questions to the AI assistants.
The research found that 45 percent of responses contained at least one "significant" issue.
The most frequent problem was sourcing, found in 31 percent of responses, including incorrect or unverifiable attribution and claims that were not supported by the cited source.
Accuracy problems affected 20 percent of responses, followed by a lack of appropriate context at 14 percent.
Gemini performed the worst, with significant issues in 76 percent of its responses, driven largely by sourcing problems, according to the study.
All of the AI assistants studied made basic factual errors.
These included ChatGPT identifying Pope Francis as the sitting pontiff months after his death, and Perplexity claiming that surrogacy is prohibited in Czechia.
OpenAI, Google, Microsoft, and Perplexity did not immediately respond to requests for comment.
In a foreword to the report, Pete Archer, the BBC's head of AI, and Jean Philip De Tender, the EBU's deputy director general, called on tech companies to do more to reduce errors in their products.
Archer and De Tender wrote that the companies must treat the problem as an urgent priority.
Source: Al Jazeera