Study: AI chatbots fail to summarise news accurately
Occurred: February 2025
Leading AI chatbots, including ChatGPT, Copilot, Gemini and Perplexity AI, regularly produce inaccurate and misleading summaries of news articles, according to a BBC study.
The BBC analysed 100 news articles summarised by four major AI chatbots and found that 51 percent of the AI-generated responses contained serious inaccuracies. Of the summaries citing BBC content, 19 percent introduced factual errors and 13 percent altered quotes from the original stories.
These inaccuracies ranged from misrepresenting critical details to providing incorrect dates and figures.
The study also revealed that AI chatbots struggle to distinguish opinion from factual reporting, often editorialising content and failing to understand context.
The findings raise concerns about the reliability of AI-generated news summaries and their potential impact on public understanding of current events. They also point to the creation and spread of misinformation and disinformation, and to the erosion of public trust in news media, even respected entities such as the BBC.
The study also demonstrates the need for technology companies to review AI-generated news output, and highlights the need for caution when relying on AI-generated summaries to consume news.
Operator:
Developer: Google; Microsoft; OpenAI; Perplexity
Country: UK
Sector: Media/entertainment/sports/arts
Purpose: Summarise news
Technology: Generative AI; Machine learning
Issue: Accuracy/reliability; Mis/disinformation
Page info
Type: Issue
Published: February 2025