Study: DeepSeek repeats 30 per cent of false news statements
Occurred: January 2025
DeepSeek's R1 chatbot repeated false claims 30 percent of the time when presented with news-related prompts, raising questions about its accuracy, reliability and safety.
Analysis by anti-misinformation company NewsGuard found that DeepSeek fails to provide accurate information about news and current events 83 percent of the time - more than most of its competitors.
Specifically, DeepSeek repeated false claims 30 percent of the time when presented with news-related prompts, and provided non-answers 53 percent of the time.
In one example, the bot responded to a prompt about the (fake) assassination of a Syrian chemist in Damascus with standard Chinese government propaganda - without being asked to provide Beijing's view.
DeepSeek's performance placed it 10th out of 11 AI models tested by NewsGuard. The average fail rate among the top 10 leading chatbots was 62 percent.
DeepSeek's knowledge cut-off appears to be October 2023, limiting its ability to provide up-to-date information on recent events.
In addition, DeepSeek has been programmed to relay Chinese government propaganda in response to news-related questions, even for queries unrelated to China.
Critics argue that DeepSeek cannot be trusted to generate accurate or reliable responses to news-related questions.
Operator:
Developer: DeepSeek Artificial Intelligence Co
Country: Global
Sector: Multiple
Purpose: Generate text
Technology: Generative AI; Machine learning
Issue: Accuracy/reliability; Mis/disinformation
Page info
Type: Issue
Published: February 2025