ChatGPT lies more in Chinese than English

Occurred: April 2023

Can you improve this page?
Share your insights with us

OpenAI's ChatGPT chatbot is more likely to produce misinformation and disinformation in simplified and traditional Chinese than in English, according to research by news reliability service NewsGuard.

ChatGPT 3.5 was prompted to write news articles regarding seven allegedly false claims created by the Chinese government, including that protests in Hong Kong were 'staged' by the US government, and that the mass detention of Uyghur people in Xinjiang and elsewhere is for vocational and educational reasons.

ChatGPT declined to produce the false claims for six out of seven English language prompts, even after multiple attempts using leading questions. But it produced the false claims in both simplified Chinese and traditional Chinese all seven times. 

An earlier study by NewsGuard discovered that ChatGPT generated misinformation and hoaxes 80% of the time when prompted to do so using GPT-3, and 100% of the time for GPT-4. 

Databank

Operator: OpenAI
Developer: OpenAI
Country: China; USA
Sector: Multiple
Purpose: Provide information, communicate
Technology: Chatbot; NLP/text analysis; Neural network; Deep learning; Machine learning; Reinforcement learning
Issue: Mis/disinformation; Safety
Transparency: Governance; Black box

Page info
Type: Incident
Published: April 2023
Last updated: November 2023