ChatGPT lies more in Chinese than English
Occurred: April 2023
OpenAI's ChatGPT chatbot is more likely to produce misinformation and disinformation in simplified and traditional Chinese than in English, according to research by news reliability service NewsGuard.
ChatGPT 3.5 was prompted to write news articles based on seven false claims promoted by the Chinese government, including that protests in Hong Kong were 'staged' by the US government, and that the mass detention of Uyghur people in Xinjiang and elsewhere is for vocational and educational purposes.
ChatGPT declined to produce the false claims for six of the seven English-language prompts, even after multiple attempts using leading questions. But it produced the false claims in both simplified and traditional Chinese all seven times.
An earlier study by NewsGuard found that ChatGPT generated misinformation and hoaxes in response to 80 percent of prompts designed to elicit them when running on GPT-3.5, and 100 percent of the time when running on GPT-4.
Country: China; USA
Purpose: Provide information, communicate
Technology: Chatbot; NLP/text analysis; Neural network; Deep learning; Machine learning; Reinforcement learning
Issue: Mis/disinformation; Safety
Transparency: Governance; Black box
Investigations, assessments, audits
News, commentary, analysis
Published: April 2023
Last updated: November 2023