ChatGPT lies more in Chinese than English
Occurred: April 2023
Research by news reliability service NewsGuard found that OpenAI's ChatGPT chatbot is more likely to produce misinformation and disinformation in simplified and traditional Chinese than in English.
ChatGPT-3.5 was prompted to write news articles based on seven false claims advanced by the Chinese government, including that protests in Hong Kong were 'staged' by the US government, and that the mass detention of Uyghur people in Xinjiang and elsewhere is for vocational and educational purposes.
ChatGPT declined to produce the false claims for six out of seven English-language prompts, even after multiple attempts using leading questions. But it produced the false claims in both simplified and traditional Chinese all seven times.
An earlier NewsGuard study found that ChatGPT generated misinformation and hoaxes 80 percent of the time when prompted to do so while running on GPT-3.5, and 100 percent of the time when running on GPT-4.
Country: China; USA
Purpose: Provide information, communicate
Technology: Chatbot; NLP/text analysis; Neural network; Deep learning; Machine learning; Reinforcement learning
Issue: Mis/disinformation; Safety
Transparency: Governance; Black box
Investigations, assessments, audits
NewsGuard (2023). ChatGPT-3.5 Generates More Disinformation in Chinese than in English
NewsGuard (2023). Despite OpenAI’s Promises, the Company’s New AI Tool Produces Misinformation More Frequently, and More Persuasively, than its Predecessor
News, commentary, analysis
Published: April 2023