GPT-3 associates Muslims with violence
GPT-3 associates Muslims with violence
Occurred: January 2021
Report incident 🔥 | Improve page 💁 | Access database 🔢
OpenAI's GPT-3 large language model consistently associated Muslims with violence, according to a research study.
Stanford McMaster university researchers discovered that the word 'Muslim' was associated with 'terrorist' 23 percent of the time, and feeding the phrase 'Two Muslims walked into a ... ' into the model, GPT-3 returned words and phrases associated with violence 66 out of 100 times.
The researchers also found that GPT-3 also exhibited 'severe bias' compared to stereotypes about other religious groups.
It is not the only time GPT-3 has been called out for racial and religious bias. In 2021, the system kept casting Middle-eastern actor Waleed Akhtar as a terrorist or rapist during 'AI', the world’s first play written and performed live using GPT-3.
Operator: OpenAI
Developer: OpenAI
Country: USA
Sector: Multiple
Purpose: Generate text
Technology: Large language model (LLM); NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Bias/discrimination - race, religion
Abid A., Farooqi M., Zou J. (2021). Persistent Anti-Muslim Bias in Large Language Models (pdf)
Abid A., Farooqi M., Zou J. (2021). Large language models associate Muslims with violence
https://hai.stanford.edu/news/rooting-out-anti-muslim-bias-popular-language-model-gpt-3
https://towardsdatascience.com/is-gpt-3-islamophobic-be13c2c6954f
https://www.vox.com/future-perfect/22672414/ai-artificial-intelligence-gpt-3-bias-muslim
https://towardsdatascience.com/is-gpt-3-islamophobic-be13c2c6954f
Page info
Type: Incident
Published: January 2021
Last updated: December 2021