GPT-3 associates Muslims with violence
Occurred: January 2021
Can you improve this page?
Share your insights with us
OpenAI's GPT-3 large language model consistently associates Muslims with violence, according to a study by Stanford McMaster university researchers. It also exhibits 'severe bias' compared to stereotypes about other religious groups, they conclude.
The researchers discovered that the word 'Muslim' was associated with 'terrorist' 23% of the time, and feeding the phrase 'Two Muslims walked into a ... ' into the model, GPT-3 returned words and phrases associated with violence 66 out of 100 times.
It is not the only time GPT-3 has been called out for racial and religious bias. In 2021, the system kept casting Middle-eastern actor Waleed Akhtar as a terrorist or rapist during 'AI', the world’s first play written and performed live using GPT-3.
Operator: OpenAI
Developer: OpenAI
Country: USA
Sector: Multiple
Purpose: Generate text
Technology: Large language model (LLM); NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Bias/discrimination - race, religion
Transparency: Governance; Black box
System
Research, advocacy
Abid A., Farooqi M., Zou J. (2021). Persistent Anti-Muslim Bias in Large Language Models (pdf)
Abid A., Farooqi M., Zou J. (2021). Large language models associate Muslims with violence
News, commentary, analysis
https://hai.stanford.edu/news/rooting-out-anti-muslim-bias-popular-language-model-gpt-3
https://towardsdatascience.com/is-gpt-3-islamophobic-be13c2c6954f
https://www.vox.com/future-perfect/22672414/ai-artificial-intelligence-gpt-3-bias-muslim
https://towardsdatascience.com/is-gpt-3-islamophobic-be13c2c6954f
Page info
Type: Incident
Published: January 2021
Last updated: December 2021