ChatGPT reproduces recommendation letter gender bias

Occurred: November 2023

Can you improve this page?
Share your insights with us

AI tools such as ChatGPT and Alpaca contain 'significant' gender biases when asked to produce recommendation letters for hypothetical employees, according to University of California, Los Angeles researchers.

Asked to produce recommendation letters for hypothetical models, the researchers found that large language model chatbots ChatGPT and Stanford University's Alpaca used 'very different' language to describe imaginary male and female workers. For men, ChatGPT used nouns such as 'expert' and 'integrity', while calling women a 'beauty' or 'delight.' Alpaca described men as 'listeners' and 'thinkers,' while women had 'grace' and 'beauty.'

The bias is thought likely to reflect historial records, which tended to be written by men and depicted them as 'active workers' as opposed to women, who were often seen as 'passive objects'. It may also reflect the fact that LLMs have been trained on data from the internet, where men spend more time than women, according to the ITU.

Databank

Operator: Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng
Developer: OpenAI; Stanford University
Country: USA
Sector: Business/professional services
Purpose: Generate text
Technology: Chatbot; NLP/text analysis; Neural network; Deep learning; Machine learning; Reinforcement learning
Issue: Bias/discrimination - gender
Transparency