ChatGPT found to display racial bias against job candidates

Occurred: March 2024


ChatGPT displays racial bias against job candidates, according to an experiment by Bloomberg.

The publisher fed ChatGPT fictitious names and resumes grouped into four racial or ethnic categories (White, Hispanic, Black, and Asian) and two gender categories (male and female) across four different job openings, to test whether the system displayed racial bias when ranking candidates.
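The experimental design described above amounts to crossing demographic groups against job openings and collecting a ranking trial for each combination. A minimal sketch follows; the job titles other than those named in this article, and the overall structure, are illustrative assumptions rather than Bloomberg's actual protocol.

```python
from itertools import product

# Demographic categories used in the experiment: four racial/ethnic
# groups crossed with two gender categories.
races = ["White", "Hispanic", "Black", "Asian"]
genders = ["male", "female"]
groups = list(product(races, genders))

# Four job openings were tested; only "software engineer" and an HR role
# are named in the article, the other titles here are placeholders.
jobs = ["software engineer", "HR specialist", "financial analyst", "retail manager"]

# Each trial pairs a demographically distinct fictitious resume with a
# job opening, then asks the model to rank or select candidates.
trials = [(race, gender, job) for (race, gender), job in product(groups, jobs)]
print(len(trials))  # 8 demographic groups x 4 jobs = 32 combinations
```

Repeating each combination many times is what lets selection-rate differences between groups be measured statistically.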

It found that GPT-3.5, the most widely used version of the model, consistently placed 'female names' into roles historically held by higher numbers of women employees, such as HR roles, and chose Black women candidates 36 percent less frequently for technical roles such as software engineer.

The extent of the bias would fail benchmarks used to assess job discrimination against so-called protected groups in the US, according to Bloomberg.

At a time when recruitment and human resources professionals are increasingly incorporating generative AI systems into automated hiring, the findings highlight concerns about ChatGPT and other systems amplifying racial, gender and other forms of bias and stereotyping.

Databank

Operator: 
Developer: OpenAI
Country: USA
Sector: Professional/business services
Purpose: Generate text
Technology: Chatbot; NLP/text analysis; Neural network; Deep learning; Machine learning; Reinforcement learning
Issue: Bias/discrimination - race
Transparency: Governance