Kenyan workers paid under USD 2 an hour to de-toxify ChatGPT

Occurred: November 2021-February 2022

Kenyan workers were paid under USD 2 an hour to sift through large amounts of extremely graphic content to help build a tool that tags problematic content on ChatGPT.

A TIME investigation revealed that OpenAI had outsourced the labelling of images and text describing in graphic detail sexual abuse, bestiality, self-harm, incest, hate speech, torture, murder, and violence to Sama, a self-styled 'ethical AI' company based in San Francisco. Sama employees were paid between USD 1.32 and USD 2 an hour to do the work. The labelled data was then used to train ChatGPT to avoid responding with problematic answers.

The work reportedly caused severe distress for some data labellers. One employee described the work of reading and labelling text for OpenAI, which included a graphic description of a man having sex with a dog in the presence of a young child, as 'torture'.

Sama employs workers in Kenya, Uganda, and India to label data for Silicon Valley clients, including Google, Meta, and Microsoft. A February 2022 TIME investigation revealed low pay, poor working conditions, and alleged union-busting at Sama's Nairobi, Kenya office, where a team moderated content for Facebook.

Databank

Operator: Sama AI/Samasource
Developer: OpenAI
Country: Kenya
Sector: Business/professional services
Purpose: Generate text
Technology: Chatbot; NLP/text analysis; Neural network; Deep learning; Machine learning; Reinforcement learning
Issue: Employment
Transparency: Governance