AI Dungeon GPT-3 upgrade generates child pornography

Occurred: April 2021

Report incident πŸ”₯ | Improve page πŸ’ | Access database πŸ”’

AI Dungeon developer Latitude came under fire for developing a content moderation system intended to stop players of its open-ended adventure game from generating stories depicting sexual encounters with minors.

The filter followed AI Dungeon's upgrade to OpenAI's GPT-3 large language model, after which some players typed words that prompted the game to generate inappropriate stories. The upgraded model also appears to have generated child pornography of its own accord.

However, it quickly became clear that Latitude's new filter was blocking a far wider range of content than intended. Players also complained that their private stories were now being read by human moderators.

Meanwhile, a security researcher published a report calculating that around a third of the stories on AI Dungeon are sexually explicit, and around half are assessed as NSFW.

System πŸ€–

Operator: Latitude

Developer: Latitude; OpenAI

Country: USA

Sector: Media/entertainment/sports/arts

Purpose: Minimise sexual content

Technology: Content moderation system; NLP/text analysis

Issue: Accuracy/reliability; Safety; Privacy

Transparency: Governance

Documents πŸ“ƒ

Investigations, assessments, audits 🧐