Dark web predators develop AI images of real child victims

Occurred: October 2023

Examples of AI-generated child sexual abuse material (CSAM) depicting real victims of sexual abuse have been discovered online, fueling concerns about how easily users intent on producing abusive imagery can bypass the guardrails of AI text-to-image generators.

A report (pdf) by UK non-profit the Internet Watch Foundation (IWF) found over 11,000 AI-generated CSAM images on one dark web forum in a single month, of which 2,978 broke UK law by depicting child sexual abuse. The IWF said the only image generator being discussed on the forum was Stable Diffusion, a free, open source system that generates images from text descriptions, or prompts.

The IWF warned that the most convincing imagery would be difficult even for trained analysts to distinguish from real photographs, and that text-to-image technology will only improve, posing further obstacles for the IWF and law enforcement agencies. Commentators also pointed out that bad actors can put open source models such as Stable Diffusion to nefarious use by downloading the software and training it to do whatever they want.


Developer: Stability AI
Country: UK
Sector: Media/entertainment/sports/arts
Purpose: Generate images
Technology: Text-to-image; Diffusion model; Neural network; Deep learning; Machine learning
Issue: Safety
Transparency: Governance