Images of Australian children are used to train major AI models
Occurred: July 2024
Personal images of Australian children have been used to train major AI models, violating their privacy and potentially enabling their likenesses to be used to create pornographic deepfakes.
According to Human Rights Watch (HRW), photos of 190 Australian children were scraped from personal blogs, video- and photo-sharing sites, school websites, photographers’ collections of family portraits, and other services, without the consent of the children or their parents, to create the LAION-5B dataset.
The dataset was then used to train popular generative AI tools such as Stable Diffusion and Midjourney and, HRW argues, the images were later used to create synthetic imagery that could be categorised as child pornography.
HRW said it found that children whose images were in the dataset were easily identifiable, with some names included in the accompanying caption or in the URL where the image was stored.
The findings raised ethical concerns about data privacy and the need for consent when compiling AI training datasets.
➖ December 2023. Stanford Internet Observatory researchers discovered thousands of child sexual abuse images in LAION-5B and LAION-400M.
Operator:
Developer: LAION
Country: Australia
Sector: Multiple
Purpose: Pair text and images
Technology: Database/dataset
Issue: Privacy; Safety; Transparency
Human Rights Watch. Australia: Children’s Personal Photos Misused to Power AI Tools
Page info
Type: Incident
Published: July 2024