AI systems used to conduct sexual violence against 1 in 5 Brazilian children
Occurred: 2024-2025
Page published: March 2026
One in five Brazilian adolescents has been a victim of online sexual violence, with a significant rise in harms driven by AI-generated "deepfake" abuse material and the scraping of real children's photos for AI training, according to a new report.
A UNICEF study concludes that about 19 percent of Brazilian children and adolescents aged 12–17 (around 3 million young people) were victims of technology-facilitated sexual exploitation or abuse in a 12-month period from 2024 to early 2025.
Per the researchers, generative AI is being used to create sexually explicit images and videos using a child's likeness (deepfakes), often based on family social media photos, which are then shared and weaponised for harassment, extortion, and humiliation. Related harms include grooming, coercion to produce sexual content, non-consensual sharing of intimate images, and exposure to unwanted sexual material on social networks, messaging apps, and online games.
Studies report that survivors of tech-facilitated sexual violence are over five times more likely to self-injure or consider suicide than non-victims.
Many children do not report the abuse due to shame, fear of not being believed, not knowing where to report, or fear that others will find out, thereby leaving the perpetrators unpunished and harm ongoing.
The crisis is rooted in the lack of meaningful safeguards built into many generative AI systems, the prioritisation of speed over ethical governance, the normalisation of digital violence, and a lack of corporate accountability.
Large-scale, open-source training datasets such as LAION-5B inadvertently included personal photos of Brazilian children, providing the "raw material" for AI to clone their likenesses.
Once AI models like Stable Diffusion are released, they cannot easily be "un-trained" or updated to remove harmful data, allowing bad actors to use them offline or on unregulated platforms.
Experts argue that tech companies prioritised rapid market release over "safety by design," failing to implement robust filters to prevent the generation of CSAM (Child Sexual Abuse Material).
The report found that 12 percent of victims did not report the abuse because they felt it was "not serious enough," suggesting a dangerous normalisation of digital violence.
For affected children, the harms are severe and long-lasting: psychological trauma, shame, anxiety, depression, self-harm and suicidality risks, bullying and social exclusion, as well as lasting reputational damage, since images can circulate indefinitely.
For society, the normalisation and low-cost scalability of AI-assisted sexual abuse undermine children's rights to safety, privacy, and participation online, and risk a large cohort of young people growing up with deep digital trauma.
For policymakers, this activity catalysed a major regulatory shift, with Brazil passing Law 15211 (the Digital Statute for Children and Adolescents) and the country's Supreme Court ruling that digital service providers have an "active duty" to protect minors. It signals a move away from reactive moderation toward proactive legal liability for AI developers who fail to prevent their systems from being used as "abuse machines."
UNICEF issued urgent calls for governments to expand definitions of child sexual abuse material to include AI-generated content and criminalise its creation, procurement, possession and distribution, while AI developers should implement safety-by-design approaches and robust guardrails to prevent misuse of AI models.
There is also a need to invest in prevention, digital literacy, victim support services, and safer AI research that helps detect and remove abusive material rather than create it.
For the AI industry: Every actor in the AI value chain, from dataset providers to model developers, should embed safety-by-design, including pre-release safety testing for open-source models to reduce misuse or illegal use.
FLUX
PonyXL
Developer: Black Forest Labs; LAION; Stability AI
Country: Brazil
Sector: Personal
Purpose: Produce child pornography
Technology: Bot/intelligent agent; Deepfake; Generative AI
Issue: Accessibility; Accountability; Autonomy; Consent; Normalisation; Safety
June 2024. Human Rights Watch exposes how the personal photos of 358 Brazilian children were scraped into AI training datasets, notably LAION-5B, without consent.
September 2024. LAION removes identified children's photos, but acknowledges that already-trained models still "remember" the data.
2024-early 2025. UNICEF, ECPAT and Interpol conduct household surveys across Brazil on technology-facilitated child sexual exploitation.
June 2025. Brazil's Supreme Court rules that platforms have an active duty to protect children from online harm.
September 2025. Brazil passes the Digital Statute for Children and Adolescents (Law 15211).
March 2026. UNICEF and partners publish the "Disrupting Harm in Brazil" report.
UNICEF. "Deepfake abuse is abuse," UNICEF warns
ECPAT, INTERPOL, UNICEF. Disrupting Harm in Brazil
AIAAIC Repository ID: AIAAIC2237