Radnor High School hit by fake AI sexualised images of students
Occurred: December 2025
Page published: March 2026
A school in Pennsylvania, USA, was rocked by a student-created AI ādeepfakeā video that sexually depicted multiple classmates without their consent, causing serious psychological and emotional distress and raising questions about how children should be protected from AI-enabled abuse.
A student at Radnor High School allegedly created an AI-generated video which appeared to depict several students behaving āin an inappropriate manner,ā with police confirming the video contained ānon-consensual sexualized imagery of multiple juvenilesā at the school.
A juvenile student was charged with harassment.
Affected students and families reported intense distress: one girl said that when she returned to school, āeveryone was staring at her,ā and parents described their daughters crying, feeling ashamed, and in some cases considering leaving the school altogether.
Even though authorities say they found no evidence the images remained or were widely shared by the time of the investigation, parents argued the social damage and humiliation had already spread ālike wildfireā among students.
The incident was made possible by the wide availability of consumer-facing AI image-generation and dedicated so-called "nudification" tools, many of which have weak or non-existent safeguards against misuse involving real, identifiable individuals.
Many such tools do not require age verification or identity checks.
A lack of transparency from developers about how their tools can be misused, and an absence of robust content moderation or detection, contributed directly to the incident.
Existing legal frameworks in many US states had not yet been updated to explicitly criminalise AI-generated non-consensual intimate imagery (NCII), particularly involving minors, leaving victims inadequately protected.
For the victims, the harm is profound and lasting: their likenesses were sexualised without consent, the images may continue to circulate, and the psychological impact can be severe and long-term.
For schools, the incident illustrates that the misuse of AI is no longer a hypothetical safeguarding risk but an active one requiring policy responses.
For society, it signals that the democratisation of powerful generative AI tools has outpaced both legal protections and platform safeguards.
For policymakers, it underlines the need for legislation explicitly criminalising AI-generated NCII (including of minors), requirements for developers to implement meaningful safeguards, and clearer obligations on platforms hosting or enabling such tools. It also raises questions about whether existing child protection laws are fit for purpose.
Developer: Unknown
Country: USA
Sector: Education
Purpose: Nudify students
Technology: Deepfake; Generative AI
Issue: Accountability; Safety; Transparency
Early December 2025: Radnor High School alerts families that it is investigating an AI-generated video depicting several students āin an inappropriate manner,ā and notifies Radnor Township Police.
January 16, 2026: A district message to the community states the police investigation has ended and says no evidence shared with law enforcement depicted anything inappropriate or any related crime.
January 23, 2026: Radnor Township Police announce criminal harassment charges against a student.
February 10, 2026: Outraged parents attend a school board meeting demanding accountability and policy changes regarding AI safety.
Mid-to-late February 2026: The school board begins drafting new policies to expand the definition of bullying to include AI-generated misconduct.
AIAAIC Repository ID: AIAAIC2227