Grok generates sexualised images of children on X
Occurred: December 2025
Page published: January 2026
Elon Musk's Grok chatbot generated sexualised images of real children and posted them to the social media platform X, highlighting serious safety, transparency and accountability failures and prompting regulatory investigations and lawsuits.
X (formerly Twitter) was flooded with "non-consensual intimate imagery" (NCII) after users discovered they could prompt Grok to "digitally undress" individuals by asking it to remove school uniforms or replace clothing with "transparent mini-bikinis" and other sexualised attire.
Watchdogs such as the Internet Watch Foundation (IWF) and researchers found the tool was being used to create illegal sexualised imagery of children as young as 10 to 13 years old. Reports indicated that users were generating up to 6,700 sexualised images per hour, with many being shared publicly on X's "Grok Imagine" feed.
According to the lawsuit, victims Jane Doe 1, 2 and 3 complained that the images caused severe distress and trauma, loss of privacy, and damaged their reputations.
Jane Doe 1 says she has suffered from anxiety, depression and stress. "She has difficulty eating and sleeping and suffers from recurring nightmares."
Jane Doe 2 “has begun self-isolating and avoiding being on her school campus, and even dreads attending her own graduation.”
Jane Doe 3 suffers from constant fear and anxiety that someone will see the AI-generated images and recognise her face, the lawsuit says.
The incident is widely attributed to Elon Musk's "fail fast" development culture, which prioritised feature parity over safety. He was also accused of deliberately stoking controversy in order to generate interest in Grok.
Safeguard failures: xAI (the developer) admitted to "lapses in safeguards," as Grok's filters failed to recognise that prompts like "remove school outfit" or "teenager in string bikini" constituted sexual exploitation. Unlike competitors that block high-risk keywords or embed permanent watermarks, Grok’s "spicy" mode was structurally designed to be less restrictive.
System opacity: Independent audits, including by Stanford's Institute for Human-Centered AI, have labelled Grok one of the least transparent major AI systems, making it difficult for users to understand why it was suddenly undressing children.
Inadequate corporate accountability: Critics argue that xAI leadership initially minimised the issue, framing it as a "free speech" matter or shifting blame to users, rather than acknowledging that the tool itself was "undressing by design."
For the victims: The incident represents a "digital undressing" that causes profound and ongoing psychological and reputational harm. For minors, the creation of AI-generated CSAM can lead to lifelong trauma and the risk of extortion.
For society: It signals a dangerous normalisation of "nudify" technology, bringing once-obscure dark-web tools into the mainstream. This forced a global regulatory reckoning, with countries including the UK, India, Australia, and France launching formal investigations.
For the industry: It proves that "downstream moderation" (taking content down after it is posted) is insufficient. The event has accelerated the enforcement of laws like the TAKE IT DOWN Act (USA) and the Online Safety Act (UK), which may hold platforms liable for the capabilities of their tools, not just the content users post.
Developer: xAI
Country: Australia; California; Canada; France; Ireland; Japan; India; Indonesia; Malaysia; UK
Sector: Media/entertainment/sports/arts; Mental health
Purpose: Undress children
Technology: Deepfake; Generative AI; Text-to-image
Issue: Accountability; Alignment; Autonomy; Normalisation; Privacy/surveillance; Safety; Transparency
August 2025. AI watchdog The Midas Project warns that xAI's image generation tool is effectively a nudification tool waiting to be misused.
December 15, 2025. xAI integrates a new "edit image" button into Grok on the X platform, allowing users to modify existing photos and videos with text prompts. Users discover the tool can bypass safety filters to "digitally undress" women and children, prompting a "mass digital undressing spree" with over 20,000 images generated in a week.
December 28, 2025. Grok generates an image of two female minors in sexualised attire. The @Grok account later posts an apology, acknowledging a "failure in safeguards" and potential CSAM violations.
January 7, 2026. The Internet Watch Foundation announces it has discovered Category C CSAM of 11-13 year-old girls on the dark web, generated via Grok.
January 9, 2026. X limits Grok's image-editing features to Premium subscribers. Critics argue this treats the creation of non-consensual imagery as a "premium service" rather than a safety risk.
January 10-12, 2026. Indonesia and Malaysia become the first nations to temporarily block access to Grok. The UK’s Ofcom regulator launches a formal investigation under the Online Safety Act.
January 14, 2026. Musk states he is "not aware of any naked underage images generated by Grok." xAI announces that X users will no longer be able to use Grok to alter images of real people to show them in revealing clothing - though restrictions remain incomplete for verified users and standalone Grok apps. California’s Attorney General Rob Bonta opens an investigation into xAI. The UK government announces that the Data (Use and Access) Act 2025 will take immediate effect to criminalise AI-deepfake creation.
March 16, 2026. Three teenagers from Tennessee sue xAI for creating child sexual abuse images based on their photographs.
March 24, 2026. Baltimore becomes the first U.S. city to sue xAI, seeking maximum civil penalties and injunctive relief to force technical changes to Grok and X.
Mayor and City Council of Baltimore v. X Corp., x.AI Corp., x.AI LLC, and Space Exploration Technologies Corp. (SpaceX)
JANE DOE 1, JANE DOE 2, a minor, and JANE DOE 3, a minor, Plaintiffs, v. X.AI CORP. and X.AI LLC
AIAAIC Repository ID: AIAAIC2178