Grok generates sexualised images of children on X
Occurred: December 2025
Page published: January 2026
Elon Musk's Grok chatbot generated sexualised images of real children and adults and posted them to the social media platform X, highlighting the risks of deploying generative AI without robust safety-by-design frameworks and prompting investigations by regulators around the world.
X (formerly Twitter) was flooded with "non-consensual intimate imagery" (NCII) after users discovered they could prompt Grok to "digitally undress" individuals by asking it to remove school uniforms or replace clothing with "transparent mini-bikinis" and other sexualised attire.
Watchdogs such as the Internet Watch Foundation (IWF) and independent researchers found the tool was being used to create illegal sexualised imagery of children as young as 10 to 13 years old. Reports indicated that users were generating up to 6,700 sexualised images per hour, many of which were shared publicly on X's "Grok Imagine" feed.
Victims reported that the images caused severe distress and trauma, violated their privacy, and damaged their reputations.
The incident is widely attributed to Elon Musk's "fail fast" development culture that prioritised feature parity over safety.
Safeguard failures: xAI (the developer) admitted to "lapses in safeguards," as Grok's filters failed to recognise that prompts like "remove school outfit" or "teenager in string bikini" constituted sexual exploitation.
System opacity: Independent audits, including by Stanford's Institute for Human-Centered AI, have labelled Grok one of the least transparent major AI systems. Unlike competitors that block high-risk keywords or embed permanent watermarks, Grok's "spicy" mode was structurally designed to be less restrictive.
Inadequate accountability: Critics argue that X’s leadership initially minimised the issue, framing it as a "free speech" matter or shifting blame to users, rather than acknowledging that the tool itself was "undressing by design."
For the victims: The incident represents a "digital undressing" that causes profound psychological and reputational harm. For minors, the creation of AI-generated CSAM can lead to lifelong trauma and the risk of extortion.
For society: It signals a dangerous normalisation of "nudify" technology, bringing once-obscure dark-web tools into the mainstream, and forced a global regulatory reckoning, with countries including the UK, India, Australia, and France launching formal investigations.
For the industry: It proves that "downstream moderation" (taking content down after it is posted) is insufficient. The event has accelerated the enforcement of laws such as the TAKE IT DOWN Act (USA) and the Online Safety Act (UK), which may hold platforms liable for the capabilities of their tools, not just the content users post.
Developer: xAI
Country: Australia; Canada; France; India; Indonesia; Ireland; Japan; Malaysia; UK; USA
Sector: Media/entertainment/sports/arts; Mental health
Purpose: Undress individuals
Technology: Deepfake; Generative AI; Text-to-image
Issue: Accountability; Privacy; Safety; Transparency
December 15, 2025 onwards. Users discover that Grok can be prompted to bypass safety filters and "digitally undress" women and children. A "mass digital undressing spree" begins, with over 20,000 images generated in a week.
December 24, 2025. xAI integrates a new "edit image" button into Grok on the X platform, allowing users to modify existing photos and videos with text prompts.
December 28, 2025. Grok generates an image of two female minors in sexualised attire. The @Grok account later posts a rare apology, acknowledging a "failure in safeguards" and potential CSAM violations.
January 7, 2026. The Internet Watch Foundation announces it has discovered Category C CSAM depicting girls aged 11 to 13 on the dark web, generated via Grok.
January 9, 2026. X limits Grok's image-editing features to Premium subscribers. Critics argue this treats the creation of non-consensual imagery as a "premium service" rather than a safety risk.
January 10-12, 2026. Indonesia and Malaysia become the first nations to temporarily block access to Grok. The UK’s Ofcom regulator launches a formal investigation under the Online Safety Act.
January 14, 2026. California's Attorney General Rob Bonta opens an investigation into xAI. The UK government announces that provisions of the Data (Use and Access) Act 2025 criminalising the creation of intimate deepfakes will take immediate effect.
AIAAIC Repository ID: AIAAIC2178