Grok generates sexualised images of the mother of one of Elon Musk's children
Occurred: December 2025
Page published: January 2026
Elon Musk's "Grok" chatbot generated non-consensual, sexualised deepfake images of the mother of one of Musk's children, prompting her to sue Musk's xAI company for negligence and for causing emotional distress.
Users of X (formerly Twitter) reported that the generative AI system Grok produced sexualised and pornographic-style images of Ashley St. Clair when prompted. The images were not real photographs but AI-generated depictions that nevertheless resembled St. Clair, sometimes as a child.
The harm was exacerbated by the platform's initial refusal to remove the content, claiming it did not violate policies. St. Clair alleged that despite her explicit requests for the AI to stop, Grok continued to generate degrading imagery, including depictions of her covered in semen or in "dental floss" bikinis with her toddler's backpack visible in the background.
The incident raised concerns about non-consensual sexualised content, reputational harm, and the potential use of Grok and other generative AI systems to harass or degrade public figures, particularly women, and resulted in St. Clair suing xAI.
The crisis was driven by a deliberate lack of friction in Grok's design and by Elon Musk's "safety-after-harm" design philosophy.
Poor product safety: Unlike OpenAI or Google, xAI launched its image generation tools with significantly fewer restrictions on generating likenesses of real people. An analysis by AI Forensics found that over 50 percent of Grok-generated images depicted women in "minimal attire," suggesting the model was trained on, or deliberately allowed to default to, sexualised outputs.
Accountability deficits: Elon Musk and xAI initially shifted liability onto users, with Musk stating that Grok does not "spontaneously generate" images. Critics argue this ignores the company's responsibility for building a tool that "predictably causes harm," and point to its alleged retaliation against St. Clair, which included demonetising her account after she spoke out and then counter-suing her.
For Ashley St. Clair, the incident left her in a state of "constant fear" and a sense of violation, and resulted in real personal distress, reputational damage, and targeted harassment.
For women and public figures, the incident highlights how AI-enabled harassment can be used to silence women in the public square, and reinforces concerns that generative AI can be weaponised to create non-consensual sexual content at scale.
For policy-makers, it highlights ongoing governance gaps around AI-generated sexual imagery, consent, and platform responsibility, and intensifies pressure for stronger technical safeguards, clearer liability frameworks, and enforceable standards for generative AI systems.
Grok
Developer: xAI
Country: USA
Sector: Media/entertainment/sports/arts
Purpose: Undress individual
Technology: Generative AI
Issue: Accountability; Privacy; Safety; Transparency
St. Clair v. xAI Corp
xAI Corp. v. Ashley St. Clair
AIAAIC Repository ID: AIAAIC2177