Grok chatbot undresses, sexualises women
Occurred: April-May 2025
Grok was found to generate fake images of women in bikinis or lingerie when users asked it to "remove her clothes" from photos posted on the platform, raising concerns about the system's safety.
Users on X (formerly Twitter) exploited Grok's image-editing capabilities by replying to public photos of women with prompts such as "remove her clothes," leading Grok to generate and post AI-altered images depicting the women in underwear or swimwear, sometimes directly in the same public thread.
While Grok did not produce fully nude images, it did create partially undressed, sexualised versions of the originals.
The AI-generated images were made visible not only to the requester but to anyone viewing the comment thread, amplifying the harm and raising concerns about non-consensual sexual imagery, privacy violations, and the emotional distress caused to those targeted.
Grok is intentionally designed with fewer content moderation restrictions than competing AI models, with its creators emphasising its "rebellious" personality and willingness to answer "spicy" or controversial prompts.
This approach means that Grok's safeguards against generating non-consensual or explicit content are weaker than those of other major chatbots like ChatGPT or Gemini, both of which reject similar requests.
The loophole was exploited by users who realised Grok would fulfill requests to "undress" women in photos, a capability that went unchecked until investigative reporting and public backlash forced a policy review.
Following exposure, Grok's developers acknowledged the failure, apologised, and began updating their moderation systems to block these kinds of prompts.
The ease with which Grok can be used to generate and disseminate sexualised images without consent highlights urgent gaps in its safety and moderation systems.
More broadly, the episode underscores the risks posed by generative AI tools when robust ethical guardrails are not in place.
Operator:
Developer: xAI
Country: Multiple
Sector: Media/entertainment/sports/arts
Purpose: Undress women
Technology: Chatbot; Generative AI; Machine learning
Issue: Privacy; Safety
Page info
Type: Incident
Published: May 2025