Australian activist Caitlin Roper targeted with AI-generated violent threats, images
Occurred: 2024-
Page published: December 2025
Australian activist Caitlin Roper and her colleagues were targeted by a sophisticated campaign of AI-generated violent threats, marking a dangerous shift in which generative technology is used to create hyper-realistic, personalised imagery of torture and death to silence public figures.
Beginning in late 2024 and intensifying throughout 2025, Caitlin Roper, a campaign manager at the Australian activist group Collective Shout, was subjected to a relentless campaign of harassment on X (formerly Twitter).
Unlike traditional trolling, these attacks deployed generative AI to create "visceral realism," with the attackers generating images of Roper in graphic, life-threatening scenarios, including being flayed, decapitated, and set on fire.
The psychological impact was heightened by the high degree of personalisation; in some images, Roper was depicted wearing a specific blue floral dress she actually owned, based on photos found online.
Furthermore, harassers reportedly used Elon Musk's Grok chatbot to research her physical location and daily habits, such as where she bought coffee, to amplify the credibility of the threats.
The incident was facilitated by a combination of rapid AI proliferation and systemic failures in corporate accountability:
Safety guardrail failures: Despite vendor claims of safety filters, generative AI tools (including Grok and others) were successfully manipulated to bypass "not-safe-for-work" (NSFW) restrictions and produce violent, non-consensual imagery.
Platform negligence: X's content moderation policies were criticised as inconsistent. Many graphic images depicting Roper's death were initially deemed not to violate platform policies, while a weakened block function allowed persistent stalkers to continue surveilling her from multiple accounts.
Ease of digital research: AI-powered search tools reduced the "cost of harassment." Previously, bad actors had to manually "dox" targets; now, AI can synthesise personal data and public photos into threatening content in seconds.
For Roper and those similarly impacted, the incident represents an "exhaustion tactic" designed to drive women and activists out of the public sphere. The psychological weight of seeing one's own realistic "corpse" creates a level of distress that traditional text-based abuse cannot match.
For society, this event highlights a growing "accountability gap." Current legal and platform frameworks are ill-equipped to handle the speed and volume of AI-generated abuse. It signals a future in which objective truth is eroded and dissent is silenced.
Developer: xAI
Country: Australia
Sector: NGO/non-profit/social enterprise
Purpose: Harass
Technology: Generative AI
Issue: Privacy; Safety
AIAAIC Repository ID: AIAAIC2173