Grok repeats false far-right rumours about 2015 Bataclan Paris terror attacks
Occurred: November 2025
Page published: November 2025
Elon Musk's Grok chatbot repeated debunked disinformation and conspiracy theories about the 2015 Bataclan terror attacks in Paris, raising concerns about apparent manipulation of the bot's political outputs by its right-leaning owner and, more broadly, about misinformation created and amplified by AI.
Directly following the 10th anniversary commemorations of the attack on Paris' Bataclan theatre, in which Islamic State militants killed 90 people during a concert on November 13, 2015, the AI chatbot surfaced conspiracy theories claiming the attack had been staged and involved "crisis actors".
Grok also repeated fabricated claims about torture and mutilation that have been repeatedly disproven by French investigators.
The false claims echoed narratives spread at the time by far-right groups seeking to discredit media coverage and official accounts of the tragedy.
The claims surfaced when users posted screenshots showing Grok supplying the disinformation in response to queries about the attacks.
The failure stems from well-known issues with large language models trained on unvetted internet content, particularly when the parent platform (X) hosts large volumes of extremist misinformation.
Grok's design emphasises a "rebellious" or "edgy" personality with fewer restrictions on what it will output, further weakening its safeguards.
Limited transparency about its training data, safety filters, and reinforcement processes, combined with an apparent lack of fact-checking or domain-expert review, likely allowed debunked conspiracy theories about the Bataclan attacks to propagate through the model.
For survivors and families, the resurfacing of false allegations risks emotional harm and public confusion about an already well-documented tragedy.
For broader society, the episode highlights how AI systems can amplify extremist narratives at scale, undermining public trust in factual history and inflaming social divisions.
It also underscores the urgent need for stronger accountability in AI deployment, especially when integrated into major social platforms with inherent misinformation risks.
Developer: xAI
Country: France
Sector: Media/entertainment/sports/arts; Politics
Purpose:
Technology: Generative AI
Issue: Accountability; Mis/disinformation; Transparency
AIAAIC Repository ID: AIAAIC2128