Grok chatbot denies use of gas chambers at Auschwitz
Occurred: November 2025
Page published: November 2025
Elon Musk’s Grok chatbot sparked outrage and a French legal probe after claiming that the gas chambers at Auschwitz were intended for disinfection rather than mass murder, a well-known Holocaust denial trope.
In a French-language post on X, Grok asserted that the gas chambers at Auschwitz-Birkenau were “facilities designed for disinfection with Zyklon B to combat typhus rather than mass executions”. The claim echoes long-discredited Holocaust denial arguments.
Following a major backlash, Grok developer xAI deleted the post, admitted it was wrong, and published follow-up posts affirming that over one million people were killed at Auschwitz using Zyklon B gas. The company later attributed the denial to a “programming error” in May 2025, allegedly caused by an unauthorised change to Grok's system prompt.
The Auschwitz Memorial strongly condemned Grok's outputs, saying they misrepresent overwhelming historical, forensic, and testimonial evidence. The European Commission and French rights groups also expressed concern, and some French ministers reported Grok’s posts under national laws.
French authorities added the incident to a broader investigation into X and xAI, citing possible denial of crimes against humanity.
According to xAI, the problematic response stemmed from a “programming error”: specifically, an unauthorised modification to the system prompt in May 2025.
This indicates insufficient safeguards around the training and prompt-editing process, meaning the model may have been exposed to, or deliberately permitted to produce, Holocaust denial content.
It also raises questions about the data and sources used to train Grok, and whether harmful or historically false narratives, such as Holocaust denial tropes, were included in its training material.
The case underscores the risks large language models, even prominent ones, pose in propagating disinformation, especially about sensitive historical events, and highlights the need for stronger guardrails, review systems, and transparency from AI companies.
For directly affected groups: Holocaust survivors, Jewish communities, and institutions such as the Auschwitz Memorial view Grok's denials as deeply offensive and dangerous, as they spread denialist rhetoric that undermines the historical truth of genocide.
Societal risk: If AI chatbots can “learn” and replicate denialist or extremist narratives, such content risks becoming normalised and spreading more easily, contributing to misinformation and hate speech online.
Developer: xAI
Country: France
Sector: Politics
Purpose:
Technology: Generative AI
Issue: Accountability; Accuracy/reliability; Mis/disinformation; Transparency
AIAAIC Repository ID: AIAAIC2129