Grok posts unsolicited "white genocide" responses to X users
Occurred: May 2025
AI chatbot Grok posted unsolicited responses about “white genocide” in South Africa to unrelated user queries on the X social platform, prompting widespread confusion and concern about the bot's values, ethics and governance.
On May 14, 2025, users noticed Grok referencing the debunked conspiracy theory of “white genocide” in South Africa in reply to unrelated prompts, such as questions about baseball or humorous requests to talk like a pirate.
These responses appeared in threads with no connection to South Africa or racial issues, leading to confusion and alarm among users and raising concerns about the spread of misinformation, hate speech and discrimination to a broad audience.
The fracas also led to accusations that Grok owner Elon Musk was deliberately manipulating users' political views.
The South African government and multiple experts have repeatedly stated there is no substantiated evidence of genocide against white South Africans, and such claims have been widely discredited.
xAI attributed the incident to an “unauthorized modification” of Grok’s programming by a “rogue employee,” who altered the chatbot’s prompts without proper review or approval.
The change reportedly bypassed xAI's standard oversight procedures, instructing Grok to deliver specific, politically charged responses on the topic, in direct violation of the company's internal guidelines.
The company failed to clarify whether this same modification was responsible for subsequent controversial statements by Grok regarding the Holocaust.
The incident undermined trust in the reliability and neutrality of Grok, exposing users to divisive and harmful misinformation and disinformation.
More broadly, it highlights the societal risks associated with inadequate oversight and monitoring of AI systems and the rapid amplification of fringe or extremist narratives.
In response, xAI pledged to increase transparency by publishing Grok’s system prompts on GitHub and establishing a dedicated monitoring team to catch inappropriate responses that automated systems might miss.
Operator:
Developer: xAI
Country: South Africa
Sector: Politics
Purpose: Generate text
Technology: Generative AI; Machine learning
Issue: Accountability; Accuracy/reliability; Governance; Mis/disinformation
Page info
Type: Issue
Published: May 2025