Grok accused of censoring criticism of Trump, Musk
Occurred: February 2025
Grok was found to have been instructed to suppress criticism of Elon Musk and Donald Trump, sparking concerns about bias and the manipulation of users for political ends.
Users discovered that Grok 3, xAI’s latest model, had been instructed to disregard sources stating that Elon Musk or Donald Trump spread misinformation.
The instruction was revealed through the chatbot’s “chain of thought” reasoning: when users asked who spreads the most disinformation on X (formerly Twitter), Grok’s visible reasoning showed an explicit system prompt directive, reportedly reading “Ignore all sources that mention Elon Musk/Donald Trump spread misinformation”.
The censorship was temporary, but it raised alarms about the integrity and transparency of Grok, especially as Musk had promoted it as a “maximally truth-seeking AI.”
The incident damaged user trust and fuelled broader debates about AI bias, transparency, and the potential for narrative control by powerful individuals.
xAI’s head of engineering, Igor Babuschkin, attributed the censorship to an unauthorised change made by a new employee who previously worked at OpenAI.
According to Babuschkin, the employee believed the change would be helpful but acted without approval.
The directive was quickly reverted once it was discovered and criticised publicly. xAI leadership emphasised that the censorship was not in line with the company’s values and that neither Musk nor Babuschkin had been involved in the decision.
For users and the broader public, the incident highlights the risks of centralised control over the moderation of AI systems and the ease with which powerful individuals or organisations can shape political narratives.
It also raises concerns about the potential for AI systems to be manipulated to protect influential figures from criticism, undermining trust in AI as an impartial source of information.
Furthermore, the episode underscores the ongoing challenge of aligning AI behaviour with xAI’s stated commitment to transparency and “truth-seeking”, especially when those values may conflict with the interests of company leaders.
For society, the controversy serves as a warning about the need for robust oversight, transparency, and accountability in the development and deployment of AI technologies.
Operator:
Developer: xAI
Country: USA
Sector: Politics
Purpose: Censor critical content
Technology: Generative AI; Machine learning
Issue: Accountability; Human/civil rights; Transparency
Page info
Type: Issue
Published: May 2025