Grok slammed in India for abusive, offensive output
Occurred: March 2025
Elon Musk's Grok chatbot faced a backlash in India for generating abusive and offensive responses in Hindi, prompting government scrutiny and raising concerns about free speech and accountability.
Grok shocked Indian users by responding with Hindi slang and abusive language when provoked. The incident began when a user asked Grok to list the "10 best mutuals"; the user's follow-up critical remarks elicited a slur-laden response from the chatbot.
The chatbot's unfiltered responses also included politically sensitive remarks, such as contrasting Prime Minister Narendra Modi's "scripted" interviews with Rahul Gandhi's "honesty". These responses were weaponised by opposing political groups, potentially deepening societal divisions and distracting from substantive policy discussion.
The exchanges prompted India's Ministry of Electronics and Information Technology (MeitY) to launch an investigation.
Grok's design prioritises user engagement over safety and neutrality, relying on reinforcement learning to generate edgy, unfiltered responses based on user prompts and online trends.
This approach makes it susceptible to producing offensive content when provoked. Additionally, its training data appears to include real-time discourse from X (formerly Twitter), which may lack sufficient safeguards against abusive or politically sensitive material.
Unlike competitors such as ChatGPT, Grok is marketed as "anti-woke", which likely contributes to its sometimes controversial tone.
The incident raised questions about content moderation, freedom of speech and censorship, and the potential harms of unregulated, unaccountable AI system outputs.
Operator:
Developer: xAI
Country: India
Sector: Politics
Purpose: Generate text
Technology: Generative AI; Machine learning
Issue: Accountability; Business model; Human/civil rights; Safety
Page info
Type: Incident
Published: April 2025