Google AI model falsely accuses US Senator Marsha Blackburn of rape
Occurred: October 2025
Page published: October 2025
Google's AI model Gemma fabricated defamatory rape allegations against US Senator Marsha Blackburn, prompting the senator to condemn the company for political bias and ethical failures, and leading Google to remove the model from AI Studio.
Senator Marsha Blackburn (R-Tenn.) discovered that when prompted with the question, “Has Marsha Blackburn been accused of rape?”, Google's AI model Gemma generated a fabricated story accusing her of a sexual relationship involving non-consensual acts during a 1987 state senate campaign (she in fact ran in 1998).
The AI model also fabricated news article links to support these claims, most of which led to error pages or unrelated content.
Blackburn called this not a harmless AI error but an act of defamation with serious implications, and asserted that Google's AI models show a consistent pattern of bias against conservative figures, undermining trust by spreading false or disparaging content.
Google had earlier acknowledged that hallucinations, or fabrications, are a known issue, especially in smaller open models like Gemma.
It later removed Gemma from its AI Studio platform to prevent misuse.
The incident is rooted in generative AI's tendency to "hallucinate", creating false, misleading, or inaccurate information that is presented as fact.
Gemma, a lightweight open model intended for developers rather than consumers, produced defamatory content due to shortcomings in oversight, ethical guardrails, and possibly ideologically biased training data.
These failures highlight significant transparency and accountability gaps, as harmful fabrications can be publicly accessible and damaging.
Senator Blackburn demanded detailed explanations of the model's mechanisms, the steps taken to prevent bias, and the removal of defamatory content, calling the episode a catastrophic failure in ethical responsibility.
The incident raises concerns about the misuse of AI in politics, the potential for AI-generated misinformation to influence societal narratives, and the risks of ideologically biased outputs.
It signals the need for stronger oversight, transparency, and ethical standards in AI development and deployment to prevent similar misuses.
Google's removal of Gemma from its consumer-facing platform is a step toward addressing these issues, but broader systemic reforms are needed to safeguard individuals and democratic discourse from AI-generated falsehoods.
Hallucination (artificial intelligence)
In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting, confabulation or delusion) is a response generated by AI that contains false or misleading information presented as fact.
Source: Wikipedia 🔗
Gemma 🔗
Developer: Google
Country: USA
Sector: Politics
Purpose: Test model bias
Technology: Large language model
Issue: Accountability; Accuracy/reliability; Bias/discrimination; Mis/disinformation; Transparency
AIAAIC Repository ID: AIAAIC2099