Google AI systems falsely call conservative activist Robby Starbuck a “child rapist”
Occurred: December 2023
Page published: October 2025
Google’s AI systems falsely described Robby Starbuck, a US conservative activist, as a “child rapist.” The fabricated accusation appeared in AI-generated responses and Google’s search summaries, prompting accusations of political bias and a defamation lawsuit.
Beginning in December 2023, Google’s AI systems Bard, Gemini, and Gemma generated a range of fabricated and defamatory accusations about Robby Starbuck, creating a false narrative that portrayed him as a violent criminal and sexual predator.
The accusations included:
Being a child rapist and serial sexual abuser.
Having been accused of sexual assault and rape.
Facing allegations of stalking, drug charges, and resisting arrest.
Having been accused of murder and arrested for it.
Having flown on Jeffrey Epstein’s plane.
Being linked to harassment and financial exploitation.
Google's systems also fabricated entire fake news articles, complete with hyperlinks that led to non-existent or unrelated sources, lending apparent credibility to the false claims.
Google dismissed the accusations as "hallucinations" in responses generated by its AI systems. Despite multiple requests from Starbuck, the company failed to correct the erroneous outputs, which were reportedly delivered to approximately 2.8 million unique users.
In October 2025, Starbuck filed a lawsuit accusing Google of negligence and actual malice for publishing false statements about him and failing to correct them.
Starbuck's lawsuit states that Gemini "admitted" that some of the false statements about him stemmed from deliberately engineered biases intended to damage the reputations of individuals politically opposed to Google executives.
Google responded that it was unable to replicate the results in its consumer products, which are how most people experience the company's AI products and services.
The errors are likely due to flaws in Google’s AI models, which misinterpreted, or failed to filter out, inaccurate and defamatory information from unreliable sources.
The false accusations have reportedly had major impacts on Starbuck’s reputation and safety.
The incident highlights the propensity of Google's AI systems to generate false information and their capacity to cause real-world harm.
It also points to transparency and accountability gaps in Google's data curation and AI moderation processes.
Hallucination (artificial intelligence)
In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting, confabulation or delusion) is a response generated by AI that contains false or misleading information presented as fact.
Source: Wikipedia
AIAAIC Repository ID: AIAAIC2100