US professor falsely quoted by AI-generated news article

Occurred: February 2024

Indian news website Biharprabha published an AI-generated article that contained a fabricated quote attributed to US linguistics professor Emily Bender.

In an article titled ‘Meta’s AI Bot Goes Rogue, Spews Offensive Content’, University of Washington professor of linguistics Emily M. Bender was incorrectly quoted as saying “The release of BlenderBot 3 demonstrates that Meta continues to struggle with addressing biases and misinformation within its AI models.”

However, Bender had said no such thing. She publicly called out the news site on social media, saying she had emailed its editor to have the false quote removed.

The episode highlighted ongoing concerns about generative AI systems 'hallucinating' plausible-sounding falsehoods.

It also underscored ethical issues surrounding the use of AI in journalism and the need for better oversight and fact-checking mechanisms in automated news production. 

Hallucination (artificial intelligence)

In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting, confabulation or delusion) is a response generated by AI that contains false or misleading information presented as fact.

Source: Wikipedia

System 🤖

Operator: Biharprabha
Developer:  
Country: India; USA
Sector: Media/entertainment/sports/arts; Research/academia
Purpose: Generate text
Technology: Chatbot
Issue: Accuracy/reliability; Ethics/values; Mis/disinformation; Transparency