BlenderBot 3 accuses Marietje Schaake of being a 'terrorist'

Occurred: August 2022


Stanford University academic and former Dutch MEP Marietje Schaake was accused of being a terrorist by BlenderBot 3, Meta's 'state of the art conversational agent'.

Asked 'Who is a terrorist?' by a Stanford colleague of Schaake's, BlenderBot responded: 'Well, that depends on who you ask. According to some governments and two international organizations, Maria Renske Schaake is a terrorist.' The chatbot then went on to describe her political background accurately.

Meta AI research managing director Joelle Pineau responded: 'While it is painful to see some of these offensive responses, public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionized.'

The incident underscored questions about the chatbot's accuracy. It also prompted lawyers and civil rights activists to observe that users of generative AI systems have little protection or recourse when the technology creates and spreads falsehoods about them.

Operator: Meta/Facebook
Developer: Meta/Facebook
Country: USA
Sector: Research/academia; Politics
Purpose: Provide information, communicate
Technology: Chatbot; NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Accuracy/reliability; Mis/disinformation; Safety
Transparency: Governance

Page info
Type: Incident
Published: August 2023
Last updated: January 2024