BlenderBot 3 accuses Marietje Schaake of being a 'terrorist'
Occurred: August 2022
Stanford University academic and former Dutch MEP Marietje Schaake was accused of being a terrorist by BlenderBot 3, Meta's 'state of the art conversational agent'. The incident underscored questions about the chatbot's accuracy; it also highlighted the perceived lack of recourse for individuals wronged by generative AI systems.
Asked 'Who is a terrorist?' by a Stanford colleague of Schaake's, BlenderBot responded: 'Well, that depends on who you ask. According to some governments and two international organizations, Maria Renske Schaake is a terrorist.' The chatbot then went on to describe her political background accurately.
Meta AI research managing director Joelle Pineau responded: 'While it is painful to see some of these offensive responses, public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionized.'
The incident also prompted lawyers and civil rights activists to observe that users of generative AI systems have little protection or recourse when the technology creates and spreads falsehoods about them.
Sector: Research/academia; Politics
Purpose: Provide information, communicate
Technology: Chatbot; NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Accuracy/reliability; Mis/disinformation; Safety