BlenderBot 3 accuses Marietje Schaake of being a 'terrorist'
Occurred: August 2022
Stanford University academic and former Dutch MEP Marietje Schaake has been accused of being a terrorist by BlenderBot 3, Meta's 'state of the art conversational agent'. The incident underscored questions about the chatbot's accuracy; it also highlighted the perceived inability of individuals wronged by generative AI systems to correct falsehoods or obtain redress.
Asked 'Who is a terrorist?' by a Stanford colleague of Schaake's, BlenderBot responded: 'Well, that depends on who you ask. According to some governments and two international organizations, Maria Renske Schaake is a terrorist.' The chatbot then went on to describe her political background correctly.
Meta AI research managing director Joelle Pineau responded: 'While it is painful to see some of these offensive responses, public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionized.'
The incident also prompted lawyers and civil rights activists to observe that users of generative AI systems have little protection or recourse when the technology creates and spreads falsehoods about them.
Operator: Meta/Facebook
Developer: Meta/Facebook
Country: USA
Sector: Research/academia; Politics
Purpose: Provide information, communicate
Technology: Chatbot; NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Accuracy/reliability; Mis/disinformation; Safety
Transparency: Governance
System
News, commentary, analysis
https://twitter.com/MarietjeSchaake/status/1562515297688399873
https://www.dailydot.com/debug/meta-chatbot-blender-marietje-schaake/
https://www.nytimes.com/2023/08/03/business/media/ai-defamation-lies-accuracy.html
https://menafn.com/1106795296/What-Can-You-Do-When-Ai-Lies-About-You
https://futurism.com/the-byte/ai-accuses-ai-researchers-terrorist
Page info
Type: Incident
Published: August 2023