BlenderBot conversational chatbot

Released: April 2020

BlenderBot is a prototype conversational agent developed by Facebook that is intended to communicate naturally with people online. 

According to Facebook parent company Meta, BlenderBot uses AI, can chat on 'nearly any topic', and is the first bot of its kind to combine conversational skills such as empathy, knowledge, and personality in a single system. Its algorithm searches the internet to inform its answers.

BlenderBot was initially launched in April 2020, with versions 2 and 3 following in July 2021 and August 2022 respectively. BlenderBot 3 is based on Meta AI’s publicly available OPT-175B language model.

Reaction

BlenderBot 3 was met with mixed reviews, with commentators pointing out that it is inaccurate and out-of-touch, creates misinformation and disinformation, and makes offensive comments.

Asked by the BBC what it thought of Meta CEO Mark Zuckerberg, BlenderBot responded 'our country is divided and he didn't help that at all', and that 'his company exploits people for money and he doesn't care. It needs to stop!' Asked a similar question by Insider, it retorted 'I don't really like him at all. He's too creepy and manipulative.'

The bot told a Wall Street Journal journalist that Donald Trump was, and always would be, the US president, repeated discredited anti-vaxxer talking points, and claimed the antisemitic conspiracy theory that Jewish people control the economy is 'not implausible.'

BlenderBot 3 also accused Stanford University academic and former Dutch MEP Marietje Schaake of being a terrorist, prompting discussion about the perceived inability of individuals to contest false statements made by generative AI systems.

Joelle Pineau, Meta's managing director of Fundamental AI Research, defended BlenderBot, saying it is 'painful' to see the bot make offensive responses, but that public demos are important for 'building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionized.'

Transparency

Meta flags that BlenderBot can generate biased and harmful responses, and asks users to acknowledge that it is 'likely to make untrue or offensive statements' and to agree 'not to intentionally trigger the bot to make offensive statements.'

In addition to putting BlenderBot 3 on the web, Meta has provided access to the underlying code, training dataset, and smaller model variants. Access to the largest, 175 billion parameter model is available upon request.

Meta says it is committed to publicly releasing all data collected from the demo.

Operator: Meta/Facebook
Developer: Meta/Facebook
Country: USA
Sector: Multiple
Purpose: Provide information, communicate
Technology: Chatbot; NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Accuracy/reliability; Bias/discrimination - race, ethnicity; Mis/disinformation; Safety
Transparency: Governance

Page info
Type: System
Published: November 2022
Last updated: August 2023