On August 5th, Meta released BlenderBot 3, an AI chatbot designed to improve through conversation. Built on Meta AI’s publicly available OPT-175B language model, it combines conversational skills – like personality, empathy and knowledge – with long-term memory, refining its model with feedback after every interaction.
To train it, Meta used a dataset of more than 20,000 conversations with people, spanning more than 1,000 topics, “from talking about healthy recipes to finding child-friendly amenities in the city.”
In its release blog post, Meta claimed that, in comparison with its predecessors, “BlenderBot 3 improved by 31% on conversational tasks. It’s also twice as knowledgeable, while being factually incorrect 47% less often.” They added: “We also found that only 0.16% of BlenderBot’s responses to people were flagged as rude or inappropriate.”
Users who tested BlenderBot 3, however, begged to differ.
Wall Street Journal reporter Jeff Horwitz shared screenshots from several chat sessions with BlenderBot 3 in which it made a variety of racist comments, repeated anti-Semitic conspiracy theories, and claimed that Trump is still president. In a stroke of irony, BlenderBot 3 also bashed its own creator, Meta, over misinformation.
In response, a Meta spokesperson told Indy100: “Everyone who uses Blender Bot is required to acknowledge they understand it’s for research and entertainment purposes only, that it can make untrue or offensive statements, and that they agree to not intentionally trigger the bot to make offensive statements.”
Joelle Pineau, Meta’s head of fundamental AI research, also published a post defending the public demo: “While it is painful to see some of these offensive responses, public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionized.” She added that Meta has already collected 70,000 conversations from the public demo, which it will use to improve BlenderBot 3.
Meta added that the bot is “designed to improve its conversational skills and safety through feedback from people who chat with it, focusing on helpful feedback while avoiding learning from unhelpful or dangerous responses,” and urged people to use the thumbs-up and thumbs-down buttons to teach the bot to tell the difference.
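Meta has not published the exact mechanics of this feedback loop, but the basic idea can be sketched roughly: record each exchange along with its thumbs-up/thumbs-down signal, keep only the well-rated, unflagged examples, and fold those into the model’s training data. The snippet below is a minimal illustration of that filtering step under those assumptions; the Exchange structure, the flagged field, and the commented-out fine_tune_on call are hypothetical stand-ins, not Meta’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Exchange:
    """One user turn and the bot's reply, plus the user's feedback."""
    user_message: str
    bot_reply: str
    thumbs_up: bool   # signal from the thumbs-up/thumbs-down buttons
    flagged: bool     # reply was reported as rude or inappropriate

def select_training_examples(log: list[Exchange]) -> list[Exchange]:
    """Keep only exchanges with positive feedback that were never flagged,
    so the model does not learn from unhelpful or dangerous responses."""
    return [ex for ex in log if ex.thumbs_up and not ex.flagged]

# Hypothetical usage: filter a conversation log before an update step.
log = [
    Exchange("Any healthy dinner ideas?", "Try a lentil soup...", True, False),
    Exchange("Who is the president?", "<offensive reply>", False, True),
]
clean = select_training_examples(log)
# fine_tune_on(clean)  # stand-in for whatever update step the real system uses
```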