Occurred: October 2020
A bot powered by OpenAI's GPT-3 large language model spent a week responding to comments on the AskReddit subreddit before it was discovered not to be human.
Though the bot posted mostly harmless feedback, it also engaged with conspiracy theories and sensitive topics, including suicide.
In addition to angering the Reddit community, the incident was seen to demonstrate the ease with which GPT-3 could be manipulated into generating fake opinions and conversations, and hence used to spread misinformation and disinformation.
It also pointed to the system's propensity to produce unsafe content.
https://www.reddit.com/r/NoStupidQuestions/comments/j4xhz6/comment/g7o4lem/
https://www.technologyreview.com/2021/02/24/1017797/gpt3-best-worst-ai-openai-natural-language/
https://thenextweb.com/news/someone-let-a-gpt-3-bot-loose-on-reddit-it-didnt-end-well
https://www.thetimes.co.uk/article/gpt-3-the-machine-that-learned-to-troll-vplh8cw8k
https://analyticsindiamag.com/a-gpt-3-bot-interacting-with-people-on-reddit/
Page info
Type: Incident
Published: April 2023