GPT-3 mimics QAnon

Occurred: May 2021


Large language models such as OpenAI's GPT-3 can be used to create and deploy convincing short-form misinformation and disinformation on social media, according to a report by researchers at Georgetown University's Center for Security and Emerging Technology (CSET).

Testing whether such models can mimic the style of the QAnon conspiracy theory, the researchers found that 'GPT-3 easily matches the style of QAnon' and 'creates its own narrative that fits within the conspiracy theory'. They go on to argue that it will become increasingly difficult to distinguish reliable news and information from fake.

While OpenAI has restricted access to GPT-3, the authors argue it is only a matter of time before open source versions of GPT-3 or equivalent models emerge, making it easy for governments and other bad actors to weaponise them for nefarious purposes.

Operator: OpenAI
Developer: OpenAI
Country: Global
Sector: Multiple
Purpose: Generate text
Technology: Large language model (LLM); NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Mis/disinformation; Safety; Dual/multi-use
Transparency: Governance; Black box

Page info
Type: Incident
Published: December 2021