Large language models can mimic QAnon, researchers find

Occurred: May 2021

Large language models (LLMs) such as OpenAI's GPT-3 can be used to create and deploy convincing fake news on social media, prompting concerns about their use for misinformation and disinformation.

Testing whether these kinds of models can mimic the style of the QAnon conspiracy theory, researchers at Georgetown University's Center for Security and Emerging Technology (CSET) found that 'GPT-3 easily matches the style of QAnon' and 'creates its own narrative that fits within the conspiracy theory'. 

The researchers went on to argue that it will become increasingly difficult to distinguish between reliable and fake news and information.

While OpenAI has restricted access to GPT-3, the authors argue it is only a matter of time before open-source versions of GPT-3 or equivalent models emerge, making it easy for governments and other bad actors to weaponise them for nefarious purposes.

Operator: OpenAI
Developer: OpenAI
Country: Global
Sector: Multiple
Purpose: Generate text
Technology: Large language model (LLM); NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Mis/disinformation; Safety; Dual/multi-use
Transparency: Governance; Black box


Page info
Type: Incident
Published: December 2021