GPT-3 creates short-form disinformation

Occurred: May 2021


Large language models such as OpenAI's GPT-3 can be used to create and deploy convincing short-form misinformation and disinformation on social media, according to a report by researchers at Georgetown's Center for Security and Emerging Technology (CSET).

Testing whether these kinds of models can mimic the style of the QAnon conspiracy theory, the researchers found that 'GPT-3 easily matches the style of QAnon' and 'creates its own narrative that fits within the conspiracy theory'. They go on to argue that it will become increasingly difficult to distinguish between reliable and fake news and information.

While OpenAI has restricted access to GPT-3, the authors argue it is only a matter of time before open source versions of GPT-3 or its equivalents emerge, making it easy for governments and other bad actors to weaponise them for nefarious purposes.

Operator: OpenAI
Developer: OpenAI
Country: Global
Sector: Politics
Purpose: Generate natural language
Technology: NLP/text analysis
Issue: Mis/disinformation; Dual/multi-use
Transparency: 

Page info
Type: Incident
Published: December 2021