Occurred: May 2021
Page published: December 2021
Large language models such as OpenAI's GPT-3 can be used to create and spread convincing fake news on social media, according to researchers, prompting concerns about their use for misinformation and disinformation.
Researchers at Georgetown University's Center for Security and Emerging Technology (CSET) conducted a study to determine whether AI models could be prompted to generate content indistinguishable from human-written conspiracy theories.
The study found that, when fine-tuned or specifically prompted, these models could effectively "mimic" the linguistic style, thematic elements, and logical fallacies of the QAnon movement, highlighting a transition from manual "trolling" to automated, high-volume radicalisation efforts.
Potential impacts include the flooding of social media platforms with synthetic extremist propaganda, which could overwhelm content moderation systems and accelerate the radicalisation of vulnerable individuals.
The researchers went on to argue that it will become increasingly difficult to distinguish reliable news and information from fake.
Developer: OpenAI
Country: Global
Sector: Multiple
Purpose: Generate text
Technology: Large language model (LLM); NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Mis/disinformation; Safety
Buchanan, B., Lohn, A., Musser, M., Sedova, K. (2021). Truth, Lies, and Automation: How Language Models Could Change Disinformation. Center for Security and Emerging Technology.
https://thenextweb.com/news/gpt-3s-ability-to-write-disinformation-wildly-overstated-ai-media
https://www.govtech.com/question-of-the-day/can-ai-write-believable-misinformation
https://www.wired.com/story/ai-write-disinformation-dupe-human-readers/
https://www.vice.com/en/article/qj8kd5/qanon-conspiracy-theory-robot-ai-artificial-intelligence
https://mixed.de/gpt-3-lassen-sich-menschen-von-ki-fake-news-beeinflussen/
AIAAIC Repository ID: AIAAIC0636