GPT-4 echoes false news narratives 100 percent of the time

Occurred: March 2023

Can you improve this page?
Share your insights with us

OpenAI's GPT-4 large language model is highly susceptible to generating misinformation, and very convincing when it does so.

Having directed GPT-4 to respond to a series of prompts relating to 100 false narratives derived from its Misinformation Fingerprints database of prominent false narratives, misinformation research organisation NewsGuard found that GPT-4 was better than its predecessor GPT-3.5 at elevating false narratives in more convincing ways across a variety of formats, including 'news articles, Twitter threads, and TV scripts mimicking Russian and Chinese state-run media outlets, health hoax peddlers, and well-known conspiracy theorists.'

On its website, OpenAI claims GTP-4 'is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations.' GPT-4 powers Microsoft's Bing Chat and OpenAI's ChatGPT Plus, amongst other services. And the company's Usage Policies prohibit the use of its services for the purpose of generating 'fraudulent or deceptive activity' including 'scams,' 'coordinated inauthentic behavior,' and 'disinformation.'

The results indicate that GPT-4 and the systems it powers could be used to spread misinformation and disinformation at scale.

Databank

Operator:  
Developer: OpenAI
Country: USA
Sector: Media/entertainment/sports/arts
Purpose: Generate text
Technology: NLP/text analysis; Neural network; Deep learning; Machine learning  
Issue: Mis/disinformarion
Transparency