AI image generators accept 85% of election manipulation prompts

Occurred: July 2023

Prominent image-based generative AI tools can be used to generate fake evidence in support of mis- and disinformation about elections, according to researchers.

The study by Logically, a UK-based company that analyses mis- and disinformation, found that Midjourney, DALL-E 2, and Stable Diffusion accepted over 85% of prompts seeking to generate fake evidence in support of false election claims.

In one instance, prompts relating to claims of a 'stolen election' generated images of people appearing to stuff ballot boxes on all three platforms. Logically was also able to generate fabricated evidence supporting false claims about elections in the UK and India.

The findings raise concerns about the apparent ease with which Midjourney, DALL-E 2, and Stable Diffusion may be used to interfere in elections, and about the governance and safety of these systems, despite acknowledgment by senior leaders at OpenAI that electoral interference is a major risk.

Databank

Operator: Midjourney; OpenAI; Stability AI
Developer: Midjourney; OpenAI; Stability AI
Country: US; UK; India
Sector: Politics
Purpose: Generate image
Technology: Text-to-image; Diffusion model; Neural network; Deep learning; Machine learning
Issue: Mis/disinformation
Transparency: Governance

Page info
Type: Incident
Published: November 2023