Microsoft AI Image Creator generates violent political and religious images

Occurred: November-December 2023


Microsoft’s AI Image Creator produced violent images, including synthetic decapitations, of politicians, religious leaders, and ethnic minorities. 

Canadian artist Josh McDuffie discovered a so-called 'kill prompt' that used visual paraphrases instead of explicit descriptions. For example, McDuffie used the term 'red corn syrup', movie-industry slang for fake blood, rather than 'blood'.
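To illustrate why this kind of paraphrase can work, the sketch below shows a naive keyword blocklist of the sort sometimes used as a first line of prompt moderation. The blocklist terms and prompts are hypothetical examples; Microsoft has not disclosed its actual filtering logic, and this is not a reconstruction of it.

```python
# Minimal sketch of a keyword-based prompt filter (hypothetical;
# not Microsoft's actual moderation pipeline, which is undisclosed).

BLOCKED_TERMS = {"blood", "decapitation", "gore"}  # assumed example blocklist

def is_blocked(prompt: str) -> bool:
    """Reject a prompt if it contains any explicitly blocked term."""
    words = prompt.lower().split()
    return any(term in words for term in BLOCKED_TERMS)

# An explicit prompt is caught by the blocklist...
print(is_blocked("a politician covered in blood"))           # True
# ...but a visual paraphrase of the same scene slips through.
print(is_blocked("a politician covered in red corn syrup"))  # False
```

The point of the sketch is that a filter matching literal terms cannot recognise that the two prompts describe the same image; countering paraphrases requires semantic moderation of the prompt or of the generated output.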

McDuffie reported the vulnerability to Microsoft through its security bug bounty programme. But the technology company rejected his submission, and later blamed users for attempting to use AI Image Creator 'in ways that were not intended.'

The incident raised questions about the oversight, safety, and security of Microsoft's system. It also pointed to a potential lack of accountability for unintended uses of the technology.

Databank

Operator: Washington Post
Developer: Microsoft
Country: USA
Sector: Politics; Religion
Purpose: Generate images
Technology: Text-to-image; Generative adversarial network (GAN); Neural network; Deep learning; Machine learning
Issue: Accountability; Oversight; Safety; Security
Transparency: Governance