Sora users create AI videos of Martin Luther King making monkey noises
Occurred: October 2025
Page published: October 2025
Users of OpenAI's Sora AI video generator created tasteless and offensive videos of Martin Luther King Jr., including clips of him making monkey noises, sparking outrage and forcing OpenAI to suspend users' ability to create content using his likeness.
Sora users created highly disrespectful videos of Martin Luther King Jr., including clips of him making monkey noises during his iconic "I Have a Dream" speech and others depicting him wrestling with Malcolm X.
The videos caused deep distress to the families of both figures and drew a strong backlash from the public and from King's estate, which cited the harmful and racist nature of the depictions.
OpenAI later paused the ability of users to generate AI videos resembling King following a formal request from the King estate, and implemented new safeguards allowing authorised representatives or estate holders to request that their likeness not be used in Sora-generated content.
The controversy arose because Sora allows users to create videos from text prompts featuring historical figures without any default safeguards to prevent offensive or abusive outputs.
Advocacy from the estate of Martin Luther King and his daughter prompted OpenAI to act, highlighting significant limitations in how the AI company controls and monitors user-generated content, including content incorporating the likenesses of public figures.
The incident highlights tensions between protecting expressive freedoms and preventing harm.
Advocates of free speech argue that restricting AI-generated likenesses could limit artistic and political expression, satire, historical reinterpretation, and critical discourse important to democratic society.
Others argue that free speech must be balanced against harm caused by unauthorised, disrespectful, or misleading depictions that can damage reputations, perpetuate racist stereotypes, or spread misinformation.
The fracas also suggests that OpenAI prioritises launching products with few meaningful safeguards in order to build buzz, earn revenue and gain market share, over the safety of its users and of society.
It also added to broader concerns about the ease with which generative AI technologies can be misused to produce offensive, cruel, or racist content, thereby amplifying societal harm and perpetuating hateful stereotypes.
Developer: OpenAI
Country: USA
Sector: Politics
Purpose: Ridicule public figure
Technology: Generative AI; Machine learning
Issue: Accountability; Human/civil rights; Mis/disinformation; Transparency
AIAAIC Repository ID: AIAAIC2067