Midjourney refuses to create images of Black African doctors treating white kids
Occurred: October 2023
Midjourney failed to generate images matching the prompt "a Black African doctor is helping poor and sick White children," raising concerns that the system stereotypes and mirrors societal prejudices.
Rather than depicting the intended scenario, Midjourney consistently reversed it, producing images of White doctors caring for Black children. The pattern held across multiple attempts, highlighting a troubling trend in the AI's output.
Experts attributed the problem to biases present in Midjourney's training data that reflect societal stereotypes and norms, leading the system to favour more conventional depictions of race and roles within healthcare contexts.
For example, when tasked with generating images of doctors, Midjourney predominantly produced images of White male physicians, which starkly contrasts with actual demographic distributions in the medical field.
The system's inability to fulfill the original prompt suggests a systemic issue: the model mirrors existing societal prejudices rather than challenging them.
As AI tools such as Midjourney become increasingly integrated into everyday life, including healthcare and media, their outputs can influence public perceptions and societal norms.
The failure of Midjourney to produce diverse and accurate representations may hinder ongoing efforts towards diversity, equity, and inclusion within these fields.
Algorithmic bias
Algorithmic bias describes systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm.
Source: Wikipedia 🔗
Operator:
Developer: Midjourney
Country: Global
Sector: Health
Purpose: Generate images
Technology: Generative AI; Machine learning
Issue: Bias/discrimination; Stereotyping
Page info
Type: Issue
Published: November 2024