Stable Diffusion 3 churns out anatomically absurd images
Occurred: June 2024
Stable Diffusion 3 (SD3) has been criticised for its poor performance in rendering human figures, particularly extremities such as hands and feet, prompting ridicule and raising concerns about user safety.
Compared to other AI image models such as Midjourney and DALL-E 3, SD3 has been found to produce markedly more anatomical distortions, leading to what some describe as "AI-generated body horror".
Users on Reddit ridiculed SD3's attempts at depicting humans, with threads highlighting its failure to render whole bodies correctly - a problem especially noticeable when generating people in certain poses or situations, such as "girls lying on the grass".
Some in the AI community attributed these anatomical failures to Stability AI's decision to filter out adult content (NSFW) from the SD3 training data. There are concerns that the NSFW filter used during pre-training may have been too strict, inadvertently removing non-offensive images and depriving the model of important human depictions.
The controversy highlights the ongoing challenges in developing AI image generation models that can accurately depict human anatomy while adhering to content guidelines.
System 🤖
Operator:
Developer: Stability AI
Country: UK
Sector: Media/entertainment/sports/arts
Purpose: Generate images
Technology: Text-to-image; Machine learning
Issue: Accuracy/reliability; Robustness; Safety
News, commentary, analysis 🗞️
https://arstechnica.com/information-technology/2024/06/ridiculed-stable-diffusion-3-release-excels-at-ai-generated-body-horror/?ref=404media.co
https://www.404media.co/stable-diffusion-3s-disastrous-launch-could-change-the-ai-landscape-forever/
https://www.reddit.com/r/StableDiffusion/comments/1de7lbg/comment/l89yy55/
Page info
Type: Issue
Published: July 2024