Stable Diffusion
Stable Diffusion is a free, open-source, AI-based system that generates images from text descriptions, or prompts.
Trained on pairs of images and captions taken from the LAION-5B dataset, Stable Diffusion was initially restricted to researchers before being publicly released in August 2022; Stable Diffusion 2 followed in November 2022.
Stable Diffusion quickly became popular, largely because it is freely available, carries minimal usage restrictions, and can run on ordinary consumer hardware.
Text-to-image model
A text-to-image model is a machine learning model that takes a natural-language description as input and produces an image matching that description; a minimal code sketch appears below.
Source: Wikipedia 🔗
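To illustrate the text-to-image interface in practice, here is a minimal sketch using the Hugging Face diffusers library. The checkpoint ID, prompt, and generation parameters are illustrative assumptions, not the system's only configuration.

```python
# Minimal text-to-image sketch with Stable Diffusion via Hugging Face's
# diffusers library. Checkpoint ID and parameters are illustrative
# assumptions; other checkpoints and settings work equally well.
import torch
from diffusers import StableDiffusionPipeline

# Load pretrained Stable Diffusion weights (downloaded on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed checkpoint ID
    torch_dtype=torch.float16,          # half precision fits consumer GPUs
)
pipe = pipe.to("cuda")                  # use "cpu" if no GPU (much slower)

# A natural-language prompt is mapped to an image matching the description.
prompt = "a watercolor painting of a lighthouse at dawn"
result = pipe(prompt, num_inference_steps=30, guidance_scale=7.5)
result.images[0].save("lighthouse.png")
```

The guidance_scale parameter trades prompt fidelity against output diversity, and the whole pipeline runs on a single consumer GPU, which is part of what made the system so widely accessible.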
System 🤖
Website: Stable Diffusion 🔗
Developer: Stability AI; CompVis (LMU Munich); EleutherAI; RunwayML; LAION
Purpose: Generate images
Type: Generative AI; Text-to-image
Technique: NLP/text analysis; Computer vision; Neural network; Deep learning
Reviews 🗣️
Transparency and accountability 🙈
Risks and harms 🛑
Stable Diffusion has been accused of perpetuating stereotypes, violating copyright, spreading misinformation and disinformation, and generating violent and offensive imagery, highly realistic child sexual abuse material (CSAM), and non-consensual explicit content.
Bias/discrimination
Stable Diffusion was primarily trained on images with English-language descriptions, so its output tends to reinforce Western stereotypes and biases. A Bloomberg analysis found that the tool amplifies gender and racial stereotypes when rendering people in 'high-paying' and 'low-paying' jobs.
Copyright
Stable Diffusion draws on artists' copyrighted works without the owners' consent or any form of attribution, prompting concerns about plagiarism and the misuse, abuse, and loss of intellectual property. Furthermore, users are given full rights to the images they create with Stable Diffusion and are free to use them commercially, raising questions about fair use. Responding to pressure, Stability AI gave artists the ability to opt out of having their artworks included in training data and stopped users from being able to emulate the styles of specific artists, including Greg Rutkowski, leading users to complain that the model had been 'nerfed'.
In January 2023, a group of lawyers filed a US class-action lawsuit against Stability AI, DeviantArt, and Midjourney on the basis that their use of Stable Diffusion illegally remixes the copyrighted works of millions of artists whose work was used as training data. Stability AI was also accused, in lawsuits filed by Getty Images in the UK and US, of infringing the copyright protections of over 12 million photographs from its stock image collection when training Stable Diffusion.
Furthermore, Google researchers have shown that Stable Diffusion is able to memorise and recreate specific copyrighted images.
Dual/multi-use
The model's accessibility, including the public release of its weights, together with its paucity of filters, means Stable Diffusion can be used for more or less any purpose, including nefarious ones such as spreading spam, propaganda, deepfakes, misinformation, and disinformation.
Employment
Artists' loss of control over their copyrighted works, and users' ability to create images in their unique styles, have prompted increasing concern that artists, photographers, illustrators, cinematographers, and others stand to lose their commercial viability and careers.
A French game developer's release of a Stable Diffusion-based tool that generates images in the style of the late Korean artist Kim Jung Gi from a simple text prompt sparked a furious response from artists, who saw the tool as theft and accused the developer of cultural appropriation.
Safety
TechCrunch noted that Stability AI bans lewd or sexual material, hateful or violent imagery, prompts containing copyrighted or trademarked material, and personal information such as phone numbers and Social Security numbers. But this has not stopped users from generating abusive, violent, sexual, and pornographic images.
Research, advocacy 🧮
Carlini N., Hayes J., Nasr M., Jagielski M., Sehwag V., Tramèr F., Balle B., Ippolito D., Wallace E. (2023). Extracting Training Data from Diffusion Models
Luccioni A.S., Akiki C., Mitchell M., Jernite Y. (2023). Stable Bias: Analyzing Societal Representations in Diffusion Models
Somepalli G., Singla V., Goldblum M., Geiping J., Goldstein T. (2022). Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models
News, commentary, analysis 🗞️
https://www.artificialconversation.com/p/stable-diffusion-draws-controversy
https://thealgorithmicbridge.substack.com/p/stable-diffusion-2-is-not-what-users
https://www.nytimes.com/2022/10/21/technology/generative-ai.html
https://www.businessinsider.com/ai-image-generators-artists-copying-style-thousands-images-2022-10
Page info
Type: System
Published: December 2022
Last updated: May 2024