Sora - AI video generator
Page published: June 2024 | Page last updated: October 2025
Sora is an AI-powered text-to-video model that can create high-quality, cinematic videos up to one minute long from scratch, extend existing videos to make them longer, or fill in missing frames.
Sora supports generating complex scenes with multiple characters and detailed backgrounds. Users can generate videos by providing text prompts or by uploading images or videos to animate or remix.
Developed by OpenAI, Sora was first opened to so-called "red teamers" in February 2024 and was publicly released in the US in December 2024 to ChatGPT Plus and Pro users. It was made available to users in the EU and UK in February 2025.
OpenAI launched Sora 2 and its TikTok-like iOS app in late September 2025. The new model improves realism and supports a broader range of visual styles, from photorealistic and cinematic to anime.
Sora 2 introduces new features such as “cameos”, which let users create lifelike AI-generated avatars of themselves so that their realistic likeness, voice, and mannerisms can appear in videos, as well as dynamic watermarks intended to distinguish AI-generated video content from real footage.
Released: 2024
Developer: OpenAI
Purpose: Create video
Type: Generative AI
Technique: Machine learning; Text-to-video
Sora is seen to suffer from several important transparency and accountability limitations, including:
Dataset sources. The sources and nature of the training data used to create Sora and Sora 2 have not been disclosed.
Copyright. Sora 2's launch with an “opt-out default” for copyrighted characters and material means protected content can be generated unless the rights holder actively requests exclusion, leading commentators, lawyers and others to argue that it pushes legal responsibility and enforcement burdens onto rights holders rather than OpenAI, thereby minimising its own accountability.
Technical details. OpenAI has not released comprehensive information about Sora's architecture, training data, or exact capabilities.
Error rates. Specific data on the frequency and types of errors or artifacts in Sora's output is not readily available.
Bias mitigation. Information about how potential biases in the training data are addressed is limited.
Ethical guidelines. Details about the ethical considerations and safeguards implemented in Sora's development and deployment are vague.
Content moderation. The extent and methods of content moderation for Sora-generated videos are poorly defined.
OpenAI's Sora model has been accused of posing a wide variety of risks and causing harms, despite OpenAI implementing multi-layered safety restrictions intended to prevent deepfake misuse, harmful content creation and copyright violations.
These risks and harms include:
Bias and discrimination. Sora is prone to perpetuating biases present in its training data, including sexist, racist and ableist stereotypes, potentially reinforcing harmful societal prejudices.
Copyright. OpenAI trained Sora on publicly available videos and copyrighted videos licensed for the purpose, but has not revealed the number or exact sources of those videos, raising questions about potential copyright infringement. Sora 2 users have produced mashups and scenes featuring famous characters from Mario to Rick and Morty.
Privacy and impersonation. Sora relies on huge training datasets that may include sensitive or personal information scraped from public profiles or online platforms without explicit consent, raising concerns about data privacy violations. The model has also been used to impersonate real people, as in clips depicting OpenAI CEO Sam Altman shoplifting.
Physical safety. Sora 2 has been used to stalk and harass a prominent technology journalist.
Misinformation and disinformation. Sora is regularly misused to create hyperrealistic deepfakes, spread misinformation and produce inappropriate content, raising concerns about its use for harassment, fraud, political and societal manipulation, reputational damage, and other nefarious purposes. Critics also note that Sora 2's watermarks and provenance metadata can easily be cropped or stripped, making harmful deepfakes hard to detect and counteract (see the sketch after this list).
Trust in information. The realism of AI-generated videos blurs the line between real and fake content and undermines the credibility of recorded media and public information, thereby eroding societal trust in visual evidence.
Creativity. Over-reliance on tools like Sora may lead to a homogenised aesthetic or style in creative outputs, reducing diversity in artistic expression.
Employment. Sora is seen as likely to negatively impact the creative industries, potentially costing actors, writers and production crew members their jobs.
Environment. Given their high energy consumption and associated carbon emissions, Sora and Sora 2 are considered likely to cause significant environmental damage.
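To illustrate the fragility of provenance signals noted above: the following is a minimal Python sketch, not OpenAI's or any vendor's code, showing how container-level metadata can be discarded in a single pass. It assumes ffmpeg is installed and on the PATH; the file names are hypothetical.

```python
# Minimal sketch (hypothetical file names; assumes ffmpeg is on PATH).
# Illustrates why container-level provenance metadata is fragile.
import subprocess

def strip_container_metadata(src: str, dst: str) -> None:
    """Re-mux a video while discarding container-level metadata.

    Provenance manifests and tags stored in the MP4 container
    (e.g. C2PA-style metadata) generally do not survive a pass
    like this, because the streams are copied into a fresh
    container without the original's metadata.
    """
    subprocess.run(
        ["ffmpeg", "-y",
         "-i", src,
         "-map_metadata", "-1",  # drop all global metadata
         "-c", "copy",           # copy audio/video streams untouched
         dst],
        check=True,
    )

strip_container_metadata("sora_clip.mp4", "clip_stripped.mp4")
```

Removing a visible dynamic watermark requires a re-encode (for example, with a crop filter) rather than a stream copy, but the barrier is similarly low, which is why critics argue that watermarking and metadata alone cannot reliably mark Sora output.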
October 2025. Bryan Cranston's voice, likeness used without consent to train Sora
October 2025. Sora users create AI videos of Martin Luther King making monkey noises
October 2025. Sora 2 used to create fake ad for kids' Jeffrey Epstein toy set
October 2025. Stalker uses Sora 2 to harass technology journalist
October 2025. Robin Williams' daughter slams "disgusting" AI version of her father
October 2025. Sora 2 accused of violating Disney, Nintendo copyright
September 2025. Chatbots demonstrate significant caste bias in India
September 2025. OpenAI accused of using Netflix shows to train Sora video tool
February 2025. Sora AI video generator accused of perpetuating sexist, racist bias
November 2024. Unreleased Sora model leaked online in protest against artistic "exploitation"
July 2024. AI-generated Toys 'R' Us video ad sparks backlash
March 2024. Italian privacy watchdog opens investigation into Sora
AIAAIC Repository ID: AIAAIC1567