DeepSeek-R1
DeepSeek-R1 ("R1") is an open-source AI-powered reasoning model developed by Chinese AI start-up DeepSeek.
It reputedly performs as well as, and in some instances better than, other more established large language models, at significantly lower cost.
R1's release in January 2025 generated a blizzard of publicity. It led technology professionals and others to question the amounts of money that US and other technology companies, their suppliers and investors are spending to build AI infrastructure, and drew attention to the environmental damage and related consequences of that build-out.
DeepSeek's low-cost, high-performance model was also seen as potentially reshaping the AI market by opening opportunities for smaller entities to innovate. Some prominent AI professionals argue that this focus on innovation and the pressure to accelerate development will likely make AI less safe.
The model is available under an MIT license, allowing free commercial and academic use.
Generative artificial intelligence
Generative artificial intelligence (generative AI, GenAI, or GAI) is artificial intelligence capable of generating text, images, videos, or other data using generative models, often in response to prompts.
Source: Wikipedia
Website: DeepSeek-R1
Released: 2025
Developer: High-Flyer
Sector: Multiple
Purpose: Generate text
Type: Large language model; Generative AI
Technique: Machine learning
DeepSeek subreddit (unofficial)
DeepSeek R1 suffers from multiple important transparency and accountability limitations, including:
Content sources. DeepSeek fails to acknowledge the provenance of the content used to train and operate R1.
User privacy. DeepSeek's privacy policy says user data "may" be shared with its business partners, but fails to say who these are and what they might do with users' data.
Censorship. R1 has been trained to avoid politically sensitive topics such as Tiananmen Square and Taiwan, but DeepSeek does not disclose this publicly.
Environmental impacts. No mention is made of the environmental impacts of its training and deployment.
Risk management. DeepSeek does not disclose how it manages the known risks its R1 model poses.
Complaints and appeals. DeepSeek's terms of use state complaints can be lodged, but there is no mention of this at the point of use on its web app. There does not appear to be a formal complaints procedure.
DeepSeek is seen to pose serious risks to individuals, businesses and society, in the form of:
Loss of copyright, resulting in financial loss and increased competition
Breaches of confidentiality and privacy
Bias and discrimination based on race, gender, health, religion and other characteristics
Generating and amplifying misinformation and disinformation, including Chinese government propaganda
Generating and amplifying instructions for creating dangerous items such as bombs and toxins
February 2025. DeepSeek accused of denying claims of Uyghur genocide
February 2025. DeepSeek tricked into setting out how to steal the Mona Lisa
January 2025. Study: DeepSeek fails to block 100 per cent of jailbreaking attempts
January 2025. Study: DeepSeek repeats 30 per cent of false news statements
January 2025. Study: DeepSeek explains biochemical interactions of mustard gas with DNA
January 2025. DeepSeek accused of using OpenAI models to train AI system
January 2025. DeepSeek AI database exposes user data, chat histories