DeepSeek V3 large language model thinks it is ChatGPT
Occurred: December 2024
A new large language model developed by the Chinese start-up DeepSeek has gained attention both for its impressive performance and for its peculiar habit of identifying itself as ChatGPT.
DeepSeek V3 has been reported to outperform established models like OpenAI's GPT-4 and Meta's Llama 3 on a number of benchmarks.
The model features 671 billion parameters and was trained at a notably low cost of approximately USD 5.58 million over two months.
However, it has also been observed that DeepSeek V3 sometimes claims to be a version of ChatGPT.
DeepSeek V3's tendency to identify itself as ChatGPT likely stems from its training data: commentators speculate that the model was trained on datasets containing outputs generated by ChatGPT.
DeepSeek has not explained the behaviour of its system.
DeepSeek V3 may perform strongly, but the start-up has been noticeably reluctant to discuss the sources of its training data, prompting questions about its ethics and integrity.
With AI models increasingly mimicking one another, understanding their unique characteristics and origins will become an important differentiator.
Large language model
A large language model (LLM) is a computational model capable of language generation or other natural language processing tasks.
Source: Wikipedia
DeepSeek V3
Operator:
Developer: DeepSeek
Country: China
Sector: Technology
Purpose: Provide information
Technology: Generative AI; Machine learning
Issue: Cheating/plagiarism; Transparency
https://techcrunch.com/2024/12/27/why-deepseeks-new-ai-model-thinks-its-chatgpt/
https://www.neowin.net/news/deepseek-v3-has-a-problem-it-keeps-claiming-to-be-chatgpt/
https://startupnews.fyi/2024/12/28/why-deepseeks-new-ai-model-thinks-its-chatgpt/
https://www.reddit.com/r/LocalLLaMA/comments/1hmdh5q/deepseek_v3_thinks_its_openais_gpt4/
Page info
Type: Issue
Published: January 2025