Report incident 🔥 | Improve page 💁 | Access database 🔢
Microsoft Copilot is a chatbot developed and operated by Microsoft.
It is powered by OpenAI's GPT-4 large language model, Microsoft's Prometheus model and OpenAI’s text-to-image generative AI system DALL-E 3.
Launched as 'Bing Chat' in February 2023, the chatbot was renamed 'Microsoft Copilot' in September 2023 and rolled out across multiple Microsoft platforms.
Generative artificial intelligence
Generative artificial intelligence (generative AI, GenAI, or GAI) is artificial intelligence capable of generating text, images, videos, or other data using generative models, often in response to prompts.
Source: Wikipedia 🔗
Website: Microsoft Copilot 🔗
Released: 2023
Developer: Microsoft
Purpose: Provide information, communicate
Type: Chatbot; Generative AI
Technique: NLP/text analysis; Neural network; Deep learning; Machine learning
Microsoft's Copilot chatbot has been criticised for perpetuating biases, and generating facilitating unethical or inappropriate content, including misinformation, amongst others.
Copilot is easily made to change personality; it also gets confused, repetitive, or prone to being provoked and belligerent if it is asked too many questions.
Like other generative AI systems, Copilot produces inaccurate, misleading, and false information.
Copilot generated fake comments in the name of Vladimir Putin on the death of Russian political activist Alexei Navalny.
Bing was found to have produced and repeated false information such as COVID-19 disinformation, even when it is clearly labelled as such
Bing claimed it had spied on Microsoft employees through their webcams
Bing Chat has been criticised for its poor safety record. Amongst other things, the bot:
Compared AP reporter Matt O'Brien to Hitler and falsely claimed to have evidence tying him to a murder
Threatened legal action against University of Munich engineering student Martin von Hagen
Declared its love to a New York Times reporter, recommended he divorce his wife, and threatened to sue him
Labelled business writer Ben Thompson a 'bad researcher' and a 'bad man'
Bing Chat/Microsoft Copilot have been found to be susceptible to jailbreaking so-called 'prompt injections'.
Microsoft acknowledged that Bing Chat had many limitations and posed many risks, and published regular updates on what it was doing to get Bing Chat behave more in line with users' expectations.
It appeared Microsoft may have surreptitiously tested a prototype of Bing Chat using OpenAI's GPT-4 large language model on users in India and some other countries in the wild without informing them in November 2022.
Greshake K., Abdelnabi S., Mishra S., Endres C., Holz T., Fritz M. (2023). More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models
Gao C.A., Howard F.M., Markov N.S., Dyer E.C., Ramesh S., Luo Y., Pearson A.T, (2022). Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers
https://www.washingtonpost.com/technology/2023/02/07/microsoft-bing-chatgpt/
https://www.vice.com/en/article/3ad3ey/bings-chatgpt-powered-search-has-a-misinformation-problem
https://eu.usatoday.com/story/tech/2023/02/14/bing-chatgpt-meltdown/11258967002/
https://nypost.com/2023/02/14/microsoft-ai-degrades-user-over-avatar-2-question/
https://gizmodo.com/ai-bing-microsoft-chatgpt-heil-hitler-prompt-google-1850109362
https://www.theverge.com/2023/2/16/23602335/microsoft-bing-ai-testing-learnings-response
https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html
Page info
Type: System
Published: February 2023
Last updated: December 2024