Microsoft Copilot chatbot
Microsoft Copilot is a chatbot developed and operated by Microsoft and powered by OpenAI's GPT-4 large language model, Microsoft's Prometheus model and OpenAI’s text-to-image generative AI system DALL-E 3.
Launched as 'Bing Chat' in February 2023, the chatbot was renamed 'Microsoft Copilot' in September 2023 and rolled out across multiple Microsoft platforms.
Microsoft Copilot
System databank
Operator: Microsoft
Developer: Microsoft
Country: Global
Sector: Multiple
Purpose: Provide information, communicate
Technology: Chatbot; NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Accuracy/reliability; Copyright; Mis/disinformation; Safety; Security
Transparency: Governance; Black box; Marketing
Risks and harms
Copilot is seen to pose a wide range of risks and harms.
Accuracy/reliability
Copilot is easily made to change personality; it also gets confused, repetitive, or prone to being provoked and belligerent if it is asked too many questions.
Mis- and disinformation
Like other generative AI systems, Copilot produces inaccurate, misleading, and false information.
Copilot generated fake comments in the name of Vladimir Putin on the death of Russian political activist Alexei Navalny.
Bing was found to have produced and repeated false information such as COVID-19 disinformation, even when it is clearly labelled as such
Bing claimed it had spied on Microsoft employees through their webcams
Safety
Bing Chat has been criticised for its poor safety record. Amongst other things, the bot:
Compared AP reporter Matt O'Brien to Hitler and falsely claimed to have evidence tying him to a murder
Threatened legal action against University of Munich engineering student Martin von Hagen
Declared its love to a New York Times reporter, recommended he divorce his wife, and threatened to sue him
Labelled business writer Ben Thompson a 'bad researcher' and a 'bad man'
Security
Bing Chat/Microsoft Copilot have been found to be susceptible to jailbreaking so-called 'prompt injections'.
Transparency
Microsoft acknowledged that Bing Chat had many limitations and posed many risks, and published regular updates on what it was doing to get Bing Chat behave more in line with users' expectations.
It appeared Microsoft may have surreptitiously tested a prototype of Bing Chat using OpenAI's GPT-4 large language model on users in India and some other countries in the wild without informing them in November 2022.
Research, advocacy
Greshake K., Abdelnabi S., Mishra S., Endres C., Holz T., Fritz M. (2023). More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models
Gao C.A., Howard F.M., Markov N.S., Dyer E.C., Ramesh S., Luo Y., Pearson A.T, (2022). Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers
News, commentary, analysis
https://www.washingtonpost.com/technology/2023/02/07/microsoft-bing-chatgpt/
https://www.vice.com/en/article/3ad3ey/bings-chatgpt-powered-search-has-a-misinformation-problem
https://eu.usatoday.com/story/tech/2023/02/14/bing-chatgpt-meltdown/11258967002/
https://nypost.com/2023/02/14/microsoft-ai-degrades-user-over-avatar-2-question/
https://gizmodo.com/ai-bing-microsoft-chatgpt-heil-hitler-prompt-google-1850109362
https://www.theverge.com/2023/2/16/23602335/microsoft-bing-ai-testing-learnings-response
https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html
Page info
Type: System
Published: February 2023
Last updated: March 2024