ChatGPT chatbot
Released: November 2022
ChatGPT ('Chat Generative Pre-trained Transformer') is an AI chatbot capable of understanding natural human language and generating detailed, human-like written text. It provides information and answers questions through a conversational interface.
Built on a version of OpenAI's GPT-3.5 large language model, ChatGPT was released as a prototype for general public use in November 2022.
Reaction
ChatGPT's ability to produce cogent, plausible responses on a huge range of topics, together with its ease of use, has been met with widespread acclaim. It has also proved popular: according to OpenAI CEO Sam Altman, the bot attracted over one million registered users within five days of launch, and by early 2023 it was estimated to have reached 100 million users, making it the fastest-growing consumer application in history at that point.
On the other hand, ChatGPT is seen to suffer from multiple technical limitations, notably its tendency to produce inaccurate, untrue, and offensive responses. It is also seen to pose risks to jobs, education, academic and research organisations, and social cohesion, as well as to the reputation of organisations using it and their compliance with government regulations.
Risks
Accuracy/reliability. ChatGPT is prone to factual inaccuracies. Coding Q&A site Stack Overflow banned ChatGPT-generated answers on its platform on the basis that 'the primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce', and that 'the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers.' NYU professor Gary Marcus labelled ChatGPT's production of errors 'discomprehension', while singer Nick Cave described ChatGPT's attempt at generating lyrics in his style as 'a grotesque mockery of what it is to be human.' The Guardian points out that the system's accuracy is not helped by its knowledge base ending in 2021, rendering some queries and searches useless.
Bias/discrimination. Despite its built-in safeguards, researchers and others have demonstrated that ChatGPT produces outputs that are demonstrably biased in multiple ways, including gender and race.
Confidentiality. Samsung employees are reported to have fed sensitive company source code into ChatGPT in order to check and optimise it, while another employee gave it a recording of an internal meeting to convert into a presentation. Accordingly, ChatGPT has been banned or restricted by Samsung, Apple, Citigroup, Bank of America, Deutsche Bank, Goldman Sachs, Wells Fargo, and other organisations.
Copyright. According to ChatGPT itself, the content it generates may be protected by copyright but is not owned by the system or by OpenAI; the system does not include citations or attributions to the original sources of the information on which it was trained. Meanwhile, Dow Jones and other news publishers have said OpenAI is using their articles to train ChatGPT without informing or paying them, and the EU has proposed that organisations deploying generative AI systems such as ChatGPT must disclose any copyrighted materials used to train their systems.
Dual/multi-use. As a general-purpose system, ChatGPT can be used more or less however the user wants, and it is being put to many harmful purposes that arguably could have been foreseen. For example, ChatGPT is being used to generate spam and write fake online reviews.
Mis/disinformation. ChatGPT is prone to producing misinformation and disinformation. For example, it was used to spread a false rumour that authorities in Hangzhou, China, would end alternate-day number-plate driving restrictions, and enabled a Chinese man to spread fake news about a fatal train accident, which ended in his arrest. It created fake links about privacy technologist Alexander Hanff to articles that never existed, and falsely claimed German geocoding company OpenCage offers an API to turn a mobile phone number into the location of the phone. ChatGPT also falsely accused law professor Jonathan Turley of sexual harassment. News reliability service NewsGuard found that ChatGPT generated misinformation and hoaxes 80% of the time when prompted to do so using GPT-3.5, and 100% of the time using GPT-4. NewsGuard also found that ChatGPT is more likely to produce disinformation in Chinese than in English. In addition, research shows that ChatGPT can write such convincing fake research-paper abstracts that scientists are often unable to spot them. The model's tendency to produce plausible-sounding falsehoods led Princeton computer science professor Arvind Narayanan to call it 'the greatest bullshit generator ever.'
Privacy. Given the lack of information about what data it was trained on, concerns exist about ChatGPT's impact on privacy. In March 2023, ChatGPT was temporarily blocked by Italy's privacy regulator for potentially infringing the privacy rights of Italians, before OpenAI agreed to introduce a number of transparency and age verification measures. Germany, France, and Spain have also launched investigations, while Japan has issued a warning. Canada's privacy regulator opened an investigation into OpenAI on the basis of a local complaint.
Safety. While ChatGPT has safeguards in place to stop requests generating inappropriate, abusive or offensive content, numerous examples show these can be overridden relatively easily. In March 2023, Vice journalist Steph Maj Swanson easily manipulated ChatGPT into producing BDSM role-play scenarios and describing sex acts involving children and animals.
Security. ChatGPT has been shown to be capable of producing malware, and has been manipulated by scammers to develop phishing attacks. It has also been used to generate bug reports for crypto white-hat platform Immunefi, which has since banned them on the basis that 'the output is never accurate or relevant.'
Harms
Academia. There are concerns that ChatGPT enables people without significant domain expertise to write scientific papers, thereby raising questions about the future of research production and the nature of authorship. In one instance, a professor falsely accused his entire class of using ChatGPT to write essays after the system had told him it was responsible for their submissions.
Compliance. Major organisations, including Citigroup, JPMorgan Chase, Wells Fargo, and Goldman Sachs, have banned and blocked the use of ChatGPT on concerns that its use by employees could result in data and information leaks, potentially breaching government regulations.
Jobs. While there is some evidence that ChatGPT benefits lower-skilled workers such as customer service agents, it is also seen to pose a risk to jobs across a swathe of industries, from law, finance, and marketing to academia, journalism, and software development.
Worker pay. In January 2023, Time journalist Billy Perrigo revealed that OpenAI used Kenyan workers paid less than USD 2 an hour to de-toxify ChatGPT. The data labelers were provided by Sama AI, the self-styled 'ethical AI' company found in an earlier investigation to have suppressed unionisation attempts in the country. According to Perrigo, 'the work's traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.'
Education. Numerous examples show ChatGPT can generate plausible essays across a wide range of topics, sparking concerns that the system may encourage cheating and plagiarism, and that student writing assignments could become obsolete. Citing such fears, the NYC education department blocked access to ChatGPT, only later to rescind its decision. Oxford and Cambridge universities have also banned ChatGPT. OpenAI has said it is trying to counter the risk of plagiarism by 'watermarking' the bot's output, making machine-generated text easier to spot.
Environment. OpenAI has not revealed the environmental impacts of its products, but these are considered to be substantial. A study (pdf) by University of California researchers estimates that ChatGPT needs to 'drink' a 500ml bottle of water for a simple conversation of roughly 20-50 questions and answers.
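As a back-of-the-envelope illustration of what the study's headline figure implies per response (the function and its parameters are ours, not the researchers'):

```python
def water_per_response_ml(bottle_ml: float = 500, low_q: int = 20, high_q: int = 50):
    """Return the (min, max) millilitres of water implied per response,
    given ~500ml for a conversation of roughly 20-50 questions and answers."""
    return bottle_ml / high_q, bottle_ml / low_q

low, high = water_per_response_ml()
print(f"Estimated water per response: {low:.0f}-{high:.0f} ml")  # 10-25 ml
```

In other words, under the study's assumptions each individual answer costs on the order of 10-25 millilitres of fresh water, mostly for data-centre cooling.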
Social cohesion. ChatGPT's ability to generate plausible-sounding misinformation, disinformation and hate speech is seen to have potentially serious effects on the well-being of communities, and on democracy. Risk analysts Eurasia Group rate misinformation's impact on society as the third highest global risk in its Top Risks 2023 report (pdf).
Transparency
OpenAI may have been open about some of ChatGPT's limitations, but its governance and algorithms are, and appear likely to remain, carefully concealed, commercially driven black boxes. Little public information about how ChatGPT works is available, and access to its data, model, and algorithms is jealously guarded.
Furthermore, as Italy's privacy regulator pointed out, it appears to have been made deliberately difficult to find privacy-related information, to lodge complaints, and to request the removal of personal information.
ChatGPT's visibility has also led to accusations that OpenAI is highly opaque about many aspects of its corporate governance, from its funding and financial performance to its supply chain management, including the amount of energy required to train and run its systems.
Operator: OpenAI
Developer: OpenAI
Country: USA; Global
Sector: Multiple
Purpose: Provide information, communicate
Technology: Chatbot; NLP/text analysis; Neural network; Deep learning; Machine learning; Reinforcement learning
Issue: Accuracy/reliability; Bias/discrimination; Confidentiality; Copyright; Dual/multi-use; Employment; Mis/disinformation; Privacy; Safety; Security
Transparency: Governance; Black box; Complaints/appeals; Marketing; Privacy
System
OpenAI (Nov 2022). ChatGPT: Optimizing Language Models for Dialogue
OpenAI (Nov 2022). Introducing ChatGPT
OpenAI (Feb 2023). How Should AI Systems Behave, and Who Should Decide?
OpenAI (April 2023). New ways to manage your data in ChatGPT
Legal, regulatory
Office of the Privacy Commissioner of Canada (2023). OPC launches investigation into ChatGPT
Garante per la Protezione dei Dati Personali (2023). ChatGPT: OpenAI reinstates service in Italy with enhanced transparency and rights for European users and non-users
Garante per la Protezione dei Dati Personali (2023). ChatGPT: Italian SA to lift temporary limitation if OpenAI implements measures
Research, advocacy
EPIC (2023). Generative Harms (pdf)
Derner E., Batistic K. (2023). Beyond the Safeguards: Exploring the Security Risks of ChatGPT (pdf)
Shaping AI - University of Warwick (2023). Shifting AI controversies (pdf)
Brynjolfsson E., Li D., Raymond L.R. (2023). Generative AI at Work
Li P., Yang J., Islam M.A., Ren S. (2023). Making AI Less 'Thirsty': Uncovering and Addressing the Secret Water Footprint of AI Models (pdf)
Europol (2023). ChatGPT - the impact of Large Language Models on Law Enforcement
NewsGuard (2023). Despite OpenAI's Promises, the Company's New AI Tool Produces Misinformation More Frequently, and More Persuasively, than its Predecessor
NewsGuard (2023). The Next Great Misinformation Superspreader: How ChatGPT Could Spread Toxic Misinformation At Unprecedented Scale
Gao C.A., Howard F.M., Markov N.S., Dyer E.C., Ramesh S., Luo Y., Pearson A.T. (2022). Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers
News, commentary, analysis
https://www.theverge.com/23488017/openai-chatbot-chatgpt-ai-examples-web-demo
https://venturebeat.com/ai/openai-debuts-chatgpt-and-gpt-3-5-series-as-gpt-4-rumors-fly/
https://www.nytimes.com/2022/12/10/technology/ai-chat-bot-chatgpt.html
https://www.economist.com/business/2022/12/08/how-good-is-chatgpt
https://theintercept.com/2022/12/08/openai-chatgpt-ai-bias-ethics/
https://ny.chalkbeat.org/2023/1/3/23537987/nyc-schools-ban-chatgpt-writing-artificial-intelligence
Page info
Type: System
Published: December 2022
Last updated: May 2023