ChatGPT chatbot
ChatGPT (or 'Chat Generative Pre-trained Transformer') is an AI-powered chatbot capable of understanding natural human language and generating detailed, human-like answers to questions through a conversational interface.
Built on a version of OpenAI's GPT-3 large language model, ChatGPT was released as a prototype for general public use in November 2022. Voice chat and image generation (via DALL-E 3) capabilities were incorporated in September 2023.
ChatGPT databank 🔢
Operator: OpenAI
Developer: OpenAI
Country: USA; Global
Sector: Multiple
Purpose: Generate text
Technology: Chatbot; NLP/text analysis; Neural network; Deep learning; Machine learning; Reinforcement learning
Issue: Accuracy/reliability; Anthropomorphism; Bias/discrimination; Confidentiality; Copyright; Environment; Employment; Mis/disinformation; Privacy; Safety; Security
Transparency: Governance; Black box; Complaints/appeals; Marketing; Privacy
Risks and harms 🛑
ChatGPT is seen to pose a wide range of risks and harms:
Accuracy, reliability
ChatGPT is prone to generating factual inaccuracies, including those known as 'hallucinations' (plausible-sounding but inaccurate information, such as fake references or sources), and can be inconsistent and unreliable.
For example, the chatbot:
Wrongly claimed the death of privacy technologist Alexander Hanff.
Invented false legal case citations in a lawsuit involving Colombian airline Avianca.
Produced an error-strewn legal decision in the name of a Brazilian judge.
Generated two fake legal cases in a Canadian divorce case, resulting in a lawyer being reprimanded.
Generated false citations in a scientific paper about millipedes in the name of Danish academic Henrik Enghoff.
Made up research claiming guns are not harmful to kids.
Invented articles and bylines in the name of The Guardian newspaper.
Made up fake information when asked about breast cancer screening.
Failed to recommend appropriate cancer treatment in one-third of cases.
Provided inaccurate medication query responses.
Incorrectly diagnosed 8 out of 10 pediatric case studies.
Overwhelmed moderators of coding Q&A site Stack Overflow with inaccurate, low-quality content.
Wrongly answered over half of the software engineering questions it received, according to Purdue University researchers.
Was found to generate significantly less accurate computer code over time.
Anthropomorphism
ChatGPT's ability to generate plausible-looking, human-like information and advice runs the risk of convincing users that it is somehow human and/or 'sentient'.
A US professor falsely accused his whole class of using ChatGPT to write essays after believing the system when it told him it was responsible for their submissions.
Lawyer Steven Schwartz believed that six fake cases ChatGPT had cited as relevant to his legal case against Colombian airline Avianca were real.
Bias, discrimination
Despite its built-in safeguards, researchers and others have demonstrated that ChatGPT produces outputs that are explicitly and implicitly biased in multiple ways.
AI chatbots display racial prejudice despite anti-racism training, according to researchers.
ChatGPT was found to display racial bias against job candidates, according to a Bloomberg investigation.
ChatGPT reproduces gender bias in job recommendation letters, according to University of California, Los Angeles researchers.
University of East Anglia researchers discovered that ChatGPT demonstrates 'significant' and 'systemic' left-wing bias across the political spectrum in Brazil, the UK, and USA.
GPT-3.5 and GPT-4, the large language models that power versions of ChatGPT, produce responses that strongly reinforce and amplify gender bias, according to Princeton researchers.
Confidentiality
Unless users expressly opt out, OpenAI uses the information and data they input to ChatGPT to train its models, thereby potentially or actually violating their personal confidentiality, or the confidentiality of their employers.
Perth's South Metropolitan Health Service said doctors had been using ChatGPT to write medical notes, which were then being uploaded to patient record systems, thereby potentially compromising patient confidentiality.
Australian Research Council peer reviewers were found to have been using ChatGPT to assess grant applications, thereby risking loss of confidentiality.
Samsung employees fed sensitive information about company meetings and source code into ChatGPT on several occasions, resulting in the company banning third-party generative AI services.
ChatGPT has been variously banned, blocked, or restricted by Apple, Citigroup, Bank of America, Deutsche Bank, Goldman Sachs, Wells Fargo, and many other organisations concerned about their employees sharing sensitive information with the chatbot.
Data privacy
Concerns exist about ChatGPT's approach to and impact on personal privacy.
ChatGPT was found to have allowed some users to see other users' conversations and personal information, suggesting OpenAI had access to their conversations.
ETH Zurich researchers showed that large language models such as GPT-4, which powers ChatGPT, can identify an individual's age, location, gender and income with up to 85 per cent accuracy simply by analysing their posts on Reddit.
ChatGPT can be used to identify individual internet users, according to Google DeepMind and other researchers.
Italy's privacy regulator banned ChatGPT for potentially infringing the privacy rights of Italians.
Poland's privacy regulator announced it was launching an investigation into OpenAI and ChatGPT following a complaint.
Canada's privacy regulator opened an investigation into OpenAI and ChatGPT on the basis of a complaint.
Germany, France, and Spain launched regulatory investigations into ChatGPT.
Japan's privacy watchdog issued a warning on the collection of user and third-party data.
US law firm Clarkson filed a lawsuit against OpenAI and Microsoft for 'stealing' personal information to create ChatGPT.
Software engineers sued OpenAI and Microsoft for illegally feeding their AI models with their personal information and professional expertise.
Employment
ChatGPT is seen to pose a risk to jobs across a swathe of industries and occupations, from law, finance, and marketing to academia, journalism, and software programming.
Evidence exists of customer service agents, marketing copywriters, Kenyan student ghostwriters and Chinese video game illustrators being 'displaced' or laid off, amongst others.
OpenAI's development of ChatGPT involved poorly paid workers in third-world countries with limited rights.
Time journalist Billy Perrigo revealed that OpenAI used Kenyan workers paid less than USD 2 an hour to detoxify ChatGPT.
Environment
OpenAI has not revealed the environmental impacts of its products, which are thought to be substantial.
University of California, Riverside researchers estimate that ChatGPT needs to 'drink' a 500ml bottle of water for a simple conversation of roughly 20-50 questions and answers.
ChatGPT's training emitted approximately 500 metric tons of carbon, according to research by AI community Hugging Face.
IP/copyright
ChatGPT maker OpenAI has been accused by a wide range of individuals and organisations of using copyright-protected material without permission to train the language models that power the chatbot.
2,200+ news and media publishers complained that OpenAI used their articles to train ChatGPT without informing or paying them.
The New York Times sued OpenAI and Microsoft for illegally copying millions of Times articles to train ChatGPT and other services.
Author Jane Friedman discovered five books likely to have been generated by ChatGPT being sold under her name on Amazon. Amazon initially refused to remove them, changing course only after she took her case public.
Authors Mona Awad and Paul Tremblay sued OpenAI for breaching copyright law by training ChatGPT on their novels without permission.
Comedian Sarah Silverman sued OpenAI and Meta for USD 1 billion on the basis that the two companies used 'shadow libraries' to train AI chatbots, in what the lawsuits argued was 'unprecedented open-source software piracy'.
Michael Chabon and a group of other writers sued OpenAI for copying their works without permission in order to teach ChatGPT how to respond to human prompts, arguing that the system can accurately summarise their works and generate text that mimics their styles.
17 authors, including Jonathan Franzen and John Grisham, sued OpenAI for engaging 'in a systematic course of mass-scale copyright infringement'.
Author and journalist Julian Sancton sued OpenAI and Microsoft for copyright abuse.
Omegaverse fan fiction appears to have been used to train OpenAI models.
Misinformation and disinformation
ChatGPT has been shown to produce large volumes of convincing misinformation and disinformation:
Was used to generate and spread a false rumour that authorities in Hangzhou, China, would end alternate-day number-plate driving restrictions.
Enabled a Chinese man to spread fake news about a fatal train accident, which ended in his arrest.
Falsely claimed German geocoding company OpenCage offered an API to turn a mobile phone number into the location of the phone.
GPT-4-powered ChatGPT Plus echoed false news narratives 100 per cent of the time, prompting accusations that it was a 'disinformation engine'.
Allegedly powered Fox8, an autonomous botnet using fake personas to promote crypto/blockchain/NFT content.
Was found to be more likely to produce disinformation in Chinese than English.
Cheating and plagiarism
ChatGPT has fueled concerns that students can easily generate high-quality essays and other material, thereby reducing their need to build critical-thinking and problem-solving skills.
New York City's Department of Education banned students and teachers from using ChatGPT due to 'concerns about negative impacts on student learning.'
Politics
ChatGPT's ability to generate plausible-sounding misinformation, disinformation, and hate speech is seen to have potentially serious effects on the well-being of communities and democracy.
A Washington Post investigation discovered that ChatGPT can be used to develop political messaging and campaigns, despite OpenAI's rules to the contrary.
Philadelphia sheriff Rochelle Bilal's team posted fake ChatGPT-generated 'news' stories to her campaign website.
Reputational damage
ChatGPT can harm the reputation of individuals and organisations it generates information about, in some instances resulting in threats to sue OpenAI for defamation.
An Australian mayor said he would sue OpenAI if it did not correct ChatGPT's false claims that he had served time in prison for bribery.
ChatGPT falsely accused law professor Jonathan Turley of sexual harassment, prompting Turley to say that he had been 'defamed by ChatGPT'.
US radio host Mark Walters sued OpenAI after ChatGPT wrongly stated that Walters had been accused of defrauding and embezzling funds from a non-profit organisation.
The US Federal Trade Commission opened an investigation into whether OpenAI and its AI models, including ChatGPT, 'engaged in unfair or deceptive practices relating to risks of harm to consumers, including reputational harm.'
ChatGPT has also been shown to harm the reputation of individuals and organisations using it, including when used naively.
Safety
ChatGPT has safeguards designed to minimise the risk of it being tricked into making errors and to stop it generating inappropriate, abusive, or offensive content. However, some of these safeguards can be overridden relatively easily.
Researchers found that ChatGPT is more likely to generate rude, disrespectful or unreasonable comments when prompted to assume the style of even benign personas.
Vice journalist Steph Maj Swanson manipulated ChatGPT to produce BDSM role-play scenarios and describe sex acts with children and animals.
A driver persuaded a ChatGPT-powered chatbot to agree to sell a new car for USD 1.
Security
ChatGPT has been shown to be vulnerable to malfunction and manipulation, and can easily be made to generate spam and malware. The chatbot:
Experienced a bug that jeopardised ChatGPT users' personal data, including chat histories and payment details.
Wrote malicious code that made databases leak sensitive information.
Wrote 'plausible' phishing emails and malicious code.
Powered content farms spewing low-quality posts and spam in multiple languages.
Generated fake online reviews on Amazon and many other online stores and review sites.
Generated inaccurate and irrelevant bug reports for crypto bug bounty platform Immunefi.
Was tricked into producing malicious code, which could be used to launch cyber attacks.
Was manipulated into having its safety rules bypassed in 'virtually unlimited ways' using character suffixes.
Was tricked into creating cybercrime tools, including phishing emails.
Was used by Chinese, Russian and other nation state hackers to improve cyberattacks.
Transparency 🙈
OpenAI may have been open about some of ChatGPT's limitations, but its governance and algorithms are, and appear likely to remain, carefully concealed, commercially driven black boxes.
Research studies show that ChatGPT is amongst the least open large language models, with little public information available about how it works, and with access to its data, model, and algorithm jealously guarded.
Italy's privacy regulator pointed out that OpenAI appears to have made it deliberately difficult to find privacy-related information, make complaints, and request the removal of personal information.
ChatGPT's visibility has also led to accusations that OpenAI is highly opaque about many aspects of its corporate governance, including its funding, financial performance, supply chain management, and the amount of energy required to train and run its systems.
Page info
Type: System
Published: December 2022
Last updated: March 2024