ChatGPT chatbot

Released: November 2022
Occurred: November 2022-

ChatGPT is an AI chatbot capable of understanding natural human language and generating detailed, human-like written text. It provides information and answers questions through a conversational interface.

Built on OpenAI's GPT-3.5 large language model, ChatGPT (or 'Chat Generative Pre-trained Transformer') was released as a prototype for general public use in November 2022.


ChatGPT's ability to produce cogent, plausible responses on a huge range of topics, together with its ease of use, has been met with widespread acclaim. It has also proved popular; according to OpenAI CEO Sam Altman, the bot attracted over one million registered users in five days.

At the same time, ChatGPT is seen to suffer from several technical limitations, and to pose risks to jobs, education, academic and research organisations, and social cohesion.


  • Accuracy/reliability: ChatGPT is prone to factual inaccuracies. Coding Q&A site Stack Overflow banned ChatGPT-generated answers on its platform on the basis that 'The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce', and that 'the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers.' NYU professor Gary Marcus calls ChatGPT's production of errors 'discomprehension', whilst singer Nick Cave described ChatGPT's attempt at his lyrics as 'a grotesque mockery of what it is to be human.' The Guardian points out that the system's accuracy is not helped by its knowledge base ending in 2021, rendering some queries and searches useless.

  • Mis/disinformation: Research shows that ChatGPT can write fake research-paper abstracts so convincing that scientists are often unable to spot them. The model's tendency to produce plausible-sounding falsehoods led Princeton computer science professor Arvind Narayanan to call it 'the greatest bullshit generator ever' and warn that 'using ChatGPT in its current form would be a bad idea for applications like education or answering health questions.'

  • Safety: While ChatGPT has safeguards in place to stop requests generating inappropriate, abusive or offensive content, numerous examples show these can be overridden relatively easily.

  • Bias/discrimination: Despite built-in safeguards, ChatGPT produces outputs that are racially and gender biased.

  • Security: ChatGPT has been shown to be capable of producing malware, and of being manipulated by scammers to develop phishing attacks.


  • Employment: ChatGPT is seen to pose a potential risk to jobs in the creative industries, as well as to skilled knowledge workers such as academics, journalists, and software programmers.

  • Education: Numerous examples show ChatGPT can generate plausible essays across a wide range of topics, sparking concerns that the system may encourage cheating and plagiarism, and that student writing assignments could become obsolete. Citing such fears, the NYC education department blocked access to ChatGPT. In response, OpenAI said it is trying to counter the risk of plagiarism by 'watermarking' the bot's output to make plagiarism easier to spot.

  • Academia: There are concerns that ChatGPT enables people without significant domain expertise to write scientific papers, thereby raising questions about the future of research production and the nature of authorship.

  • Social cohesion: ChatGPT's ability to generate plausible-sounding misinformation, disinformation and hate speech is seen to have potentially serious effects on the well-being of communities, and on democracy. Risk analysts Eurasia Group rate misinformation's impact on society as the third highest global risk in their Top Risks 2023 report.


While OpenAI has been open about some of ChatGPT's limitations, its governance and algorithms are, and appear likely to remain, carefully concealed black boxes.

Supply chain

In January 2023, Time journalist Billy Perrigo revealed that OpenAI used Kenyan workers paid less than USD 2 an hour to detoxify ChatGPT. The data labellers were provided by Sama AI, the self-styled 'ethical AI' company found in an earlier investigation to have suppressed attempts by workers in the African country to unionise.

According to Perrigo, 'the work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.'

Operator: OpenAI

Country: USA; Global

Sector: Technology

Purpose: Optimise language models for dialogue

Technology: Chatbot; NLP/text analysis

Issue: Accuracy/reliability; Bias/discrimination; Mis/disinformation; Safety; Security; Employment

Transparency: Governance; Black box


Research, audits, investigations, inquiries, litigation

News, commentary, analysis

Page info
Published: December 2022
Last updated: January 2023