GPT-2 large language model
GPT-2 is a large-scale, general-purpose language model that generates text. Developed by OpenAI, GPT-2 was initially released in limited form in February 2019. The complete version, comprising 1.5 billion parameters, was published in November 2019.
Designed to perform a variety of tasks, such as completing prompts, summarising text, and answering questions, GPT-2 was trained on WebText, a dataset of 8 million web pages scraped from the internet.
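As an illustration only, the sketch below shows how the released model can be prompted to complete text. It assumes the Hugging Face transformers library and its 'gpt2-xl' checkpoint of the 1.5 billion parameter model; the library, checkpoint name, prompt, and sampling settings are assumptions of this example, not part of OpenAI's original release.

from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the 1.5B-parameter GPT-2 ("gpt2-xl" is the Hugging Face name for it).
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("gpt2-xl")

# Encode a prompt and sample a continuation; the sampling settings below
# are illustrative defaults, not the ones OpenAI used in its demos.
inputs = tokenizer("AI-generated text can be", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))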
Reaction
On its release, researchers, practitioners, and commentators generally praised GPT-2's ability to generate realistic, plausible writing and to translate text; its flexibility was also welcomed.
Some were less enthusiastic, pointing to the model's tendency to generate gibberish and incoherent long-form text, and to answer questions inaccurately.
Others raised concerns about GPT-2, including its potential misuse to generate convincing misinformation and disinformation, its ability to 'corrupt' people even when they are aware the source of advice is an AI, and the huge amounts of energy required to train and run it and similar models.
Transparency
OpenAI said that it would keep GPT-2's dataset, code, and model weights private 'due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale'.
The decision proved contentious. Some supported it, arguing that GPT-2 was too dangerous to release into the wild.
Others countered that the threats posed by GPT-2 were overstated, that the approach was opaque, and that it would not allow the software to be properly tested.
OpenAI later published the code on GitHub.
Operator: OpenAI; Microsoft; Crisis Text Line; Latitude
Developer: OpenAI
Country: USA
Sector: Technology
Purpose: Improve general language models
Technology: Large language model (LLM); NLP/text analysis; Neural networks; Deep learning; Machine learning
Issue: Accuracy/reliability; Dual/multi-use; Mis/disinformation; Environment; Marketing
Transparency: Governance; Black box
System
OpenAI (2019). GPT-2: 1.5B release
OpenAI (2019). Better Language Models and Their Implications
Radford A., Wu J., Child R., Luan D., Amodei D., Sutskever I. (2019). Language Models are Unsupervised Multitask Learners (pdf)
Research, advocacy
Leib M., Köbis N.C., Rilke R.M., Hagens M., Irlenbusch B. (2021). The corruptive force of AI-generated advice
Köbis N., Mossink L.D. (2021). Artificial intelligence versus Maya Angelou: Experimental evidence that people cannot differentiate AI-generated from human-written poetry
Strubell E., Ganesh A., McCallum A. (2019). Energy and Policy Considerations for Deep Learning in NLP
News, commentary, analysis
https://www.theverge.com/2019/2/21/18234500/ai-ethics-debate-researchers-harmful-programs-openai
https://towardsdatascience.com/openais-gpt-2-the-model-the-hype-and-the-controversy-1109f4bfd5e8
https://thegradient.pub/openai-please-open-source-your-language-model/
https://www.wired.com/story/ai-text-generator-too-dangerous-to-make-public/
Page info
Type: System
Published: November 2022
Last updated: March 2023