GPT-2 large language model
Released: February 2019
GPT-2 is a large-scale, general-purpose language model that generates text. Developed by OpenAI, GPT-2 was released as a limited beta in February 2019. The complete version, comprising 1.5 billion parameters, was published in November 2019.
Created to perform a variety of tasks, such as completing text prompts, summarising text, and answering questions, GPT-2 was trained on WebText, a dataset of 8 million web pages scraped from the internet.
On its release, researchers, practitioners and commentators generally praised GPT-2's ability to generate realistic, plausible writing and translate text. Its flexibility was also welcomed.
Some were less enthusiastic, pointing to the model's tendency to generate gibberish and incoherent long-form text, and to answer questions inaccurately.
Others raised concerns about potential misuses of GPT-2, including its capacity to generate convincing misinformation and disinformation, its ability to 'corrupt' people even when they know the advice comes from an AI system, and the huge amounts of energy required to train and run GPT-2 and similar models.
OpenAI said it would keep GPT-2's dataset, code, and model weights private 'due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale'.
The decision proved contentious, with some supporting it on the grounds that GPT-2 was too dangerous to release into the wild, and others criticising the withholding as contrary to open research norms.
OpenAI later published the code on GitHub.
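With the weights and code public, the model can be run locally. The sketch below is illustrative only and assumes the third-party Hugging Face transformers and PyTorch packages (not part of OpenAI's original release), using the smallest publicly hosted 'gpt2' checkpoint to complete a text prompt.

from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Download the publicly released GPT-2 weights (smallest checkpoint shown here)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Encode an example prompt and sample a continuation of up to 40 new tokens
inputs = tokenizer("The release of GPT-2 in 2019", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Because decoding is sampled, repeated runs produce different continuations, which is one reason output quality was reported as uneven.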
Operator: OpenAI; Microsoft; Crisis Text Line; Latitude
Purpose: Improve general language models
Technology: Large language model (LLM); NLP/text analysis; Neural networks; Deep learning; Machine learning
Issue: Accuracy/reliability; Dual/multi-use; Mis/disinformation; Environment; Marketing
Transparency: Governance; Black box
OpenAI (2019). GPT-2: 1.5B release
OpenAI (2019). Better Language Models and Their Implications
Radford A., Wu J., Child R., Luan D., Amodei D., Sutskever I. (2019). Language Models are Unsupervised Multitask Learners (pdf)
Leib M., Köbis N.C., Rilke R.M., Hagens M., Irlenbusch B. (2021). The corruptive force of AI-generated advice