GPT-3 large language model
Released: May 2020
GPT-3 (Generative Pre-trained Transformer 3) is a large language model that uses deep learning to generate natural, human-like language from text prompts.
Developed by OpenAI and released as a beta in May 2020, GPT-3 has 175 billion parameters, over ten times more than any previous language model. ChatGPT was later built on a GPT-3.5 model.
GPT-3 won praise from technology professionals, commentators, scientists, and philosophers for the quality of the text it produces and for its ability to synthesise massive amounts of content. MIT Technology Review described it as 'shockingly good' and able to generate 'amazing human-like text on demand'. It is, according to the National Law Review, 'a glimpse of the future'.
But New York Times technology columnist Farhad Manjoo described GPT-3 as 'at once amazing, spooky, humbling, and more than a little terrifying,' and NYU professor Gary Marcus dismissed it as 'a fluent spouter of bullshit, but even with 175 billion parameters and 450 gigabytes of input data, it's not a reliable interpreter of the world.'
OpenAI CEO Sam Altman admitted the tool has 'serious weaknesses and sometimes makes very silly mistakes.'
OpenAI laid the ground for GPT-3 by publishing a research paper, 'Language Models are Few-Shot Learners', which highlighted a number of the risks and harms associated with its model.
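The 'few-shot' setup the paper's title refers to means conditioning the model on a handful of worked examples placed directly in the prompt, rather than updating its weights. A minimal sketch of how such a prompt might be assembled (the sentiment task, labels, and helper function below are illustrative assumptions, not taken from the paper):

```python
# Assemble a few-shot prompt: the model is shown a few labelled
# demonstrations in-context, then asked to complete the final,
# unlabelled query. The sentiment task here is an illustrative example.

def build_few_shot_prompt(examples, query):
    """Concatenate labelled demonstrations, then the unlabelled query."""
    demos = "\n".join(
        f"Review: {text}\nSentiment: {label}" for text, label in examples
    )
    return f"{demos}\nReview: {query}\nSentiment:"

examples = [
    ("A delightful, moving film.", "positive"),
    ("Two hours I will never get back.", "negative"),
]
prompt = build_few_shot_prompt(examples, "An instant classic.")
print(prompt)
```

The completion the model writes after the final 'Sentiment:' is taken as its answer; with zero examples the same format is called zero-shot.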
However, the company's decision to grant access to the model to a few select researchers rather than release GPT-3's algorithm to the general public proved controversial, with some seeing it as 'unscientific' and 'opaque', whilst others saw it as necessary given its scale and potential for serious and widespread harm.
Equally, a number of commentators suggested that OpenAI's decision to operate a de facto black box likely reflected its inability to fully understand its own model, as well as a commercial desire to protect its IP.
In January 2023, Time journalist Billy Perrigo revealed that OpenAI had used Kenyan workers, employed by outsourcing firm Sama and paid less than USD 2 an hour, to de-toxify GPT-3 and ChatGPT. According to Perrigo, 'the work's traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.'
Published: January 2023
Last updated: April 2023