GPT-3 large language model

Released: May 2020

GPT-3 (Generative Pre-trained Transformer 3) is a large language model that uses deep learning to generate natural, human-like text from written prompts.

Developed by OpenAI, GPT-3 has 175 billion parameters, roughly ten times more than any previous language model. It was released in beta in May 2020.
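
To illustrate how the model is used in practice, the short Python sketch below prompts GPT-3 through OpenAI's API, using the pre-1.0 openai Python client and its Completion endpoint. The engine name, prompt, and parameter values are illustrative assumptions rather than details taken from this page.

import openai

# Placeholder key; real usage requires an OpenAI API key
openai.api_key = "YOUR_API_KEY"

# Request a completion from the 175-billion-parameter GPT-3 engine
# (engine name and parameters assumed for illustration)
response = openai.Completion.create(
    engine="davinci",
    prompt="Explain GPT-3 in one sentence:",
    max_tokens=60,      # limit the length of the generated text
    temperature=0.7,    # higher values yield more varied output
)

# The generated text is returned in the first choice
print(response.choices[0].text.strip())

The model simply continues the prompt, returning text whose style and content track the input; it is this behaviour that the reactions below respond to.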

Reaction

GPT-3 has drawn widespread praise from technology professionals, commentators, scientists, and philosophers for the quality of the text it produces and its ability to synthesise massive amounts of content.

MIT Technology Review described it as 'shockingly good' and able to generate 'amazing human-like text on demand'. It is, according to the National Law Review, 'a glimpse of the future'. 

But New York Times technology columnist Farhad Manjoo described GPT-3 as 'at once amazing, spooky, humbling, and more than a little terrifying,' and NYU professor Gary Marcus was dismissive: 'a fluent spouter of bullshit, but even with 175 billion parameters and 450 gigabytes of input data, it's not a reliable interpreter of the world.'

Even OpenAI CEO Sam Altman admitted the tool has 'serious weaknesses and sometimes makes very silly mistakes.'

Limitations

Risks

Transparency

In the research paper accompanying GPT-3's release, OpenAI researchers acknowledge the model's limitations and potential harms, including the risk of misuse for misinformation, spam, and fraud, the biases it inherits from its training data, and the energy cost of training a system of its size.

Operator: OpenAI; Microsoft
Developer: OpenAI

Country: USA

Sector: Technology

Purpose: Generate text

Technology: Large language model (LLM); NLP/text analysis; Neural networks; Deep learning
Issue: Business model; Competition/price fixing; Accuracy/reliability; Bias/discrimination - multiple; Dual/multi-use; Safety; Environment - emissions

Transparency: Governance; Black box

System

Research, audits, investigations, inquiries, litigation

News, commentary, analysis

Page info
Type: System
Published: January 2023
Last updated: February 2023