GPT-3 large language model

Released: May 2020

GPT-3 (Generative Pre-trained Transformer 3) is a large language model that uses deep learning to generate natural, human-like language from text prompts. 

Developed by OpenAI and released as a beta in May 2020, GPT-3 has 175 billion parameters, over ten times more than the next-largest language model at the time. ChatGPT was later built on GPT-3.5, a successor to the GPT-3 series.
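GPT-3 is accessed as a hosted service rather than as downloadable weights. The sketch below shows, in broad strokes, how a prompt is sent to OpenAI's completions REST endpoint; the model name, sampling parameters, and helper functions are illustrative assumptions, not OpenAI's official client library.

```python
import json
import os
import urllib.request

# OpenAI's completions REST endpoint (stable since the GPT-3 beta).
API_URL = "https://api.openai.com/v1/completions"


def build_request(prompt, model="davinci", max_tokens=64, temperature=0.7):
    """Assemble the JSON payload the completions endpoint expects.
    Parameter values here are illustrative defaults."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }


def complete(prompt, api_key=None):
    """Send the prompt and return the generated text.
    Returns None if no API key is available (no network call is made)."""
    api_key = api_key or os.environ.get("OPENAI_API_KEY", "")
    if not api_key:
        return None
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The endpoint returns a list of completions; take the first.
    return body["choices"][0]["text"]


payload = build_request("Summarise: GPT-3 is a large language model.")
print(payload["model"])
```

The point of the sketch is that the model itself stays on OpenAI's servers: callers send text and receive text back, which is central to the access and transparency debates described below.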

GPT-3 won praise from technology professionals, commentators, scientists, and philosophers for the quality of text it produces, and its ability to synthesise massive amounts of content. MIT Technology Review described it as 'shockingly good' and able to generate 'amazing human-like text on demand'. It is, according to the National Law Review, 'a glimpse of the future'. 

But New York Times technology columnist Farhad Manjoo described GPT-3 as 'at once amazing, spooky, humbling, and more than a little terrifying', and NYU professor Gary Marcus dismissed it as 'a fluent spouter of bullshit, but even with 175 billion parameters and 450 gigabytes of input data, it’s not a reliable interpreter of the world.'

OpenAI CEO Sam Altman admitted the tool has 'serious weaknesses and sometimes makes very silly mistakes.'

Operator: OpenAI; Microsoft
Developer: OpenAI

Country: USA

Sector: Multiple

Purpose: Generate text

Technology: Large language model (LLM); NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Governance; Accuracy/reliability; Bias/discrimination - multiple; Dual/multi-use; Safety; Employment; Environment

Transparency: Governance; Black box

Risks and harms


GPT-3 is seen as useful for summarising or simplifying text. Yet it has been shown to make basic errors, lack common sense and empathy, and repeat or contradict itself in lengthy passages of text.


GPT-3 has been shown to emit sexist and racist language, including anti-Muslim and anti-Semitic screeds. 


GPT-3 was trained largely on Common Crawl data, meaning it incorporates web pages, posts, and copyrighted articles and books from the BBC, The New York Times, and others, according to TechCrunch.


GPT-3 is prone to 'hallucinate', inventing realistic-looking but false information based on patterns in its training data. 


GPT-3 has been shown to be capable of spouting offensive and hateful content. 


Large language models such as GPT-3 have been criticised for their impact on carbon emissions, including by Google researchers (who subsequently lost their jobs).

Business model

In September 2020, Microsoft licensed 'exclusive' use of GPT-3's underlying code, allowing it to embed, repurpose, and modify the model as it pleases.

The deal was seen as further confirmation of OpenAI's move away from its non-profit status and declared mission to 'ensure that artificial general intelligence benefits all of humanity' towards a more commercial model. 

It also reinforced concerns about Microsoft's potential sway over OpenAI, and the concentration of power amongst a few large technology companies.


In January 2023, Time journalist Billy Perrigo revealed that OpenAI had used Kenyan workers, employed via the outsourcing firm Sama and paid less than USD 2 an hour, to de-toxify GPT-3 and OpenAI's ChatGPT.

According to Perrigo, 'the work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.'


OpenAI laid the ground for GPT-3 by publishing a research paper, 'Language Models are Few-Shot Learners', which highlighted a number of the risks and harms associated with its model. 
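The paper's title refers to few-shot prompting: rather than being fine-tuned for a task, GPT-3 is conditioned on a handful of worked examples placed directly in the prompt. A minimal sketch of assembling such a prompt follows; the helper function is hypothetical, though the translation examples echo those used in the paper itself.

```python
def few_shot_prompt(task, examples, query):
    """Build a few-shot prompt: a task description, a handful of
    worked input/output examples, then the new query for the model
    to complete."""
    lines = [task, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    # End with the unanswered query; the model continues from here.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)


prompt = few_shot_prompt(
    "Translate English to French.",
    [("cheese", "fromage"), ("sea otter", "loutre de mer")],
    "peppermint",
)
print(prompt)
```

The prompt ends at 'Output:', so the model's continuation of the text is, in effect, its answer to the final query.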

However, the company's decision to grant access to a few select researchers rather than release GPT-3's algorithm to the general public proved controversial: some saw it as 'unscientific' and 'opaque', whilst others saw it as necessary given the model's scale and potential for serious and widespread harm.

Equally, a number of commentators suggested OpenAI's decision to operate a de facto black box likely reflected its inability to fully understand its own model, as well as a commercial desire to protect its intellectual property.

Research, advocacy

Page info
Type: System
Published: January 2023
Last updated: April 2023