GPT-4 large language model

Released: March 2023

GPT-4 (Generative Pre-trained Transformer 4) is a large language model that uses deep learning to generate natural, human-like language from text and image prompts. 

Developed by OpenAI and released in March 2023, GPT-4 underpins Microsoft's Bing Chat (renamed Copilot) and is available on a subscription basis from OpenAI as ChatGPT Plus.
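GPT-4 is also accessible programmatically through OpenAI's API. As an illustration of the text-and-image prompting described above, the following is a minimal sketch using OpenAI's Python SDK (v1.x); the model identifier, image URL, and prompt are placeholder assumptions (the vision-enabled variant has been exposed under identifiers such as 'gpt-4-vision-preview'), not a definitive implementation.

```python
# Minimal sketch: a multimodal (text + image) prompt to GPT-4 via the
# OpenAI Python SDK (v1.x). Model name and image URL are illustrative;
# requires the OPENAI_API_KEY environment variable to be set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # vision-enabled GPT-4 variant (assumed identifier)
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/sample.jpg"},
                },
            ],
        }
    ],
    max_tokens=300,
)

# The generated natural-language description is returned as a chat message.
print(response.choices[0].message.content)
```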

GPT-4 has been praised as a major improvement on its predecessor GPT-3 in terms of its multi-modal and basic reasoning capabilities, the latter of which is seen to benefit from the model's larger size and parameter count. 

In its technical report, OpenAI claims GPT-4 is 82 percent less likely than GPT-3.5 to respond to requests for unsafe content, and 60 percent less likely to fabricate information, or 'hallucinate'. 

Operator: OpenAI
Developer: OpenAI

Country: USA; Global

Sector: Multiple

Purpose: Generate text

Technology: Large language model (LLM); NLP/text analysis; Neural network; Deep learning; Machine learning; Reinforcement learning 
Issue: Accuracy/reliability; Bias/discrimination; Employment; Impersonation; Mis/disinformation; Privacy; Safety; Security; Lethal autonomous weapons

Transparency: Governance; Black box

Risks and harms

GPT-4 generates biased, false, and offensive content, and is vulnerable to prompt injection attacks and 'jailbreaks'. Its answers are also reported to have declined significantly in quality, which some attribute to an effort to make the model faster, possibly by splitting it into multiple smaller models trained for specific tasks, or to its 'cannibalisation' of AI-generated content on the web.

The risks outlined in its System Card (pdf) are: hallucination; harmful content; harms of representation, allocation, and quality of service; disinformation and influence operations; proliferation of conventional and unconventional weapons; privacy; cybersecurity; potential for risky emergent behaviours; interactions with other systems; economic impacts; acceleration; and overreliance.

Transparency

By withholding its training data, code, model weights, and energy costs, and providing little or no information about them, OpenAI has come in for strong criticism, especially from the research community. OpenAI says this is due to fears over safety; others see it as evidence of a commercial imperative, and an attempt to reduce legal liability.

NYU professor Gary Marcus argues GPT-4's closed nature 'puts all of us in an extremely poor position to predict what GPT-4's consequences will be for society, if we have no idea of what is in the training set and no way of anticipating which problems it will work on and which it will not. One more giant step for hype, but not necessarily a giant step for science, AGI, or humanity.'

Legal, regulatory

Page info
Type: System
Published: April 2023
Last updated: November 2023