GPT-4 large language model

GPT-4 (Generative Pre-trained Transformer 4) is a large language model that uses deep learning to generate natural, human-like language from text and image prompts. 

Developed by OpenAI and released in March 2023, GPT-4 underpins Microsoft's Bing Chat (renamed Copilot) and is available on a subscription basis from OpenAI as ChatGPT Plus.
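As a concrete illustration of the text-and-image prompting described above, below is a minimal sketch of a chat-style request payload. The field names follow OpenAI's public chat completions API, but the model name and the availability of image inputs for a given variant are assumptions that would need checking against current documentation; no network call is made here.

```python
# Sketch of a multimodal chat request payload for a GPT-4-class model.
# Field names follow OpenAI's chat completions API; whether a given model
# variant accepts image inputs is an assumption, not guaranteed.

def build_request(text_prompt, image_url=None, model="gpt-4"):
    """Assemble a chat-completion request combining text and an optional image."""
    content = [{"type": "text", "text": text_prompt}]
    if image_url:
        # Image prompts are passed as a separate content part alongside the text.
        content.append({"type": "image_url", "image_url": {"url": image_url}})
    return {
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }

request = build_request("Describe this picture.", "https://example.com/cat.png")
```

In this sketch the payload would then be sent to the provider's chat endpoint; the point is simply that text and image prompts travel together in a single user message.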

GPT-4 has been praised as a major improvement on its predecessor GPT-3 in terms of its multimodal and basic reasoning capabilities, the latter of which is seen to benefit from the model's larger parameter count.

In its technical report, OpenAI claims GPT-4 is 82 percent less likely than GPT-3.5 to respond to requests for unsafe content, and 60 percent less likely to fabricate information, or 'hallucinate'.

Operator: OpenAI
Developer: OpenAI

Country: USA; Global

Sector: Multiple

Purpose: Generate text

Technology: Large language model (LLM); NLP/text analysis; Neural network; Deep learning; Machine learning; Reinforcement learning 
Issue: Accuracy/reliability; Bias/discrimination; Employment; Impersonation; Mis/disinformation; Privacy; Safety; Security; Lethal autonomous weapons

Transparency: Governance; Black box

Risks and harms 🛑

GPT-4 has been accused of generating biased, false, and offensive content, of being vulnerable to prompt injection attacks and 'jailbreaks', of causing job losses, and of damaging the environment, among other harms.

The risks outlined in GPT-4's System Card (pdf) are: hallucinations; harmful content; harms of representation, allocation, and quality of service; disinformation and influence operations; proliferation of conventional and unconventional weapons; privacy; cybersecurity; potential for risky emergent behaviors; interactions with other systems; economic impacts; acceleration; and overreliance.

Transparency 🙈

By withholding its training data, code, model details, and energy costs, and providing little or no information about them, OpenAI has come in for strong criticism, especially from the research community. OpenAI attributes this to safety concerns; others see it as evidence of a commercial imperative, and as an attempt to reduce legal liability.

NYU professor Gary Marcus argues GPT-4's closed nature 'puts all of us in an extremely poor position to predict what GPT-4 consequences will be for society, if we have no idea of what is in the training set and no way of anticipating which problems it will work on and which it will not. One more giant step for hype, but not necessarily a giant step for science, AGI, or humanity.'

Legal, regulatory 👩🏼‍⚖️

Page info
Type: System
Published: April 2023
Last updated: May 2024