GPT-4 large language model
Released: March 2023
GPT-4 (Generative Pre-trained Transformer 4) is a large language model that uses deep learning to generate natural, human-like language from text and image prompts.
Developed by OpenAI and released in March 2023, GPT-4 underpins Microsoft's Bing Chat and is available on a subscription basis from OpenAI as part of ChatGPT Plus.
GPT-4 has been praised as a major improvement over its predecessor GPT-3 in terms of its multimodal and basic reasoning capabilities, the latter of which is seen to benefit from the model's larger size and parameter count.
In its technical report, OpenAI says GPT-4 is 82% less likely than GPT-3.5 to respond to requests for unsafe content, and 60% less likely to fabricate information, or 'hallucinate'.
However, like GPT-3, GPT-4 can generate biased, false, and offensive content, and is vulnerable to prompt injection attacks and 'jailbreaks'. These risks, and others, are outlined in its System Card (pdf) as follows:
Safety - harmful content
Disinformation and influence operations
Interactions with other systems
Economic impacts - employment
Representation, allocation, and quality of service
By withholding its training data, code, model architecture, and energy costs, and providing little or no information about them, OpenAI has drawn strong criticism, especially from the research community. OpenAI attributes this to safety concerns; others see it as evidence of a commercial imperative, and an attempt to reduce legal liability.
NYU professor Gary Marcus argues GPT-4's closed nature 'puts all of us in an extremely poor position to predict what GPT-4 consequences will be for society, if we have no idea of what is in the training set and no way of anticipating which problems it will work on and which it will not. One more giant step for hype, but not necessarily a giant step for science, AGI, or humanity.'
Country: USA; Global
Purpose: Generate text
Technology: Large language model (LLM); NLP/text analysis; Neural network; Deep learning; Machine learning; Reinforcement learning
Issue: Accuracy/reliability; Bias/discrimination; Employment; Impersonation; Mis/disinformation; Privacy; Safety; Security; Lethal autonomous weapons
Transparency: Governance; Black box
Center for AI and Digital Policy (2023). FTC Complaint (pdf)
Future of Life Institute (2023). Pause Giant AI Experiments: An Open Letter
Goldman Sachs (2023). The Potentially Large Effects of Artificial Intelligence on Economic Growth (pdf)
NewsGuard (2023). Despite OpenAI's Promises, the Company's New AI Tool Produces Misinformation More Frequently, and More Persuasively, than its Predecessor
Eloundou T., Manning S., Mishkin P., Rock D. (2023). GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models (pdf)
Bubeck S., et al. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4
News, commentary, analysis
Published: April 2023