GPT-3 large language model
Released: May 2020
GPT-3 (Generative Pre-trained Transformer 3) is a language model that uses deep learning to generate natural, human-like language from text prompts.
Developed by OpenAI, GPT-3 has 175 billion parameters, ten times more than any previous model. It was released as a beta in May 2020.
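The interaction model is simple: a caller sends a text prompt to the API and receives generated text in return. The following minimal sketch illustrates this, assuming access to the beta API and the legacy (pre-1.0) openai Python client that was current at the time; the API key, prompt, and parameter values are placeholders, not part of the original page.

    import openai

    # Placeholder key; the call signature below follows the legacy
    # (pre-1.0) openai Python library used during the 2020 beta.
    openai.api_key = "YOUR_API_KEY"

    response = openai.Completion.create(
        engine="davinci",      # a GPT-3 base model
        prompt="Summarise the plot of Hamlet in two sentences.",
        max_tokens=64,
        temperature=0.7,
    )

    # The generated continuation of the prompt
    print(response["choices"][0]["text"].strip())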
Reaction
GPT-3 has won widespread praise from technology professionals, commentators, scientists, and philosophers for the quality of the text it produces and its ability to synthesise massive amounts of content.
MIT Technology Review described it as 'shockingly good' and able to generate 'amazing human-like text on demand'. It is, according to the National Law Review, 'a glimpse of the future'.
But New York Times technology columnist Farhad Manjoo described GPT-3 as 'at once amazing, spooky, humbling, and more than a little terrifying,' and NYU professor Gary Marcus dismissed it as 'a fluent spouter of bullshit, but even with 175 billion parameters and 450 gigabytes of input data, it’s not a reliable interpreter of the world.'
Even OpenAI CEO Sam Altman admitted the tool has 'serious weaknesses and sometimes makes very silly mistakes.'
Limitations
Accuracy/reliability: lack of common sense, cut-and-paste jobs
Bias/discrimination: race, religion (anti-Muslim bias)
Safety: AI Dungeon filter
Mis/disinformation: generation of short-form disinformation
Risks
Business model
Competition/price fixing
Dual/multi-use: Koko mental health experiment
Environment: emissions
Transparency
OpenAI researchers point out a number of limitations and potential harms, including bias and misuse risks, in the research paper accompanying GPT-3's release.
Governance
Black box
Operator: OpenAI; Microsoft
Developer: OpenAI
Country: USA
Sector: Technology
Purpose: Generate text
Technology: Large language model (LLM); NLP/text analysis; Neural networks; Deep learning
Issue: Business model; Competition/price fixing; Accuracy/reliability; Bias/discrimination - multiple; Dual/multi-use; Safety; Environment - emissions
Transparency: Governance; Black box
System
Research, audits, investigations, inquiries, litigation
News, commentary, analysis
https://www.nytimes.com/2020/07/29/opinion/gpt-3-ai-automation.html
https://indiaai.gov.in/news/this-bot-actually-suggests-patients-to-kill-themselves
https://analyticsindiamag.com/yann-lecun-thrashes-gpt-3-is-the-hype-real/
https://www.ft.com/content/512cef1d-233b-4dd8-96a4-0af07bb9ff60
https://www.theverge.com/2020/8/16/21371049/gpt3-hacker-news-ai-blog
https://fortune.com/2020/09/29/artificial-intelligence-openai-gpt3-toxic/
https://www.wired.com/story/openai-text-generator-going-commercial/
https://syncedreview.com/2020/09/04/is-openais-gpt-3-api-beta-pricing-too-rich-for-researchers/
https://bdtechtalks.com/2020/09/24/microsoft-openai-gpt-3-license/
https://techcrunch.com/2020/08/07/here-are-a-few-ways-gpt-3-can-go-wrong/
Page info
Type: System
Published: January 2023
Last updated: February 2023