Stochastic Parrots study questions large language models

Occurred: December 2020

A study by a group of researchers exploring the risks of large language models resulted in the dismissal of two high-profile members of Google's AI ethics team and raised questions about Google's values, culture and governance. It also prompted heated discussion about the role of ethics in technology decision-making and its effective 'privatisation' by commercial interests.

Written by linguist Emily Bender and then-Google AI ethics researchers Timnit Gebru and Margaret Mitchell, the 'Stochastic Parrots' study assessed the financial, social, and environmental risks of large language models such as Google's BERT and OpenAI's GPT-2, and set out a series of recommendations for minimising these risks.

Amongst other things, the paper referenced a 2019 University of Massachusetts Amherst study which concluded that the energy consumption and carbon footprint of large language models had increased dramatically since 2017. That study also found that training a single large model can emit more than 626,000 pounds of carbon dioxide equivalent, nearly five times the lifetime emissions of the average American car, including its manufacture.
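
To put the cited figures in perspective, the short Python sketch below checks the comparison arithmetic. The 626,000-pound training figure is the one cited above; the roughly 126,000-pound lifetime figure for an average American car (fuel plus manufacture) is an assumption introduced here for illustration and does not appear on this page.

    # Rough check of the emissions comparison cited above (figures in lbs of CO2 equivalent).
    # The 626,000 lb training figure is from the cited 2019 UMass Amherst study; the
    # ~126,000 lb car-lifetime figure (fuel plus manufacture) is an assumed value
    # used purely for illustration.
    MODEL_TRAINING_LBS_CO2E = 626_000   # one large model, including architecture search
    CAR_LIFETIME_LBS_CO2E = 126_000     # average American car over its lifetime (assumed)

    ratio = MODEL_TRAINING_LBS_CO2E / CAR_LIFETIME_LBS_CO2E
    print(f"Model training is roughly {ratio:.1f}x a car's lifetime emissions")
    # Prints roughly 5.0x, consistent with the 'nearly five times' claim above.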

The Information reported in May 2023 that OpenAI had incurred losses of USD 540 million during 2022 as it developed GPT-4 and ChatGPT, underscoring the huge costs of training its models.

Operator: Alphabet/Google
Developer: Alphabet/Google
Country: USA
Sector: Technology; Multiple
Purpose: Generate text
Technology: Large language model (LLM); NLP/text analysis; Neural networks; Deep learning; Machine learning
Issue: Bias/discrimination - race, ethnicity; Ethics; Employment; Environment
Transparency: Governance; Marketing