Stanford Alpaca language model safety, costs
Released: March 2023
A public demonstration of Alpaca, a language model developed by researchers at Stanford University, was taken offline days after its launch amid concerns about its safety and running costs.
Alpaca was reportedly built on a USD 600 budget by fine-tuning Meta's LLaMA 7B large language model (LLM), and was intended to demonstrate how easily and cheaply an alternative to ChatGPT and similar systems could be developed.
But hosting costs and safety problems quickly caught up with the project. The researchers told The Register: 'Given the hosting costs and the inadequacies of our content filters, we decided to bring down the demo.'
Like other language models, Alpaca proved prone to 'hallucinating', or generating false information in an apparently convincing manner.
System 🤖
Stanford CRFM (2023). Stanford Alpaca
Stanford CRFM (2023). Code and documentation
Meta AI (2023). Introducing LLaMA: A foundational, 65-billion-parameter large language model
Operator: Stanford University
Developer: Stanford University
Country: USA
Sector: Multiple
Purpose: Provide information, communicate
Technology: Large language model (LLM); NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Accuracy/reliability; Effectiveness/value; Mis/disinformation
Transparency:
Research, advocacy 🧮
Zhang, R., et al. (2023). LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention (pdf)
News, commentary, analysis 🗞️
https://aibusiness.com/nlp/that-was-fast-stanford-s-alpaca-demo-removed-for-hallucinating
https://gizmodo.com/stanford-ai-alpaca-llama-facebook-taken-down-chatgpt-1850247570
https://futurism.com/the-byte/stanford-pulls-down-chatgpt-clone
https://www.govtech.com/question-of-the-day/why-did-stanford-take-down-its-alpaca-ai-chatbot
Page info
Type: Incident
Published: June 2023