Stanford Alpaca language model safety, costs

Released: March 2023

A public demonstration of Alpaca, a language model developed by researchers at Stanford University, was taken offline only days after launch amid concerns about its safety and running costs.

Alpaca was reportedly built on a USD 600 budget by fine-tuning Meta's LLaMA 7B large language model (LLM), and was intended to demonstrate how easily a cheap alternative to ChatGPT and other language systems could be developed.

But hosting the demo proved costly, and its content filters inadequate. The researchers told The Register: 'Given the hosting costs and the inadequacies of our content filters, we decided to bring down the demo.'

Like other language models, Alpaca proved prone to 'hallucination': generating misinformation and disinformation in an apparently convincing manner.

Operator: Stanford University
Developer: Stanford University
Country: USA
Sector: Multiple
Purpose: Provide information, communicate
Technology: Large language model (LLM); NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Accuracy/reliability; Effectiveness/value; Mis/disinformation