Alpaca language model demo removed over safety, cost concerns
Released: March 2023
A public demonstration of the Alpaca language model was taken offline days after its launch, following concerns about its safety and running costs.
Developed by researchers at Stanford University, Alpaca was reportedly built for around USD 600 by fine-tuning Meta's LLaMA 7B large language model (LLM), and was intended to demonstrate how easily and cheaply an alternative to ChatGPT and other language systems can be developed.
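For context, the general recipe referred to here is supervised instruction fine-tuning of a base causal language model. The sketch below is a minimal, illustrative example written against the Hugging Face Transformers API; the base checkpoint name, data file, prompt template and hyperparameters are assumptions for illustration only and do not reproduce the Stanford team's actual training setup.

```python
# Minimal sketch of instruction fine-tuning a causal LM (illustrative only).
# BASE_MODEL and DATA_FILE are placeholders, not the Stanford Alpaca configuration.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "huggyllama/llama-7b"   # placeholder base checkpoint
DATA_FILE = "instructions.json"      # hypothetical {"instruction", "output"} records

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

def to_prompt(example):
    # Concatenate instruction and response into one training string.
    text = (f"### Instruction:\n{example['instruction']}\n\n"
            f"### Response:\n{example['output']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=512)

dataset = load_dataset("json", data_files=DATA_FILE, split="train")
dataset = dataset.map(to_prompt, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="alpaca-style-sft",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=2e-5,
        bf16=True,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```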
Like other language models, Alpaca proved prone to 'hallucinating', generating false or misleading information in an apparently convincing manner.
The demo's hosting costs also proved prohibitive. The researchers told The Register: 'Given the hosting costs and the inadequacies of our content filters, we decided to bring down the demo.'
Hallucination (artificial intelligence)
In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting, confabulation or delusion) is a response generated by AI that contains false or misleading information presented as fact.
Source: Wikipedia
System 🤖
Operator: Stanford University
Developer: Stanford University
Country: USA
Sector: Multiple
Purpose: Provide information, communicate
Technology: Large language model (LLM); NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Accuracy/reliability; Effectiveness/value; Mis/disinformation
Research, advocacy 🧮
Zhang, R., et al. (2023). LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention (pdf)
News, commentary, analysis 🗞️
https://aibusiness.com/nlp/that-was-fast-stanford-s-alpaca-demo-removed-for-hallucinating
https://gizmodo.com/stanford-ai-alpaca-llama-facebook-taken-down-chatgpt-1850247570
https://futurism.com/the-byte/stanford-pulls-down-chatgpt-clone
https://www.govtech.com/question-of-the-day/why-did-stanford-take-down-its-alpaca-ai-chatbot
Page info
Type: Incident
Published: June 2023