Alpaca language model removed after safety, cost concerns

Released: March 2023

A public demonstration of the Alpaca language model has been taken offline days after concerns emerged about its safety and running costs.

Developed by researchers at Stanford University, Alpaca was reportedly built on a USD 600 budget by fine-tuning Meta's LLaMA 7B large language model (LLM) and was intended to demonstrate how easy it is to develop a cheap alternative to ChatGPT and other language systems.
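For readers unfamiliar with the approach, the sketch below illustrates the general technique Alpaca is reported to have used: supervised instruction fine-tuning of a pretrained LLaMA 7B model on instruction-response pairs. It is a minimal, hypothetical example using the Hugging Face libraries; the model identifier, dataset file, prompt template and hyperparameters are illustrative assumptions, not the Stanford team's actual configuration.

```python
# Minimal sketch of instruction fine-tuning a causal LM, in the general
# style reported for Alpaca. All names and settings below are assumptions
# for illustration only, not the original project's configuration.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "huggyllama/llama-7b"  # assumed identifier for LLaMA 7B weights
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Hypothetical JSON file of {"instruction": ..., "output": ...} pairs.
dataset = load_dataset("json", data_files="instructions.json")["train"]

def format_and_tokenize(example):
    # Concatenate the instruction and response into one training sequence.
    text = (
        "### Instruction:\n" + example["instruction"]
        + "\n\n### Response:\n" + example["output"]
        + tokenizer.eos_token
    )
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(format_and_tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="alpaca-style-sft",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=2e-5,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The low reported cost comes from reusing pretrained weights and training only briefly on a relatively small instruction dataset, rather than training a model from scratch.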

Like other language models, Alpaca proved adept at 'hallucinating', or inventing misinformation and disinformation in an apparently convincing manner.

The hosting costs also proved prohibitive. The researchers told The Register, 'Given the hosting costs and the inadequacies of our content filters, we decided to bring down the demo.'

Hallucination (artificial intelligence)

In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting, confabulation or delusion) is a response generated by AI that contains false or misleading information presented as fact.

Source: Wikipedia 

System 🤖

Operator: Stanford University
Developer: Stanford University
Country: USA
Sector: Multiple
Purpose: Provide information, communicate
Technology: Large language model (LLM); NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Accuracy/reliability; Effectiveness/value; Mis/disinformation

Research, advocacy 🧮