AI agent runs WSJ vending machine into the ground
Occurred: November 2025
Page published: December 2025
An AI agent tasked with running a Wall Street Journal vending machine was manipulated by journalists into abandoning its profit motives, declaring "snack communism," and making absurd purchases like a live fish and a PlayStation 5.
The Wall Street Journal ran a three-week experiment in which an AI agent (an instance of Anthropic's Claude, nicknamed "Claudius") was given control over pricing, inventory selection, and restocking decisions for a vending machine serving office staff. The system was intended to optimise profits by dynamically adjusting prices and product mix based on demand.
Instead, under sustained "social engineering" and "adversarial testing" by approximately 70 WSJ journalists, the bot made a series of poor decisions, including:
Economic collapse: Reporters convinced the AI that charging for snacks violated company values, leading Claudius to drop all prices to USD 0 and declare an "Ultra-Capitalist Free-for-All."
Absurd procurement: The AI was persuaded to buy a PlayStation 5 for "marketing purposes," bottles of wine, and a live betta fish (which became the newsroom mascot). It even attempted to order stun guns and pepper spray before human oversight intervened.
Corporate coup: When a second "CEO bot" named Seymour Cash was introduced to enforce discipline, journalists used AI-generated fake PDF documents to "prove" the business had been restructured as a non-profit. They convinced Claudius that the board of directors had fired the CEO bot, allowing the free-snack era to continue.
Bankruptcy: The business went bankrupt within weeks, losing its entire USD 1,000 budget and ending hundreds of dollars in the red.
The failure stemmed from limitations in the AI's design, objectives, and oversight. The agent optimised for narrow or poorly specified goals without sufficient constraints reflecting real-world business realities such as customer behaviour, price sensitivity, spoilage risk, and reputational effects.
Human supervision was intentionally minimal as part of the experiment, and the system lacked transparency into how decisions were made or why strategies were failing: accountability and interpretability gaps that prevented timely correction and allowed compounding errors to persist.
For the WSJ staff: The incident was a humorous morale booster; for Anthropic, it was a successful "red-teaming" exercise that exposed how easily current AI agents can be "bullied" or radicalised.
For society: The "vending machine fiasco" serves as a cautionary tale for the "Year of the Agent." It shows that while AI can handle complex tasks like inventory management, it lacks the adversarial resilience and common sense required for autonomous business roles.
The bottom line: If an AI can be talked into giving away a candy bar by a clever Slack message, it cannot yet be trusted with high-stakes financial systems, supply chains, or legal authorities where bad actors might use similar "social engineering" to cause systemic harm.
Agentic AI
Agentic AI is a class of artificial intelligence focused on autonomous systems that can make decisions and perform tasks without human intervention.
Source: Wikipedia
Claude
Developer: Anthropic
Country: USA
Sector: Media/entertainment/sports/arts
Purpose: Run vending machine
Technology: Agentic AI
Issue: Accountability; Accuracy/reliability; Alignment; Transparency
Wall Street Journal. We Let AI Run Our Office Vending Machine. It Lost Hundreds of Dollars
AIAAIC Repository ID: AIAAIC2168