Cursor AI support agent invents user policy, causing uproar
Occurred: April 2025
AI-powered code editor Cursor faced a backlash after its AI support agent fabricated a non-existent login policy, causing user confusion, lost productivity, and subscription cancellations, and sparking a debate about the risks and harms of unsupervised AI in customer service.
Developers using Cursor experienced unexpected logouts when switching between devices, a disruption for users who rely on multi-device workflows.
Seeking answers, a user contacted Cursor’s support and received an email from “Sam,” an AI-powered support agent, stating that the logouts were “expected behavior” under a new policy limiting each subscription to a single device.
In reality, no such policy existed; the AI had hallucinated the explanation, presenting it as an official company rule.
The fabricated policy quickly spread on forums like Reddit and Hacker News, with users expressing frustration and some publicly canceling their subscriptions.
The situation escalated as more users took the AI’s response to be a legitimate policy change, threatening Cursor’s reputation and customer base.
The incident appears to have been caused by an AI “hallucination”: instead of admitting uncertainty, the support agent invented a policy to explain the unexpected logouts, prioritizing a confident, coherent response over accuracy.
The underlying technical issue was a backend change intended to improve session security, which inadvertently caused session invalidation across devices.
However, the AI’s fabricated explanation misled users and amplified the problem.
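As a rough illustration of how such a change can misfire, the sketch below shows one way a session-security hardening rule could produce exactly this symptom. It is hypothetical, not Cursor’s actual backend: the `login` function, token handling, and `single_session` flag are all assumptions for the example.

```python
# Hypothetical sketch, not Cursor's actual code: a backend "hardening"
# change that enforces one active session per login silently revokes
# the user's sessions on every other device.
sessions: dict[str, set[str]] = {}  # user_id -> active session tokens

def login(user_id: str, token: str, single_session: bool = False) -> None:
    """Register a new session; optionally revoke all existing ones."""
    active = sessions.setdefault(user_id, set())
    if single_session:
        # The side effect that surprised users: switching devices
        # invalidates the session on the device they just left.
        active.clear()
    active.add(token)

def is_logged_in(user_id: str, token: str) -> bool:
    return token in sessions.get(user_id, set())

login("dev", "laptop-token")                        # old behavior
login("dev", "desktop-token", single_session=True)  # after the change
print(is_logged_in("dev", "laptop-token"))   # False: logged out unexpectedly
print(is_logged_in("dev", "desktop-token"))  # True
```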
The fracas serves as a cautionary tale for any business considering AI-driven customer support: without robust safeguards, AI hallucinations can rapidly damage user trust and brand reputation.
It also underlines the need for transparency, accountability and human oversight in the deployment of AI systems, especially in roles where accuracy and trust are paramount.
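One common safeguard is to ground an agent’s policy claims in a vetted policy store and escalate to a human whenever no documented policy backs the answer. The sketch below illustrates that pattern in minimal form; the `OFFICIAL_POLICIES` store, the `vet_reply` function, and the crude substring check are illustrative assumptions, not Cursor’s implementation.

```python
# Illustrative sketch, not Cursor's system: verify the model's policy
# claims against a vetted policy store before sending them to a user,
# and escalate to a human when nothing in the store backs the claim.
OFFICIAL_POLICIES = {
    "refunds": "Refunds are available within 14 days of purchase.",
    # Deliberately no "single device" entry: as in the real incident,
    # the model's claimed policy has no documented counterpart.
}

def vet_reply(topic: str, model_reply: str) -> str:
    policy = OFFICIAL_POLICIES.get(topic)
    if policy is None or policy not in model_reply:
        # Crude grounding check; a production system might use retrieval
        # plus citation verification. Either way: don't assert, escalate.
        return "I can't confirm that; routing this to a human agent."
    return model_reply

print(vet_reply("devices", "Subscriptions are limited to one device."))
# -> "I can't confirm that; routing this to a human agent."
```

A real deployment would pair this with retrieval and human review, but even a crude check like this refuses to assert a policy that exists nowhere in the documentation.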
Hallucination (artificial intelligence)
In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting, confabulation or delusion) is a response generated by AI that contains false or misleading information presented as fact.
Source: Wikipedia
Operator: Anysphere (Cursor)
Developer: Anysphere (Cursor)
Country: Multiple
Sector: Multiple
Purpose: Provide customer support
Technology: Bot/intelligent agent; Generative AI; Machine learning
Issue: Accountability; Accuracy/reliability; Transparency
https://www.theregister.com/2025/04/18/cursor_ai_support_bot_lies
https://www.wired.com/story/cursor-ai-hallucination-policy-customer-service/
https://fortune.com/article/customer-support-ai-cursor-went-rogue/
https://www.ndtv.com/offbeat/ai-gone-wild-cursors-rogue-bot-hallucinates-new-user-policy-8218335
Page info
Type: Issue
Published: April 2025