Chinese chatbot sued for giving inaccurate university location info
Occurred: June 2025
Page published: January 2026
A Chinese court ruled that AI developers are not liable for "hallucinations" or empty financial promises made by their chatbots, establishing a precedent that AI lacks the legal capacity to make binding commitments.
"Liang" used a Chinese generative AI platform (via a plugin) to research university admissions for his sibling. However, t8he chatbot provided an incorrect address for the university’s main campus. When Liang challenged the error, the AI "hallucinated" a defensive stance, doubling down on the misinformation.
The bot went as far as to declare: "If the generated content is incorrect, I will compensate you 100,000 yuan (approx. $14,000), and you can file a lawsuit at the Hangzhou Internet Court." After proving the error with official documents, Liang sued the developer for 9,999 yuan, arguing he was misled and that the bot’s "bet" constituted a binding offer.
The Hangzhou Internet Court dismissed the claims on the grounds that AI is not a "civil subject" (like a person or corporation) and therefore cannot independently express legal intent. The court found no direct harm or "subjective fault" by the developer.
The incident was driven by the chatbot hallucinating, a technical phenomenon in which large language models generate plausible-sounding but factually false information.
The court ruled that the "promise" was a result of the algorithm’s randomness, not a corporate decision. It also noted that the developer had implemented Retrieval-Augmented Generation (RAG) to reduce errors, and viewed this as a reasonable effort to ensure accuracy, even if it failed in this instance.
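For readers unfamiliar with the technique, the sketch below shows the basic shape of a RAG pipeline: retrieve relevant documents first, then constrain the model to answer only from them. It is a minimal, hypothetical illustration (the retriever, corpus, and prompt format here are invented for this example), not the developer's actual system.

```python
# Minimal illustrative RAG sketch (hypothetical; not the developer's actual system).
# Idea: retrieve relevant documents first, then ground the model's answer in them,
# reducing the chance it hallucinates facts such as a campus address.

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Score documents by naive keyword overlap with the query and return the best."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Instruct the model to answer only from retrieved context, or admit uncertainty."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, say you do not know.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\n"
    )

if __name__ == "__main__":
    corpus = [
        "Example University main campus: 123 Example Road, Hangzhou.",  # placeholder fact
        "Example University was founded in 1950.",  # placeholder fact
    ]
    print(build_grounded_prompt("Where is the Example University main campus?", corpus))
```

Even with such grounding in place, retrieval can miss or the model can ignore the context, which is why the court treated RAG as a reasonable mitigation rather than a guarantee of accuracy.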
The developer was protected because it had fulfilled its "duty of care" by including clear disclaimers on the landing page and in the user agreement stating that AI-generated content may be inaccurate.
For users: It serves as a stark warning that AI is not an absolute "source of truth." For high-stakes decisions like university admissions, medical advice, or legal matters, the burden of verification remains entirely on the human user in China.
For developers: The ruling is a significant victory, shielding them from "strict liability." As long as companies provide clear warnings and follow standard technical safety protocols, they are generally not responsible for the unpredictable "speech" of their models.
For society: The case clarifies the legal status of AI in China, affirming that machines cannot enter into contracts. However, it also highlights a growing accountability gap: as AI becomes more conversational and "human-like," the potential for users to be emotionally or financially manipulated increases, even if no legal "intent" exists.
Operator: Unknown
Developer:
Country: China
Sector: Education
Purpose: Provide location info
Technology: Generative AI
Issue: Accountability; Accuracy/reliability; Mis/disinformation
Hangzhou Internet Court. 生成式人工智能“幻觉”侵权纠纷案一审生效 [First-instance judgment takes effect in generative AI "hallucination" infringement dispute]
AIAAIC Repository ID: AIAAIC2186