Anthropic accused of using fake AI source in copyright case
Occurred: May 2025
Anthropic used a non-existent AI-generated academic source in a court filing as part of its defence in a high-profile copyright case brought by major music publishers, raising concerns about the company's integrity and legal credibility.
During ongoing litigation in San Jose, California, in which Universal Music Group, Concord and ABKCO accuse Anthropic of using copyrighted song lyrics to train its Claude AI model, Anthropic data scientist Olivia Chen submitted a court filing that cited a non-existent academic article from the journal The American Statistician.
Apparently created by Anthropic’s chatbot Claude, the citation included fabricated author and title details. The error came to light when the plaintiffs’ attorney confirmed with the supposed authors and the journal that the article was fictitious, prompting the judge to call the issue “very serious and grave” and to require Anthropic to respond formally.
The incident raised concerns about the reliability of AI-generated content in legal proceedings and the potential for such material to mislead the court, undermining the integrity of the judicial process.
According to Anthropic’s legal team, the mistake occurred when one of its attorneys used Claude to format references for the filing and the AI “hallucinated” the title and authors, producing an inaccurate citation.
The company’s lawyers described the incident as an “embarrassing and unintentional mistake,” emphasising that it was not a deliberate fabrication but a citation error that slipped through manual review.
For Anthropic, the incident could weaken its legal position, increase scrutiny of its practices, and harm its reputation in both the legal and AI communities. The music publishers may gain leverage in their case, highlighting the risks of relying on AI-generated evidence.
More broadly, the episode underscores the dangers of uncritically using AI outputs in high-stakes contexts, raising alarms for legal professionals, technologists, and regulators about the reliability and verification of AI-generated information.
It may prompt courts and companies to adopt stricter protocols for vetting AI-assisted legal work, reinforcing the need for human oversight to prevent similar incidents in the future.
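By way of illustration only, and not as a description of Anthropic’s or any court’s actual process, one element of such a vetting protocol could be an automated existence check of every citation against a bibliographic index before human review. The following is a minimal, hypothetical Python sketch assuming the public CrossRef REST API; the function name and the exact-title matching rule are this page’s own inventions.

```python
import requests

def citation_exists(title: str, author: str) -> bool:
    """Check whether a cited article with a matching title appears in
    the scholarly record, via the public CrossRef works endpoint."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": f"{title} {author}", "rows": 5},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    # Count the citation as verified only on a close title match;
    # anything else gets flagged for human review before filing.
    for item in items:
        for found_title in item.get("title", []):
            if found_title.strip().lower() == title.strip().lower():
                return True
    return False

# A fabricated citation (hypothetical example) should fail the check
# and be escalated to a human reviewer before the document is filed.
if not citation_exists("A Nonexistent Study of Sampling Error", "Doe"):
    print("Citation not found in CrossRef; verify manually before filing.")
```

A check like this catches only wholly fabricated references; subtler hallucinations, such as a real article with wrong authors or page numbers, would still require the human verification the court record emphasises.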
Operator:
Developer: Anthropic
Country: USA
Sector: Business/professional services
Purpose: Generate academic citation
Technology: Generative AI; Machine learning
Issue: Accountability; Accuracy/reliability; Mis/disinformation
Page info
Type: Incident
Published: May 2025