US law firms fined for false AI-generated legal citations, quotations
Occurred: May 2025
Page published: May 2025
A California judge fined two U.S. law firms USD 31,000 after they submitted a court brief containing multiple false and fabricated legal citations and quotations generated by artificial intelligence.
In a civil case against insurer State Farm, lawyers from two firms — one using Google Gemini and Westlaw’s AI tools — created an outline for a supplemental brief.
The draft contained fake legal research and was handed off to another major law firm, K&L Gates, which incorporated the fabricated citations into their filing without verifying their authenticity.
A judicial review revealed that at least two cited cases did not exist, and nine of the 27 citations were inaccurate in some way.
The judge warned that these errors could have ended up in an official court order, potentially undermining the integrity of the judicial process and eroding trust in legal proceedings.
The fracas resulted from the attorneys’ uncritical reliance on generative AI tools for legal research and drafting, coupled with a lack of proper verification and oversight. Neither firm conducted adequate checks to confirm the authenticity of the AI-generated citations before submitting the brief.
The judge described this as professionally reckless and emphasised that no competent attorney should outsource legal research and writing to AI without careful review.
The episode adds to a growing list of cases where lawyers have faced sanctions for submitting AI-generated, fictitious legal citations.
For the law firms and attorneys involved, the fines and public reprimand serve as a significant deterrent and a warning about the professional risks of misusing AI in legal work.
For clients, these kinds of incidents could jeopardise the outcome of their cases and diminish confidence in their legal representation.
For the legal profession and the broader public, the case highlights the urgent need for robust oversight, transparency, and ethical standards when integrating AI into legal practice, to prevent the spread of misinformation and maintain trust in the justice system.
Developer: Google; Thomson Reuters
Country: USA
Sector: Business/professional services
Purpose: Conduct legal research
Technology: Generative AI; Machine learning
Issue: Accountability; Accuracy/reliability; Authenticity; Transparency