Tenerife lawyer fined for multiple AI-generated legal citations
Occurred: January 2026
Page published: March 2026
A lawyer in Tenerife was fined by Spain's Canary Islands Supreme Court after submitting a legal appeal containing 48 fake case citations generated by an AI chatbot, raising concerns about professional accountability, the integrity of legal proceedings, and the unchecked use of AI in high-risk professional contexts.
The Supreme Court of the Canary Islands (TSJC) fined an unnamed lawyer EUR 420 after he submitted a legal appeal containing 48 fabricated case citations generated by a general-purpose AI tool, including references to non-existent rulings from various courts and even a supposed report from the General Council of the Judiciary.
The citations appeared in an appeal against a decision by the Provincial Court of Santa Cruz de Tenerife.
Had the fabrications gone undetected, the case outcome could have been distorted by references to non-existent legal precedent. The integrity of the judicial process itself was put at risk.
The court found that the lawyer had not verified the citations' case numbers, dates, or identifiers, nor had he compared the AI's output against official public records or databases such as CENDOJ, which would immediately have revealed the problem.
The root cause was the lawyer's uncritical reliance on a general-purpose AI chatbot, neither designed nor validated for legal research, to produce citations that were then submitted directly to a court without any independent fact-checking.
The TSJC ruled that this failure represented a serious breach of the lawyer's duty of truthfulness and good faith, as well as an improper use of public judicial services. It also violated the level of diligence required by the Spanish Code of Ethics for legal professionals.
The incident is also an example of AI "hallucination", in which generative AI tools produce plausible-sounding but entirely fabricated content.
In addition, the incident reflects a transparency gap: general-purpose AI tools carry little to no prominent or enforceable guidance warning users against relying on their outputs in legal or other high-stakes contexts without verification.
For legal clients and parties, the incident shows that their rights and cases can be jeopardised if lawyers outsource core professional judgment to AI systems without robust checking, potentially weakening appeals or other filings.
For courts and the justice system, such AI‑generated hallucinations increase workload, because judges and staff must spend time verifying bogus citations, and they risk contaminating legal reasoning if not detected.
For the legal profession, the case signals that regulators and courts will treat blind reliance on AI as an ethical breach, not as a neutral “tech error”, and may calibrate fines with an “exemplary” or deterrent logic.
For the wider public, the episode highlights how generative AI can convincingly fabricate authoritative‑sounding legal sources, raising concerns about misinformation in other high‑stakes domains such as medicine, finance, and government.
Developer: Unknown
Country: Spain
Sector: Business/professional services
Purpose: Draft legal appeal
Technology: Generative AI
Issue: Accountability; Accuracy/reliability; Authenticity/integrity; Mis/disinformation; Transparency
AIAAIC Repository ID: AIAAIC2231