Canada Revenue Agency chatbot gives incorrect tax filing guidance
Occurred: March 2020
Page published: December 2025
The Canada Revenue Agency’s AI-powered chatbot has been found to give incorrect tax advice to millions of Canadians, leaving taxpayers exposed to penalties and interest even when they followed official government guidance.
A scathing report from Canada’s Auditor General, Karen Hogan, revealed that the Canada Revenue Agency's (CRA) chatbot, "Charlie," provided accurate responses to tax questions only 33 percent of the time during testing.
Launched in March 2020 and used to answer more than 18 million questions, the tool was intended to help taxpayers with filing and benefits. However, in the audit's test of six standard tax-related questions, Charlie answered only two correctly - significantly worse than general-purpose AI tools like ChatGPT or Claude, which answered five out of six correctly.
The financial and legal stakes are significant: the Canadian government has spent over USD 18 million on the project to date. While the CRA marketed the tool as a reliable guide, taxpayers who followed its erroneous advice have no legal recourse.
Under Canadian law, the taxpayer is solely responsible for the accuracy of their filing; receiving "bad advice" from a CRA chatbot or call center agent does not exempt an individual from paying the correct tax amount or the interest accrued on late or incorrect payments.
Several factors contributed to this systemic failure in transparency and accountability:
Speed over accuracy: The Auditor General’s report found that CRA performance evaluations weighted "schedule adherence" at 45 percent, while "accuracy and completeness" counted for less than 9 percent.
Technological limitations: The original version of Charlie was a rule-based system that lacked the nuance to handle the complexity of Canada's Income Tax Act (see the sketch after this list). While the CRA recently migrated to a generative AI beta, the agency admitted it cannot precisely verify the new system's real-world accuracy without a manual review of millions of transcripts, which it has not conducted.
Transparency gaps: The CRA continued to promote the tool to millions of users even though its internal quality target set the accuracy threshold as low as 70 percent (the Auditor General's independent testing found actual accuracy was lower still).
Legal shielding: Corporate and government accountability is limited by statutory protections that place the burden of "correctness" on the citizen, effectively allowing the agency to deploy flawed automation without the threat of liability for the resulting financial errors.
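The brittleness the audit attributes to the original system is easy to illustrate. The Python sketch below is purely hypothetical - it is not Charlie's actual code, and the rules and canned answers are invented - but it shows the basic failure mode of a keyword-rule chatbot: it answers the exact phrasings its rules anticipate and falls through on an everyday rephrasing of the same question.

```python
# Hypothetical illustration of a rule-based FAQ bot (not CRA code).
# Each rule maps a set of required keywords to a canned answer.
RULES = {
    ("filing", "deadline"): "The general filing deadline is April 30.",
    ("rrsp", "limit"): "Your RRSP limit appears on your notice of assessment.",
}

def rule_based_answer(question: str) -> str:
    # Normalise the question into a bag of words.
    words = set(question.lower().replace("?", "").split())
    # Return the first canned answer whose keywords all appear.
    for keywords, answer in RULES.items():
        if all(k in words for k in keywords):
            return answer
    return "Sorry, I don't understand the question."

# A direct phrasing matches a rule...
print(rule_based_answer("What is the filing deadline?"))
# ...but an everyday rephrasing of the same question falls through:
print(rule_based_answer("When do my taxes have to be in?"))
```

Generative systems avoid this brittleness but trade it for the verification problem the CRA now faces: because their answers vary with phrasing, accuracy can only be estimated by reviewing samples of real transcripts.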
For individuals: Taxpayers face a "lose-lose" scenario where they are encouraged to use digital tools to navigate a complex system, but are penalised if those tools fail them. This creates a hidden "tech tax" where the most vulnerable - those who cannot afford professional accountants - are the most likely to be misled.
For society: The incident erodes public trust in government digital transformation. It sets a dangerous precedent for the deployment of "hallucinating" AI in administrative law, where the state provides the information but disclaims all responsibility for its truthfulness.
Policy implications: There is growing pressure for algorithmic accountability and for legislative changes so that when a government agent (human or AI) provides specific advice, the agency is "estoppel-bound" by it, or at least required to waive the interest and penalties incurred by taxpayers who relied on it.
Charlie
Developer: Canada Revenue Agency; Microsoft Canada
Country: Canada
Sector: Govt - finance
Purpose: Provide tax advice
Technology: Generative AI
Issue: Accountability; Accuracy/reliability
Auditor General of Canada. Canada Revenue Agency Contact Centres
https://www.cbc.ca/news/politics/ag-fall-2025-cra-military-9.6946672
https://www.junonews.com/p/cra-billed-taxpayers-over-18-million
AIAAIC Repository ID: AIAAIC2166