Google AI falsely accuses Canadian musician of being a sex offender
Occurred: December 2025
Page published: January 2026
A Google AI search summary incorrectly labelled Canadian fiddler Ashley MacIsaac as a convicted sex offender, leading to the cancellation of a scheduled concert and causing significant psychological and reputational harm.
Googleās AI Overviews generated false information stating that Juno Award-winning Canadian musician Ashley MacIsaac had been convicted of various sex crimes and was on a sex offender registry.
The error stemmed from the AI combining the musicianās identity with that of another person with the same surname who had criminal charges, effectively misattributing those charges to the artist.
As a direct consequence, the organisers of a concert planned for December 19 at the Sipekneākatik First Nation near Halifax cancelled the event after seeing the AIās summary.
MacIsaac only learned of the allegation when confronted about it and later confirmed the information was false. Google updated the summary after the error was flagged by MacIsaac and the media.
The incident occurred because Googleās AI system automatically synthesised information from web content and erroneously merged data from two distinct individuals.
Commonly referred to in the industry as an AI āhallucinationā or erroneous summarisation, this type of error reflects broader limitations in how large language models contextualise and verify data.
For MacIsaac: The false allegation harmed the musicianās professional opportunities, resulting in cancelled performances, loss of income, and anxiety about his future. MacIsaac has indicated he is considering legal action against Google, with law firms reportedly offering to represent him on a pro bono basis.
For society: The episode highlights the real-world consequences of AI-generated misinformation, particularly when algorithmic outputs are presented without sufficient context or safeguards. It also underlines the need for stronger frameworks around AI accountability and redress (how individuals can contest and correct harmful algorithmic errors), transparent error reporting and mitigation practices, and regulatory oversight to ensure that companies deploying AI tools balance utility with factual reliability.
Developer: Google
Country: Canada
Sector: Media/entertainment/sports/arts
Purpose: Generate artist profile
Technology: Generative AI
Issue: Accountability; Accuracy/reliability; Mis/disinformation; Transparency
AIAAIC Repository ID: AIAAIC2181