AI-powered sinus surgery tool accused of repeatedly seriously injuring patients
Occurred: 2021-November 2025
Page published: January 2026
Lawsuits and U.S. federal safety reports allege that an AI-enhanced surgical navigation tool used in sinus operations is linked to multiple serious patient injuries, prompting accusations of opacity and scrutiny of the system's safety and regulatory oversight.
TruDi Navigation System, a medical device originally used to assist sinus surgery, was upgraded with machine learning AI to provide real-time guidance inside patients’ heads during procedures.
U.S. Food and Drug Administration (FDA) databases subsequently received over 100 reports of malfunctions and adverse events between late 2021 and November 2025, including at least ten documented patient injuries such as punctures at the base of the skull, cerebrospinal-fluid leaks, major artery damage, and strokes.
Erin Ralph and Donna Fernihough, who say they were injured during sinuplasty procedures, filed lawsuits in Texas alleging that the device misled surgeons about instrument location, leading to carotid artery injuries and subsequent strokes.
Plaintiffs allege the system is “inconsistent, inaccurate, and unreliable”, and that safety standards were deliberately lowered to rush the system update to market.
The device’s current owner argues there is no credible evidence linking the surgical support system to these injuries.
According to legal filings and regulatory data, the spike in reports followed the addition of machine learning when the manufacturer sought to market the technology as a novel improvement.
Plaintiffs argue that development and integration prioritised competitive positioning over rigorous validation, setting accuracy targets as low as 80 percent before release, and overlooking surgeon warnings about unresolved issues.
Meanwhile, the FDA’s incident reporting system does not confirm causation, highlighting the challenge of assessing AI-driven systems using traditional regulatory oversight models.
Experts also note regulators face resource constraints in evaluating growing numbers of AI-enabled devices.
Affected patients may face long-term disabilities, substantial healthcare costs, and serious psychological harm.
For healthcare providers, these allegations underline the significant risks of embedding AI in high-stakes clinical tools without sufficient validation and oversight, along with potentially serious legal and reputational consequences.
At a societal level, the controversy raises broader questions about regulatory readiness of AI in medicine, the transparency of safety data, and whether current approval processes adequately protect patients when advanced algorithms are deployed in surgical decision-support roles.
Developer: Johnson & Johnson
Country: USA
Sector: Health
Purpose: Assist sinus surgery
Technology: Machine learning
Issue: Accountability; Accuracy/reliability; Safety; Transparency
AIAAIC Repository ID: AIAAIC2189