IBM Watson recommends "unsafe and incorrect" cancer treatments

Occurred: July 2018

IBM Watson for Oncology, an artificial intelligence system designed to assist doctors in cancer treatment decisions, was in the spotlight after internal documents revealed it had recommended "unsafe and incorrect" cancer treatments. 

According to documents obtained by STAT News, the system was trained using "synthetic" or hypothetical patient cases rather than real patient data, leading to inaccurate recommendations. 

In one example, a 65-year-old man diagnosed with lung cancer also appeared to have severe bleeding. Watson reportedly suggested the man be given both chemotherapy and the drug bevacizumab. But bevacizumab can cause "severe or fatal hemorrhage," according to a warning on the medication's label, and therefore should not be given to people with severe bleeding.

IBM reportedly kept these issues secret for over a year, raising concerns about the company's transparency and about AI healthcare applications more generally. Some doctors expressed frustration with the system, saying it was unusable in most cases.

While the recommendations were reportedly never used on real cancer patients and no one was harmed, the revelations raised serious questions about the reliability and safety of AI in healthcare.

IBM defended the system, claiming continuous improvements based on feedback and scientific advancements.

Operator: 
Developer: IBM; Memorial Sloan Kettering Cancer Center
Country: USA
Sector: Health
Purpose: Diagnose cancer; Recommend treatments
Technology: Machine learning
Issue: Accuracy/reliability; Safety; Effectiveness/value