ChatGPT provides inaccurate medication query responses
Occurred: December 2023
The free version of ChatGPT provided inaccurate or incomplete responses to medication-related questions, and cited non-existent references, potentially endangering patients, according to a research study.
Pharmacists at Long Island University posed 39 medication-related questions to GPT-3.5, the model that powers the free version of ChatGPT. The bot gave inaccurate responses to 10 questions and incomplete answers to 12.
It failed to directly address a further 11 questions, and provided references in only eight responses, each of which cited sources that do not exist.
The findings show that patients and health-care professionals should be cautious about relying on OpenAI's viral chatbot for drug information and should verify any responses with trusted sources, according to the study's lead author, Sara Grossman.
Operator: Sara Grossman
Developer: OpenAI
Country: USA
Sector: Health
Purpose: Provide medication information
Technology: Chatbot; Generative AI; Machine learning
Issue: Accuracy/reliability
Grossman, S. et al. (2023). Study Finds ChatGPT Provides Inaccurate Responses to Drug Questions
Page info
Type: Issue
Published: December 2023