ChatGPT provides inaccurate medication query responses

Occurred: December 2023

The free version of ChatGPT provided inaccurate or incomplete responses to many medication-related questions, or failed to address them at all, potentially endangering patients, according to a research study.

Pharmacists at Long Island University posed 39 medication-related questions to the free version of ChatGPT, which runs on GPT-3.5. According to the study, the chatbot gave inaccurate responses to 10 questions, incomplete answers to 12, and failed to directly address another 11. It provided references in only eight of its responses, and each of those cited sources that do not exist.

The findings suggest that patients and health-care professionals should be cautious about relying on OpenAI's viral chatbot for drug information and should verify its responses against trusted sources, according to the study's lead author, Sara Grossman.

Databank

Operator: Long Island University
Developer: OpenAI
Country: USA
Sector: Health
Purpose: Provide medication information
Technology: Chatbot; NLP/text analysis; Neural network; Deep learning; Machine learning; Reinforcement learning
Issue: Accuracy/reliability
Transparency: Governance