MeQA drug safety app closed after producing inaccurate information
Occurred: May 2025
Page published: May 2025
An AI-powered Spanish drug safety app was shut down after it was found to be generating inaccurate information about medications, raising concerns about patient safety.
Designed to provide users with drug safety information and facilitate the reporting of adverse reactions, the MeQA app was closed following the discovery that it produced incorrect or misleading information about medicines.
Such inaccuracies in a healthcare context can lead to patients making unsafe decisions, potentially resulting in improper medication use, adverse health outcomes, and erosion of trust in digital health tools.
This mirrors broader concerns about generative AI and digital health platforms, which can "hallucinate" or fabricate convincing but false information, especially in critical areas like medical advice.
The closure stemmed from the app's reliance on generative AI, which, while capable of producing coherent, humanlike responses, is prone to error and lacks the reasoning ability to consistently ensure accuracy in a domain as complex as medicine.
In the case of MeQA, these technological limitations led to the dissemination of false or misleading drug safety information, undermining the app’s core purpose.
For users, including patients, caregivers, and healthcare professionals, the shutdown means the loss of a digital tool intended to enhance medication safety and reporting.
More broadly, the incident underscores the risks of deploying AI-driven applications in sensitive sectors without robust safeguards and oversight.
It highlights the need for stronger regulation, transparency, and validation of AI-powered health tools to protect public safety and maintain trust in digital health innovation.
The event also serves as a cautionary example for the wider adoption of generative AI in healthcare, emphasising that technological advances must be matched by rigorous quality assurance and ethical standards.
Generative artificial intelligence
Generative artificial intelligence (Generative AI, GenAI,[1] or GAI) is a subfield of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data.
Source: Wikipedia 🔗
MeQA 🔗
Developer: Agencia Española de Medicamentos y Productos Sanitarios (AEMPS), the Spanish Agency of Medicines and Medical Devices
Country: Spain
Sector: Health
Purpose: Provide drug safety information
Technology: Generative AI; Machine learning
Issue: Accuracy/reliability; Mis/disinformation; Safety