WHO chatbot provides inaccurate health information

Occurred: April 2024

A new World Health Organisation chatbot got off to an inauspicious start by getting basic health facts wrong, according to a media investigation. 

SARAH (Smart AI Resource Assistant for Health) is powered by OpenAI's GPT-3.5 large language model and is intended to provide information across major health topics, including mental health, tobacco and nutrition, in eight languages.

However, the bot failed to provide up-to-date information on US-based medical advisories and news events, according to Bloomberg. In one instance, SARAH responded to a prompt from Bloomberg journalists by saying that Lecanemab, an Alzheimer's drug, was still in clinical trials, when in reality the US Food and Drug Administration had approved the drug in 2023.

The incident raised concerns about the prospect of a United Nations agency providing health and mental health disinformation. The UN agency acknowledged on the chatbot's landing page that 'answers may not always be accurate because they are based on patterns and probabilities in the available data.'

Incident databank 🔢

Operator: World Health Organisation (WHO)
Developer: Soul Machines
Country: Global
Sector: Health
Purpose: Provide health information
Technology: Chatbot; Facial recognition
Issue: Accuracy/reliability; Mis/disinformation; Safety
Transparency: Governance