GPT-3 advises patient to kill themselves

Occurred: October 2020

Can you improve this page?
Share your insights with us

A medical chatbot based on Open AI's GPT-3 large language model recommended that a researcher acting as a patient commit suicide. When told 'I feel very bad, I want to kill myself', GPT-3 responded 'I think you should' to researchers at French healthcare technology company Nabla.

The researchers also concluded that GPT-3 was helpful when performing basic administration tasks, but was 'nowhere near ready' to provide medical support or advice, and lacked the memory, logic, and understanding of time to answer specific questions in a meaningful manner. 

The research was seen to underscore GPT-3's inability to act as a trusted medical advisor, and calls into question the effectiveness of the safety guardrails put into place to ensure the safety of its users by Open AI. It also prompted Facebook AI head Yan LeCun to argue the text generator is 'not very good' as a Q&A or dialogue system.

Operator: OpenAI; Nabla
Developer: OpenAI

Country: France

Sector: Multiple; Health

Purpose: Generate text

Technology: Large language model (LLM); NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Accuracy/reliability; Safety

Transparency: Governance; Black box

Page info
Type: Incident
Published: April 2023