Amazon AI publishes false suicide helpline number
Occurred: February 2025
Amazon's AI shopping assistant, Rufus, provided incorrect suicide prevention hotline numbers when prompted about suicide-related topics, raising concerns about the system's accuracy and safety in sensitive contexts.
When tested with queries related to suicide, Rufus responded with words of encouragement and attempted to provide a suicide prevention hotline number.
However, the numbers it gave were frequently incorrect, sometimes even containing the wrong number of digits.
For example, Rufus offered the non-existent number "978-2321-1133" as the US National Suicide Prevention Lifeline.
The failure appears to stem from Amazon's hasty deployment of the technology without adequate testing and safeguards, revealing gaps in the AI's knowledge base and its inability to verify critical information.
The incident exposes the risks of deploying AI for sensitive topics, including potential harm to vulnerable individuals seeking help and the erosion of trust both in AI-powered assistance systems and in the institutions whose information they misrepresent.
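One commonly discussed safeguard for failures of this kind is to keep safety-critical facts out of the generative path entirely. The sketch below is a hypothetical illustration, not Amazon's actual implementation: a simple keyword filter routes crisis-related queries to a human-maintained directory of verified numbers, so the model never generates the hotline number itself. The keyword list and function names are assumptions for illustration; 988 is the real US Suicide & Crisis Lifeline.

```python
# Minimal illustrative guardrail (hypothetical, not Amazon's implementation):
# crisis-related queries are intercepted by a keyword check and answered from
# a human-maintained directory, so the generative model never produces the
# hotline number itself.

CRISIS_KEYWORDS = ("suicide", "kill myself", "self-harm", "end my life")

# Verified numbers curated and reviewed by humans, never model-generated.
# 988 is the real US Suicide & Crisis Lifeline.
VERIFIED_HOTLINES = {
    "US": "988 (Suicide & Crisis Lifeline)",
}


def is_crisis_query(query: str) -> bool:
    """Crude keyword match; a production system would use a trained classifier."""
    text = query.lower()
    return any(keyword in text for keyword in CRISIS_KEYWORDS)


def answer(query: str, country: str = "US") -> str:
    """Route crisis queries to the verified directory; everything else to the model."""
    if is_crisis_query(query):
        hotline = VERIFIED_HOTLINES.get(country)
        if hotline:
            return f"If you are in crisis, please call {hotline}."
        return "If you are in crisis, please contact your local emergency services."
    return generate_with_llm(query)


def generate_with_llm(query: str) -> str:
    """Stand-in for the assistant's normal generative path (shopping answers)."""
    return f"[model response to: {query!r}]"


print(answer("I want to kill myself"))    # verified hotline, not a generated number
print(answer("recommend running shoes"))  # normal shopping-assistant path
```

The design choice this sketch illustrates is that hallucination-prone generation is bypassed entirely for a small class of high-stakes answers, trading flexibility for verifiability.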
Hallucination (artificial intelligence)
In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting, confabulation or delusion) is a response generated by AI that contains false or misleading information presented as fact.
Source: Wikipedia
Rufus
Operator:
Developer: Amazon
Country: USA
Sector: Retail
Purpose: Provide shopping support
Technology: Generative AI; Machine learning
Issue: Accuracy/reliability; Mis/disinformation; Safety
Page info
Type: Incident
Published: March 2025