Man develops rare condition after following ChatGPT advice
Occurred: May 2025
Page published: October 2025
A man developed bromism (bromide toxicity) after following dietary advice generated by ChatGPT, which led him to substitute table salt with sodium bromide over three months.
Seeking to eliminate sodium chloride from his diet based on perceived health risks, the 60-year-old man asked ChatGPT for alternatives and was advised to use sodium bromide, a chemical not intended for human consumption.
The substitution resulted in severe psychiatric symptoms, including paranoia, hallucinations and insomnia, neurological problems such as impaired muscle co-ordination (ataxia), and a three-week hospital stay before the correct diagnosis and treatment were provided.
Laboratory analysis revealed a bromide level in the man's blood more than 200 times the upper limit of the normal range. Bromide salts such as sodium bromide were once used medicinally, but their toxicity has been well established for decades.
The outcome stemmed from the chatbot's reliance on internet-sourced information, its lack of built-in clinical judgment, and the known tendency of AI chatbots to "hallucinate" or offer decontextualised, inaccurate medical recommendations without properly disclosing the relevant risks.
Furthermore, the chatbot failed to provide any health warning regarding its suggested substitute; medical experts said that such advice would never be offered by a licensed professional.
The incident demonstrates the real-world harm that can result when people use AI chatbots for medical advice without professional input, potentially leading to preventable health crises or lasting damage.
It also raises the question of who is responsible when harm results from AI-generated medical advice. Current frameworks do not clearly allocate liability among users, developers (such as OpenAI), or healthcare systems, leaving gaps in protection for individuals harmed by chatbot errors.
For society, it raises urgent concerns about chatbot safety, the spread of health misinformation, and the need for stricter oversight, transparent design, and clear warnings when AI is used to answer health-related questions.
Developer: OpenAI
Country: USA
Sector: Health
Purpose: Generate dietary advice
Technology: Generative AI; Machine learning
Issue: Accountability; Autonomy; Liability; Mis/disinformation; Transparency
AIAAIC Repository ID: AIAAIC2058