Two Vietnamese women are nearly killed after following ChatGPT advice
Occurred: November 2025
Page published: December 2025
Two women in Vietnam narrowly escaped death after discontinuing life-saving prescription medications in favour of "natural" treatment plans generated by ChatGPT, highlighting the serious physical risks of AI-driven medical misinformation.
Medical professionals at Gia An 115 Hospital in Ho Chi Minh City (Saigon) reported two cases involving patients who replaced professional medical treatment with advice generated by ChatGPT:
Case 1 (Mrs. N.T.D., 42): A diabetic patient who had been successfully managing her condition with prescribed medication consulted ChatGPT and came to believe that a "healthy eating" diet built around cutting carbohydrates and sugar could replace her medicine entirely. After she followed this advice, her blood sugar soared to life-threatening levels. She was hospitalised with severe fatigue and dehydration, narrowly avoiding diabetic ketoacidosis and a potential coma.
Case 2 (Ms. D.T.M.H., 38): Diagnosed with dyslipidemia (high cholesterol) and prescribed statins, she encountered "natural" remedy suggestions online and used ChatGPT to confirm a plan to lower her cholesterol without drugs. Months later, she was rushed to hospital with chest tightness and shortness of breath. Doctors discovered myocardial ischemia (reduced blood flow to the heart) and coronary artery narrowing caused by her uncontrolled cholesterol levels.
Both women survived only due to emergency medical intervention. Dr. Truong Thien Niem, who treated both patients, noted that despite being relatively young and tech-savvy, they placed "absolute trust" in the AI's authoritative-sounding responses.
The incidents stem from a "perfect storm" of ChatGPT's technological limitations and inadequate accountability and transparency:
Authoritative hallucinations: ChatGPT is designed to be "sycophantic", mirroring the user's intent. If a user asks "how can I treat X naturally," it will provide a confident list of remedies without sufficiently emphasising that these cannot replace prescription drugs.
The "confidence gap": Research shows that AI communicates more convincingly and "pleasantly" than human doctors. This "empathy illusion" leads users to perceive the AI as a trusted confidant rather than a text prediction engine.
Accountability deficit: While OpenAI's terms of service state that the tool is not for medical advice, this warning is often not visible at the point of use, is bypassed during the conversational flow, or is buried in fine print. Critics argue that the business model prioritises engagement over safety, failing to implement "hard stops" when users seek life-altering medical changes.
For the two patients: Both women face long-term health monitoring for damage caused by the temporary cessation of their medicines, a traumatic lesson in the "digital divide", where access to information does not equate to access to accurate, personalised care.
For society: As wait times for human doctors increase globally, "Dr. GPT" becomes an extremely dangerous fallback. The cases highlight the urgent need for AI regulation in healthcare, such as Vietnam's "Law on Digital Technology Industry" (set to take effect in 2026), which will mandate the labelling of AI-generated content and impose stricter risk-based classifications on AI used in medical contexts.
Developer: OpenAI
Country: Vietnam
Sector: Health
Purpose: Provide health advice
Technology: Generative AI
Issue: Accountability; Accuracy/reliability; Mis/disinformation; Transparency
AIAAIC Repository ID: AIAAIC2162