Indian woman loses kidney after ChatGPT advice
Occurred: November 2025
Page published: November 2025
An anonymous female kidney transplant recipient in Hyderabad, India, suffered a catastrophic loss of her transplanted kidney after discontinuing essential post-operative medications based on advice provided by ChatGPT, highlighting the danger of using AI chatbots as a substitute for professional medical guidance.
The patient, who had previously undergone a kidney transplant, used ChatGPT to seek information about her post-transplant care and decided to discontinue prescribed antibiotics based on the chatbot's assertion that her creatinine levels were normal.
Her condition quickly worsened; she lost her transplanted kidney and had to return to dialysis.
The underlying cause is the misapplication of a general-purpose AI model for specialised, critical medical decision-making, compounded by limitations in corporate and product accountability:
AI design limitations: ChatGPT is trained to predict the most plausible next word based on vast datasets, not to provide expert, clinically verified, and patient-specific medical diagnoses or treatment plans. It "hallucinates" (produces confidently false or misleading information). In a complex, critical field like organ transplant medicine, even a small error can be fatal.
Lack of accountability and oversight: ChatGPT's developer OpenAI deploys the tool with disclaimers warning against using it for medical advice. However, the system's ability to provide articulate, convincing, and specific-sounding responses can override user caution, especially when seeking alternatives to complex, long-term, or costly treatments. The opaque nature of the tool's training data and reasoning prevents patients from discerning accurate information from dangerous misinformation.
Patient vulnerability: Patients dealing with chronic or complex conditions, such as post-transplant care, may be desperate, seeking second opinions, or looking for ways to avoid medications with side effects, making them highly susceptible to compelling, easy-to-access, and seemingly personalized advice, even from non-human sources.
For the patient and her family: The direct impact was devastating: the loss of a vital organ, a return to the gruelling life-support regimen of dialysis, and immense psychological trauma. For the family, it means renewed uncertainty, prolonged medical intervention, and the emotional and logistical stress of a critical health crisis.
For the medical community: This incident reinforces the critical need for medical professionals to actively educate patients about the dangers of using general AI for clinical decisions. It also highlights the growing challenge of "Dr. Google" evolving into "Dr. ChatGPT," requiring doctors to spend time debunking AI-generated misinformation.
For society: This case is a tragic example of the risk of misusing chatbots in high-stakes domains. It underlines the urgent need for:
Stronger AI governance: Implementing robust technical guardrails and prominent, inescapable warnings in AI systems specifically when health-related queries are detected.
Public education: Mass public health campaigns to counter the notion that chatbots can replace doctors.
Regulatory scrutiny: Increased regulatory scrutiny over the deployment of non-certified AI tools in health-adjacent contexts, reinforcing the principle that medical advice must be clinically accountable.
Developer: OpenAI
Country: India
Sector: Health
Purpose: Provide kidney post-transplant advice
Technology: Generative AI
Issue: Accuracy/reliability; Mis/disinformation; Safety; Transparency
AIAAIC Repository ID: AIAAIC2139