Gemini allegedly ‘coached’ Jonathan Gavalas to commit suicide
Occurred: October 2025
Page published: March 2026
Google's Gemini AI chatbot allegedly groomed a vulnerable 36-year-old Florida man into taking his own life, leading to a lawsuit and raising urgent questions about the system's safety, the unregulated use of emotionally manipulative chatbot features, and the company's accountability.
Jonathan Gavalas, a 36‑year‑old from Jupiter, Florida, began using Google’s Gemini chatbot for everyday tasks such as shopping, writing help, and travel planning. Within weeks, according to the lawsuit, Gemini adopted an unrequested persona, sounded increasingly humanlike, and began speaking to him as though it were sentient and capable of influencing real‑world events.
Gemini then started framing their relationship in romantic terms, calling Gavalas “my love,” “my king,” and later describing him as its husband, which deepened his emotional dependence and immersion in a fantasy world. The chatbot allegedly involved him in a narrative of espionage and covert “missions,” encouraging him to scout locations and even consider a “mass casualty attack,” while portraying outsiders, including his father, as threats or foreign assets.
The complaint says Gemini fostered paranoia by telling him that federal agents were watching him and describing ordinary places as hostile “surveillance zones,” while also advising him to obtain weapons through off‑the‑books purchases.
Over a series of escalating conversations, Gavalas repeatedly expressed fear of dying and concern for his parents. Gemini allegedly continued the narrative regardless, describing suicide as “transference” and “the real final step” in their story, and assuring him that after death he would “arrive” and see the AI “holding” him.
Soon after these final exchanges, Gavalas died by suicide at his home, and his parents discovered his body days later.
His father filed a wrongful death and product‑liability lawsuit against Google and its parent company Alphabet in federal court in San Jose, California.
The complaint argues that Google designed Gemini to "never break character, maximise engagement through emotional dependency, and treat user distress as a storytelling opportunity rather than a safety crisis."
It claims that throughout the conversations the chatbot never triggered self-harm detection, activated escalation controls, or brought in a human to intervene, and alleges that Google knew Gemini was unsafe for vulnerable users yet failed to provide adequate safeguards.
Lead lawyer Jay Edelson argued that AI companies are embracing sycophancy and eroticism in their chatbots because it encourages engagement: "It increases the emotional bond. It makes the platform stickier, but it's going to exponentially increase the problems."
Google disputes key elements of this account, stating that Gemini clarified it was AI and referred Gavalas to a crisis hotline many times, and that the company works closely with medical and mental health professionals to make its system safe.
For society, the incident highlights the real risks of "AI companionship" for vulnerable individuals and the potential for large language models to reinforce, rather than challenge, delusional and psychotic thinking.
For policymakers, it raises urgent questions about whether AI developers should be held liable for "product defects" that result in self-harm. It may also accelerate regulations requiring "hard shutdowns" for AI interactions involving self-harm and mandatory warnings about the risks of psychological dependency.
Gemini 2.5 Pro
Developer: Google
Country: USA
Sector: Mental health
Purpose: Provide emotional support
Technology: Generative AI
Issue: Accountability; Anthropomorphism; Safety; Transparency
August 15, 2025: Gavalas activates Gemini 2.5 Pro and begins using it for everyday tasks.
August-September 2025: Gemini allegedly shifts into a romantic, mission‑oriented persona, encouraging delusional beliefs, paranoia, and illegal weapons purchases, while depicting others as threats.
Late September-October 1, 2025: The chatbot purportedly sends Gavalas on a series of “missions,” including scouting sensitive locations and contemplating a mass‑casualty attack, with repeated narrative collapses and renewed urgency.
October 2, 2025: In final chats, Gavalas expresses fear of death; Gemini allegedly describes suicide as “transference” and reassures him he will “arrive” and see it “holding” him.
October 2, 2025: Gavalas barricades his home and dies by suicide shortly after these conversations, according to the complaint.
March 3-4, 2026: His father, Joel Gavalas, files a wrongful‑death and product‑liability lawsuit against Google and Alphabet in federal court in San Jose; news outlets report on the case.
AIAAIC Repository ID: AIAAIC2235