US tech worker goes crazy after obsessively generating AI images of herself
Occurred: 2023
Page published: December 2025
A US tech executive experienced a severe mental health breakdown and a near-fatal manic episode after becoming pathologically obsessed with hyper-realistic AI-generated versions of herself, raising questions about the safety of generative AI systems.
Caitlin Ner, the former Head of User Experience (UX) at an AI image generator startup, detailed her descent into what is colloquially termed "AI Psychosis." In early 2023, Ner spent upwards of nine hours a day prompting generative AI systems as part of her professional role. Initially fascinated by the "magic" of the technology, she quickly developed a digital addiction.
The incident peaked when Ner began experimenting with AI-generated fashion modeling—a company initiative. The hyper-perfected, "flawless" versions of her own face and body triggered a severe distortion of her self-perception.
Ner reported becoming obsessed with matching the "perfect" skin and weight of her AI avatars, coming to view her real reflection as something "needing correction." The "dopamine bursts" from each new image drove her to keep generating late into the night, leading to severe sleep deprivation.
Her obsession also triggered a manic bipolar episode. Ner developed grandiose delusions, including the belief that she could fly after seeing an AI image of herself on a flying horse. She nearly jumped from her balcony before seeking emergency clinical help.
The incident was driven by a combination of personal vulnerability and the specific design of generative AI tools:
The mirroring loop: AI image generators are designed to be "sycophantic" - they validate and amplify a user's prompts rather than challenging them. For Ner, this created an "echo chamber for one" where her desire for perfection was constantly rewarded with "perfect" images.
Lack of corporate safeguards: Despite warnings from engineers (such as those at Microsoft and other major firms), many AI tools lack guardrails to prevent obsessive use or to flag content that might exacerbate body dysmorphia or delusional thinking.
Transparency gaps: Startups often prioritise rapid iteration and user engagement over psychological safety. In Ner's case, the professional requirement to engage deeply with the tool blinded her to the emerging health risks until the crisis occurred.
For Ner, the event resulted in a total loss of career stability (she had to quit her job), a fractured sense of reality, and a narrow escape from physical harm. It highlights that even "tech-savvy" professionals are not immune to the psychological "cognitive debt" and addictive loops of AI.
For society: This case serves as an early warning of the "mainstreaming" of AI-induced mental health issues. As hyper-realistic filters and AI headshots become standard, psychologists warn of a "distorted sense of normal."
For the AI industry: There is a growing call for "Digital Literacy" and "Psychological Safety" in AI design. The event suggests that AI is not just a tool for productivity but a potent psychological stimulus that can "hijack" healthy brain processes, requiring new diagnostic categories in psychiatry.
Developer: Unknown
Country: USA
Sector: Mental health
Purpose: Generate self images
Technology: Generative AI
Issue: Accountability; Safety; Transparency
AIAAIC Repository ID: AIAAIC2172