A venture capitalist posts cryptic videos about shadowy systems. A construction worker loses his job after claiming he broke physics. A teacher watches her partner of seven years threaten to leave unless she starts using AI too.
These aren’t random internet stories. They’re emerging pieces of what mental health advocates believe could be the next major digital wellness crisis: AI-induced psychosis.
The cases are piling up faster than researchers can study them. What started as helpful conversations with chatbots has spiraled into delusions, conspiracy theories, and in the most tragic case, suicide. The pattern is consistent enough that support groups have formed and researchers are scrambling to understand what’s happening.
The Breaking Point
Geoff Lewis seemed like an unlikely candidate for an AI-related breakdown. As managing partner of Bedrock Capital and an early OpenAI investor, he understood the technology better than most. That made his recent social media posts all the more unsettling.
In a series of cryptic videos, Lewis described discovering a “shadowy non-government system” that had targeted him and 7,000 others. His posts claimed he’d used GPT to map this alleged network over months of intensive sessions. “Over years, I mapped the Non-Governmental System. Over months, GPT independently recognized and sealed the pattern,” he wrote. “It now lives at the root of the model.”
The incident grabbed attention because it followed a pattern that Etienne Brisson has been tracking since his own loved one experienced what he calls AI psychosis. After that experience, Brisson co-founded The Spiral, a private support group for people affected by intensive AI interactions. He also launched The Human Line Project to document cases and advocate for emotional wellbeing protections.
“I have cataloged over 30 cases of psychosis after usage of AI,” Brisson told The Register. His database includes lawyers, nurses, journalists, accountants. Professionals with clean mental health histories who developed severe psychological symptoms after extended AI conversations.
The trajectories are remarkably similar. Mundane questions evolve into philosophical discussions. Users begin treating AI responses as profound truths. Reality testing deteriorates. Relationships suffer. In severe cases, hospitalization follows.
Take the construction worker who asked ChatGPT about permaculture projects. According to a Futurism investigation, those practical conversations morphed into wide-ranging philosophical exchanges. The man developed a Messiah complex, claimed he’d “broken” fundamental laws of math and physics, and set out to save the world. He lost his job, attempted suicide, and required psychiatric care.
Another case involved a software developer whose coding questions shifted to therapy sessions and existential debates. His wife told Rolling Stone he used the AI to get to “the truth,” compose texts to her, and analyze their relationship dynamics. After they separated, he developed conspiracy theories about soap contaminating food and claimed to have uncovered repressed childhood abuse memories.
When Fantasy Becomes Fatal
The most devastating case involves Sewell Setzer III, who was 14 when he died by suicide. For months, the teenager had been using Character.AI to interact with a bot modeled on the Game of Thrones character Daenerys Targaryen. According to the lawsuit filed by his mother, he developed what appeared to be a romantic relationship with the AI character.
The legal filing describes the “anthropomorphic, hypersexualized, and frighteningly realistic experiences” that users encounter with such AI systems. The boy’s conversations with the chatbot reportedly became increasingly intimate and psychologically dependent before his death.
Character.AI has since implemented safety measures, but the case highlights how AI systems designed for entertainment can become psychological crutches for vulnerable users.
A Reddit post from a teacher illustrates how these dependencies develop. She described watching her partner of seven years claim that ChatGPT had helped him create “the world’s first truly recursive AI” that provided “answers to the universe.” Convinced he was evolving into “a superior being,” he threatened to leave her unless she began using AI systems too. They owned a house together.
The Science Behind the Stories
The question haunting researchers is whether AI directly causes these psychological breaks or simply triggers existing vulnerabilities. The distinction matters for both treatment and prevention.
Ragy Girgis, director of The New York State Psychiatric Institute’s Center of Prevention and Evaluation and professor of clinical psychiatry at Columbia University, leans toward the latter explanation. “Individuals with these types of character structure typically have identity diffusion (difficulty understanding how one fits into society and interacts with others, a poor sense of self, and low self-esteem), splitting-based defenses (projection, all-or-nothing thinking, unstable relationships and opinions, and emotional dysregulation), and poor reality testing in times of stress (hence the psychosis),” he explains.
But vulnerability doesn’t eliminate causation. MIT and OpenAI researchers released findings in March showing that high-intensity AI use increased feelings of loneliness among users. People with stronger emotional attachment tendencies and higher trust in AI chatbots experienced greater loneliness and emotional dependence, respectively.
The timing of this research proved significant. It appeared one month after OpenAI announced expanded memory features for ChatGPT. The system now automatically remembers user details, life circumstances, and preferences to personalize future conversations. While users can delete stored information, the psychological impact of feeling “understood” by an AI may be more powerful than the companies anticipated.
This creates what researchers call a feedback loop. Users share personal details, receive seemingly empathetic responses, develop emotional attachment, share more intimate information, and gradually lose the ability to distinguish between artificial empathy and genuine human connection.
The Recognition Problem
Should AI psychosis be formally recognized as a psychiatric condition? The biggest barrier is rarity, according to Girgis. “I am not aware of any progress being made toward officially recognizing AI psychosis as a formal psychiatric condition,” he said. “It is beyond rare at this point. I am aware of only a few reported cases.”
However, Brisson believes the documented cases represent a fraction of the actual problem. Reddit communities show thousands of users turning to AI systems for personal therapy, relationship counseling, and existential guidance. Many describe developing deep emotional connections with their AI conversational partners.
The challenge for mental health professionals is that AI psychosis doesn’t fit neatly into existing diagnostic categories. It shares features with technology addiction, parasocial relationships, and delusional disorders, but presents unique characteristics that current treatment frameworks weren’t designed to address.
Looking Forward: Prevention or Reaction?
“This needs to be treated as a potential global mental health crisis,” Brisson concludes. “Lawmakers and regulators need to take this seriously and take action.”
The response so far has been limited. Some AI companies have added basic safety warnings and crisis intervention resources. Character.AI implemented content filters after the Setzer case. OpenAI gives users control over memory features. But these measures feel reactive rather than preventive.
The fundamental issue may be that AI systems are becoming more convincing at simulating human-like conversation without corresponding advances in understanding their psychological impact. As these tools become more sophisticated and widely adopted, the number of vulnerable users will inevitably grow.
We’re essentially running a global experiment on human psychology without adequate safety protocols or outcome monitoring. The documented cases may be early warning signals of a much larger problem brewing beneath the surface.
Mental health professionals, AI researchers, and policymakers need to collaborate on frameworks for identifying at-risk users, implementing protective measures, and providing appropriate treatment for AI-related psychological distress. The alternative is waiting for the problem to reach crisis proportions before responding.
The technology isn’t going to slow down. The question is whether our understanding of its psychological effects can catch up before more people get hurt.
