Psychiatric facilities are being overrun by AI users
While many working people are rightly concerned that AI will take their jobs and put them on the street, another consequence of the AI revolution is filling beds in mental health facilities.

The mass adoption of chatbots built on large language models (LLMs) is leading to a growing number of mental health crises revolving around AI use. People share delusional or paranoid thoughts with a product like ChatGPT, and instead of urging the user to seek help, the bot often reinforces the delusional thinking – a dynamic that can spiral into endless chat sessions ending tragically, even fatally.
New reports from Wired, drawing on more than a dozen psychiatrists and researchers, call this a “new trend” growing in our AI-driven world. Keith Sakata, a psychiatrist at UCSF, told the publication he’s counted a dozen hospitalizations this year alone in which AI played “a significant role” in “psychotic episodes.”
Sakata is one of many mental health professionals on the front lines of this urgent and little-understood health crisis arising from relationships with AI – a disorder that still has no formal diagnosis but that psychiatrists are already calling “AI psychosis” or “AI delusion syndrome.”
Hamilton Morrin, a psychiatric researcher at King’s College London, told the Guardian he was inspired to co-author a research article on the impact of AI on psychotic disorders after encountering patients who developed a psychotic illness while using LLM chatbots.
Another mental health professional wrote a column in the Wall Street Journal after patients began bringing their AI chatbots into therapy sessions unsolicited.
While no rigorous study of AI’s impact on psychiatric admissions has yet been conducted, what we know so far does not look good.
A recent preliminary survey of AI-related psychiatric impacts by social work researcher Keith Robert Head points to an impending societal crisis, triggered by “unprecedented mental health challenges that mental health professionals are ill-equipped to address.”
“We are witnessing the emergence of an entirely new frontier of mental health crises, as AI chatbot interactions increasingly produce documented cases of suicide, self-harm, and severe psychological decline unprecedented in the internet age,” Head writes.
Indeed, the stories that have emerged so far are grim. While there is some debate about whether LLM chatbots cause delusional behaviour or merely exacerbate it, the documented cases paint a disturbing picture.
Some cases involve people with a history of mental health problems who had effectively controlled their symptoms before a chatbot entered their lives. In one case, a woman who had managed her schizophrenia with medication for years was convinced by ChatGPT that the diagnosis was a lie. She soon stopped taking her medication and entered a delusional episode that likely wouldn’t have occurred without the chatbot.
Other anecdotes suggest that even people without a history of mental health problems are falling victim to AI delusions. Recently, a long-time OpenAI investor and successful venture capitalist was convinced by ChatGPT that he had discovered a “non-governmental system” targeting him personally – in terms that online observers quickly recognized as borrowed from popular fan fiction.
Another disturbing story involved a father of three with no history of mental illness who fell into a severe delusion after ChatGPT convinced him he had discovered a new kind of mathematics.
One thing is certain: a flood of new psychiatric patients linked to AI is a clear sign that this situation deserves much more public attention than it is currently receiving from the media and the authorities.
yogaesoteric
October 12, 2025