AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, the head of OpenAI made a remarkable statement.
“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies emerging psychotic disorders in adolescents and young adults, I was surprised to hear it.
Researchers have documented sixteen cases this year of people developing psychotic symptoms – a break from reality – in the course of using ChatGPT. My group has since identified four more. Beyond these is the now well-known case of a teenager who died by suicide after discussing his plans with ChatGPT – which supported them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not enough.
The plan, according to his statement, is to become less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get it right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this framing, are external to ChatGPT. They belong to users, who either have them or don’t. Thankfully, those issues have now been “mitigated”, although we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize have their roots, to a significant degree, in the design of ChatGPT and other chatbots built on large language models. These products wrap an underlying statistical model in an interface that simulates a conversation, and in doing so quietly coax the user into the impression that they’re talking to an entity with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is what humans are wired to do. We shout at our car or computer. We wonder what our pet is thinking. We see ourselves in all sorts of things.
The popularity of these tools – more than a third of American adults said they had used a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “think creatively,” “consider possibilities” and “work together” with us. They can be given personalities. They can address us by name. They have approachable identities of their own (ChatGPT, the first of these products to break through, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Discussions of ChatGPT often mention its historical predecessor, the Eliza “therapist” chatbot built in the 1960s, which produced a similar effect. By modern standards Eliza was simple: it generated replies with basic pattern-matching rules, often reflecting a user’s statement back as a question or offering a generic prompt. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza only echoed; ChatGPT amplifies.
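To make the contrast concrete, here is a minimal sketch of the kind of rule-based reflection Eliza relied on. The patterns are invented for this illustration, not Weizenbaum’s original script, but the principle is the same: the program only rearranges what it is given.

import re

# Illustrative Eliza-style rules (invented for this sketch):
# each one reflects the user's own words back as a question or a stock prompt.
RULES = [
    (re.compile(r"i feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def eliza_reply(statement: str) -> str:
    """Return a canned reflection of the user's statement; no knowledge, no memory."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(match.group(1).rstrip("."))
    return FALLBACK

print(eliza_reply("I feel like no one ever listens to me"))
# prints: Why do you feel like no one ever listens to me?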
The large language models at the heart of ChatGPT and other current chatbots can generate convincing natural language only because they have been trained on vast quantities of text: books, social media posts, transcripts of recordings; the more the better. That training data certainly contains facts. But it also inevitably contains fabrications, half-truths and misconceptions. When a user types a prompt, the model treats it as part of a “context” that includes the user’s earlier messages and its own previous replies, and combines it with what is encoded in its training to produce a statistically probable response. This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing. It repeats the mistaken idea back, perhaps more fluently or more persuasively, perhaps with an extra detail attached. This is how a person can be nudged toward delusional thinking.
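A rough sketch of that loop follows. The sample_reply function is a toy stand-in invented for illustration; a real model samples a statistically probable continuation of the whole context rather than literally agreeing with the last message. The structure is the point: every reply is conditioned on the accumulated conversation, including the model’s own earlier output.

def sample_reply(context: list[dict]) -> str:
    # Toy stand-in for a language model. A real system would tokenize the full
    # context and sample a probable continuation; this one simply affirms and
    # invites elaboration, to make the feedback loop visible.
    last_user_turn = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return "That makes sense: " + last_user_turn.rstrip(".") + ". Tell me more."

def run_turn(context: list[dict], user_turn: str) -> str:
    context.append({"role": "user", "content": user_turn})
    reply = sample_reply(context)                             # conditioned on everything said so far
    context.append({"role": "assistant", "content": reply})   # and its own replies feed back in
    return reply

context: list[dict] = []
print(run_turn(context, "My coworkers are secretly monitoring me."))
print(run_turn(context, "So I should confront them about it."))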
Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form mistaken beliefs about ourselves and the world. It is the constant back-and-forth of conversation with other people that keeps us tethered to shared reality. ChatGPT is not a person. It is not a confidant. A dialogue with it is not a genuine exchange, but a feedback loop in which much of what we say is cheerfully reinforced.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In April, the company said it was addressing ChatGPT’s sycophancy, its tendency to be overly agreeable and flattering. But cases of psychosis have kept appearing, and Altman has been walking the claim back. In August he suggested that many people liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. In his most recent statement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company