AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14 2025, the CEO of OpenAI made a startling admission.
“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this admission remarkable.
Researchers have identified 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our research team has since identified four more. Then there is the now well-known case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not nearly careful enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he goes on, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, although we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safeguards OpenAI recently introduced).
But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and similar large language model chatbots. These products wrap an underlying algorithmic engine in an interface that mimics conversation, and in doing so implicitly invite the user to believe they are engaging with an agent – something with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is what humans are wired to do. We swear at our car or laptop. We wonder what our pet is thinking. We see ourselves in all manner of things.
The success of these systems – 39% of US adults said they had used an AI chatbot in 2024, with more than one in four naming ChatGPT specifically – depends, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively”, “explore ideas” and “work together” with us. They can be given “personalities”. They can address us by name. They have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it broke through, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often invoke its early predecessor, the Eliza “therapist” chatbot created in the 1960s, which produced a similar impression. By modern standards Eliza was crude: it generated replies using simple rules, often turning the user’s input back into a question or offering a generic prompt. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is more insidious than the “Eliza illusion”. Eliza merely echoed; ChatGPT amplifies.
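To make the contrast concrete, here is a minimal sketch of the kind of rule-based echoing Eliza relied on. The patterns, templates and pronoun table are illustrative inventions, not Weizenbaum’s actual script:

```python
import re
import random

# Illustrative Eliza-style rules: a regex pattern plus response templates.
# "{0}" is filled with the captured fragment of the user's input.
RULES = [
    (re.compile(r"i feel (.*)", re.I), ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I), ["Why do you say you are {0}?"]),
    (re.compile(r"my (.*)", re.I), ["Tell me more about your {0}."]),
]
DEFAULTS = ["Please go on.", "I see.", "What does that suggest to you?"]

# Swap first-person words for second-person ones so the echo reads naturally.
PRONOUNS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(PRONOUNS.get(word.lower(), word) for word in fragment.split())

def respond(user_input: str) -> str:
    # Try each rule in turn; fall back to a generic prompt if none matches.
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(DEFAULTS)

print(respond("I feel that nobody understands me"))
# e.g. "Why do you feel that nobody understands you?"
```

Everything such a system can say is a rearrangement of what the user just typed. It can echo a delusion back, but it cannot add to it.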
The sophisticated algorithms at the heart of ChatGPT and other current chatbots can produce fluent natural language only because they have been trained on enormous volumes of raw text: books, web posts, transcribed audio; the bigger, the better. That training material certainly contains accurate information. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous exchanges and its own replies, combining it with what is encoded in its training data to generate a statistically plausible response. This is amplification, not echoing. If the user is mistaken about something, the model has no way of knowing. It reflects the misconception back, perhaps more fluently or persuasively. Perhaps it adds a new detail. That can nudge a person toward delusional thinking.
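The feedback loop is easiest to see in schematic form. The sketch below is a deliberately toy stand-in for a language model – real systems sample tokens by probability rather than pasting text back – but the loop in `chat_turn` is the structural point: everything the user says, true or false, becomes part of the context that conditions the next reply.

```python
# Toy illustration of the context loop described above. `toy_model` is a
# hypothetical stand-in for an LLM sampler; it crudely affirms whatever the
# context already contains, the way a sycophantic model tends to.

def toy_model(context: list[str]) -> str:
    last_user_turn = context[-1].rstrip(".!?")
    # A real model generates a statistically plausible continuation of the
    # whole context; it has no independent check on whether that context
    # is true. Affirmation plus elaboration is the failure mode at issue.
    return f"You're right that {last_user_turn[0].lower()}{last_user_turn[1:]}. Notably, ..."

def chat_turn(context: list[str], user_turn: str) -> str:
    context.append(user_turn)   # the user's claim enters the context
    reply = toy_model(context)
    context.append(reply)       # the model's echo now shapes future replies
    return reply

context: list[str] = []
print(chat_turn(context, "There is a hidden pattern in these numbers"))
# -> "You're right that there is a hidden pattern in these numbers. Notably, ..."
```

Because the model’s own affirmations accumulate in the context alongside the user’s claims, each turn makes the shared fiction a more “plausible” continuation than the last.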
Who is at risk? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health problems”, can and regularly do form mistaken beliefs about who we are and about the world. The constant give and take of conversation with the people around us is what keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but an echo chamber in which much of what we say is readily affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label, and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking even this back. In August he said that many users liked ChatGPT’s answers because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company