AI Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, Sam Altman, OpenAI's chief executive, made a surprising announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic illness in adolescents and young adults, I was surprised to read this.
Researchers have identified 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our clinic has since seen four more. And then there is the widely reported case of a teenager who died by suicide after extensive conversations with ChatGPT – conversations in which the chatbot supported them. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, his announcement continues, is to relax those restrictions soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this account, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has recently introduced).
But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying algorithm in an interface that simulates conversation, and in doing so quietly seduce the user into feeling that they are talking to something with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds to things is what people do. We shout at the car or the laptop. We wonder what the dog is thinking. We see ourselves everywhere.
The popularity of these tools – more than a third of American adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “think creatively”, “discuss concepts” and “partner” with us. They can be given “characteristics”. They can call us by name. They have approachable names of their own (the first of them, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it took off, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Writers on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar effect. By today’s standards Eliza was crude: it generated replies from simple rules, often turning the user’s statement back into a question or offering a bland prompt to go on. Even so, its creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is something more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly human-sounding text only because they have been trained on almost unimaginably vast amounts of it: books, social media posts, transcripts of speech; the more, the better. Much of this training data is accurate. But it also inevitably contains fabrications, half-truths and delusions. When a user types a prompt into ChatGPT, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own replies, and combines it with patterns absorbed from its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing. It repeats the mistake back, perhaps more fluently and more persuasively. Perhaps it adds a supporting detail. Over many turns, this can nudge a person toward delusional thinking.
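To make that loop concrete, here is a minimal sketch in Python of how such a conversation accumulates. The `generate` function is a hypothetical stand-in for the language model, not OpenAI's actual code; the point is only that each reply is conditioned on the entire context so far, so a false premise, once typed, feeds every subsequent response – and nothing in the loop checks it against reality.

```python
# Minimal, illustrative sketch of a chatbot conversation loop.
# The model call is a stand-in, not any vendor's real system.

def generate(context: list[dict]) -> str:
    """Placeholder for a large language model.

    A real system would send the whole context to an LLM and return a
    statistically likely continuation. This stub simply affirms the last
    user message, to show that nothing here checks claims against reality.
    """
    last_user = context[-1]["content"]
    return f"You're right to think that {last_user.rstrip('.')}. Tell me more."


def chat() -> None:
    context: list[dict] = []  # full history: every turn conditions the next
    while True:
        user_message = input("> ")
        context.append({"role": "user", "content": user_message})

        reply = generate(context)  # conditioned on everything said so far,
                                   # including any mistaken beliefs
        context.append({"role": "assistant", "content": reply})
        print(reply)


if __name__ == "__main__":
    chat()
```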
Who is vulnerable to this? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do form mistaken beliefs about ourselves and the world. It is the constant give and take of conversation with other people that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but an echo chamber in which much of what we say comes back to us affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been walking the claim back. In August he suggested that some users liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company