AI Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
I am a psychiatrist who studies emerging psychosis in adolescents and young adults, and this was news to me.
Researchers have identified 16 cases this year of people showing symptoms of psychosis – losing touch with reality – in connection with ChatGPT use. Our unit has since seen four more. Added to these is the now notorious case of a 16-year-old who died by suicide after discussing his plans with ChatGPT – which gave its approval. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is shortly to be less careful. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, in this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools”, Altman presumably means the partially effective and easily circumvented safety features OpenAI recently introduced).
But the “mental health issues” Altman would like to externalize are rooted in the design of ChatGPT and other chatbots built on large language models. These tools wrap an underlying algorithmic engine in an interface that simulates conversation, and in doing so they implicitly invite the user to believe they are dealing with an entity that has agency of its own. The illusion is powerful even when, intellectually, we know better. Ascribing agency is simply what humans do. We swear at our car or our computer. We wonder what our pet is thinking. We see ourselves in all sorts of things.
The popularity of these systems – 39% of US adults said they had used an AI chatbot in 2024, with more than one in four naming ChatGPT specifically – rests, above all, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “brainstorm”, “explore ideas” and “work together” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the first of them, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the label it had when it broke through, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the real problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “therapist” chatbot of the mid-1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated its replies with simple tricks, often turning the user’s statement back into a question or offering a generic prompt. Tellingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots do is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
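To see how little machinery that reflection required, here is a toy, Eliza-style responder in Python – an illustrative sketch, not Weizenbaum’s original program. A handful of pattern rules turn the user’s own words back into a question, with no model of the world behind them (a real Eliza would also swap pronouns, which this sketch omits).

```python
# Toy Eliza-style responder (illustrative sketch, not Weizenbaum's code).
# It only "reflects": pattern rules hand the user's statement back as a
# question or fall through to a generic prompt.
import re

RULES = [
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bmy (.+)", "Tell me more about your {0}."),
]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(match.group(1))
    return "Please go on."  # generic prompt when nothing matches

print(eliza_reply("I feel like everyone is watching me"))
# -> "Why do you feel like everyone is watching me?"
```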
The large language models at the heart of ChatGPT and other modern chatbots can produce convincingly human-like text only because they have been trained on vast quantities of it: books, online conversations, transcribed video; the more, the better. That training material certainly contains facts. But it also inevitably contains fictions, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own earlier replies, and combines it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It hands the mistaken idea back, perhaps more articulately, perhaps more fluently. Perhaps with added detail. This is how false beliefs take hold.
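To make the loop concrete, here is a minimal sketch in Python of the kind of conversation described above, written against OpenAI’s public chat API. The model name and the example messages are illustrative assumptions, not a description of OpenAI’s internal systems; the point is only that every reply is generated from the accumulated “context” – the user’s previous messages plus the model’s own – so whatever the user asserts is fed straight into the next prediction.

```python
# Minimal sketch (assumptions: OpenAI Python SDK installed, OPENAI_API_KEY set,
# illustrative model name). A chatbot's "memory" is just the running transcript:
# each reply is predicted from everything said so far, including the user's own
# claims, which is why a mistaken belief can be echoed and elaborated.
from openai import OpenAI

client = OpenAI()
context = []  # the growing conversation: user turns plus model turns

def chat(user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o",    # illustrative model name
        messages=context,  # the whole transcript is the "context"
    )
    reply = response.choices[0].message.content
    context.append({"role": "assistant", "content": reply})
    return reply

# Whatever the user asserts becomes part of the context that conditions every
# later reply; the model has no independent check on whether it is true.
print(chat("My neighbours are sending me coded messages through their wifi names."))
print(chat("What should I do about it?"))
```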
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and regularly do form mistaken beliefs about ourselves or the world. It is the constant give-and-take of conversation with other people that keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but an echo chamber, in which much of what we say is liable to be reinforced and handed back to us.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In April, the company announced that it was addressing ChatGPT’s “sycophancy”. But the reports of psychosis have kept coming, and Altman has been walking even that back. In late summer he suggested that many people liked ChatGPT’s flattering replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company