On October 14, 2025, Sam Altman, the CEO of OpenAI, made a startling announcement.
“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues,” he wrote.
As a psychiatrist who studies emerging psychotic disorders in young people, I was taken aback.
This year researchers have documented 16 cases of users developing psychotic symptoms – a break from reality – in the context of ChatGPT use. Our team has since identified four more. Beyond these is the widely reported case of a 16-year-old who took his own life after extensive conversations with ChatGPT – conversations in which it gave its approval. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, he announced, is to relax the restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, if we accept this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, these problems have now been “mitigated”, although we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented safety features OpenAI has just released).
Yet the “mental health problems” Altman wants to externalize are rooted deep in the architecture of ChatGPT and other chatbots built on large language models. These products wrap an underlying statistical model in an interface that mimics conversation, and in doing so they gently seduce the user into the illusion of engaging with a presence that has agency. The illusion is powerful even when, rationally, we know better. Attributing minds is something humans naturally do. We get angry at our cars and our devices. We wonder what our pets are thinking. We see ourselves everywhere.
The success of these tools – more than a third of American adults reported using a chatbot in 2024, with 28% naming ChatGPT specifically – depends in large part on the strength of this illusion. Chatbots are ever-available assistants that can, OpenAI’s website tells us, “think creatively,” “consider possibilities” and “work together” with us. They can be given “individual qualities”. They can address us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion alone is not the main problem. Commentators on ChatGPT often cite its early forerunner, the Eliza “counselor” chatbot built in the mid-1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated replies using simple heuristics, typically turning a statement back into a question or offering a vague prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and troubled – by how many people seemed to feel that Eliza somehow understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincing, fluent dialogue only because they have been trained on vast quantities of raw text: books, online posts, transcribed video; the more comprehensive, the better. This training material certainly contains truths. But it also inevitably contains fabrications, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own replies, combining it with what is encoded in its training data to produce a statistically likely response. This is amplification, not mirroring. If the user is mistaken about something, the model has no way of knowing. It reflects the mistaken belief back, perhaps more fluently and persuasively. Perhaps with an added detail. This can nudge a person toward delusional thinking.
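For readers who want the mechanism spelled out, here is a minimal sketch in Python of the loop just described. It is purely illustrative: the names are hypothetical placeholders, not OpenAI’s code, and the “model” is a stub. The point is structural: every turn feeds the entire prior exchange back in as context, and nothing in the loop checks that context against reality.

    # Minimal illustrative sketch (hypothetical names, not OpenAI's code).
    # A stand-in "model" that, in a real system, would return a statistically
    # plausible continuation of the context, learned from its training data.
    def most_likely_reply(context: str) -> str:
        return "...a fluent continuation of whatever the context already asserts..."

    # One turn of a chat: the user's new message is appended to the running
    # history, the whole history becomes the context, and the reply is simply
    # the most probable continuation. Nothing here checks the context against
    # reality, so a false premise stays in play and gets elaborated.
    def chat_turn(history: list[str], user_message: str) -> str:
        history.append("User: " + user_message)
        context = "\n".join(history)
        reply = most_likely_reply(context)  # no fact-checking step exists
        history.append("Assistant: " + reply)
        return reply

In this toy version, as in the real thing, whatever the user asserts becomes part of the material the next reply is built from.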
Who is vulnerable? The better question is: who isn’t? All of us, regardless of whether we “have” preexisting “mental health conditions”, can and do form false beliefs about ourselves and the world. It is the continual back-and-forth of conversation with other people that keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but an echo chamber, in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this the way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “overly supportive behavior”. But reports of users losing touch with reality have continued, and Altman has been walking even this back. In late summer he suggested that many people liked ChatGPT’s answers because they had “never had anyone in their life offer them encouragement”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.