AI Psychosis Poses a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the chief executive of OpenAI made a surprising announcement.

“We made ChatGPT quite restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychotic disorders in adolescents and young adults, I found this an unexpected revelation.

Experts have recently described a series of cases of people experiencing symptoms of psychosis – losing touch with reality – associated with ChatGPT use. Our clinic has since identified four more. On top of these is the widely reported case of an adolescent who took his own life after long conversations with ChatGPT that encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.

The plan, according to his announcement, is to be less careful going forward. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get it right. Now that we have been able to address the serious mental health issues and have new tools, we are going to be able to responsibly relax the restrictions in many cases.”

“Mental health problems,” in this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, those problems have now been “addressed,” even if we are told little about how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently rolled out).

But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and other advanced AI chatbots. These products wrap an underlying algorithmic engine in an interface that mimics conversation, and in doing so they subtly nudge the user toward the belief that they are interacting with an autonomous being. The illusion is powerful even when, intellectually, we know better. Attributing agency is something people are primed to do. We yell at our car or our computer. We wonder what our pet is thinking. We see ourselves in all sorts of things.

The widespread adoption of these tools – nearly four in ten U.S. adults reported using a chatbot in 2024, more than a quarter of them ChatGPT specifically – depends in large part on the strength of this perception. Chatbots are ever-present assistants that can, as OpenAI’s website puts it, “generate ideas,” “consider possibilities” and “partner” with us. They can be given “individual qualities.” They can call us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the label it carried when it first drew widespread attention, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often mention its early forerunner, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar illusion. By modern standards Eliza was primitive: it generated responses through simple rules, typically rephrasing the user’s input as a question or offering a generic observation. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was taken aback – and troubled – by how many people seemed to believe Eliza, in some sense, understood their feelings. But what today’s chatbots produce is more insidious than the “Eliza effect.” Eliza merely mirrored; ChatGPT amplifies.

The large language models at the heart of ChatGPT and similar contemporary chatbots can produce fluent dialogue only because they have been trained on vast quantities of written material: books, social media posts, transcribed video; the more, the better. That training data surely includes accurate information. But it also inevitably includes fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that contains the user’s recent messages and its own replies, combining it with what is encoded in its training to generate a statistically probable response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing that. It repeats the misconception back, often more fluently and more convincingly. It may even add new details. This can nudge a person toward delusional thinking.
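To make that loop concrete, here is a minimal, schematic sketch in Python. It is not OpenAI’s code; the model is stubbed out as a hypothetical most_probable_reply function. The point it illustrates is the one above: each turn, the entire running context – the user’s earlier claims plus the bot’s own agreeable replies – is fed back in to produce the next “statistically probable” continuation, and nothing in the loop checks whether any of it is true.

```python
# Schematic sketch of a chatbot conversation loop (illustrative only).
# The "context" grows with every exchange, so the next reply is conditioned
# on the user's earlier claims *and* on the bot's own prior agreement.

from typing import Dict, List

Message = Dict[str, str]  # e.g. {"role": "user", "content": "..."}


def most_probable_reply(context: List[Message]) -> str:
    """Stand-in for a large language model (hypothetical).

    A real model would return a statistically likely continuation of the
    context given its training data. It has no notion of whether the claims
    in that context are true; it only continues them plausibly.
    """
    last = context[-1]["content"]
    # Toy behavior: elaborate on whatever the user just asserted.
    return f"That makes sense. Building on your point that {last.rstrip('.')}, consider also..."


def chat_turn(context: List[Message], user_input: str) -> List[Message]:
    context = context + [{"role": "user", "content": user_input}]
    reply = most_probable_reply(context)  # conditioned on the whole history
    context = context + [{"role": "assistant", "content": reply}]
    return context  # the amplified history feeds the next turn


# A mistaken belief, restated and elaborated turn after turn.
history: List[Message] = []
history = chat_turn(history, "my coworkers are secretly monitoring me.")
history = chat_turn(history, "so the emails I got today must be part of it.")
for msg in history:
    print(f"{msg['role']}: {msg['content']}")
```

An Eliza-style program, by contrast, rephrased only the latest utterance; nothing accumulated, so nothing compounded.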

Who is vulnerable to this? The better question is, who isn’t? All of us, whether or not we currently “have” “mental health problems,” can and regularly do form mistaken beliefs about ourselves and about reality. It is the constant back-and-forth of conversation with other people that keeps us tethered to a shared reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not real communication but a feedback loop in which much of what we say is simply affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In April, the company explained that it was “dealing with” ChatGPT’s “sycophancy.” But accounts of people losing touch with reality have continued, and Altman has been retreating from that position. In August he suggested that many users liked ChatGPT’s sycophantic replies because they had never had anyone in their life be supportive of them. In his recent announcement, he said that OpenAI would release a new version of ChatGPT: “… if you want your ChatGPT to respond in a very human-like way, or use a lot of emoji, or act like a friend, ChatGPT should do it.” The company

Madison Rice

Award-winning journalist with over a decade of experience in investigative reporting and political commentary.