AI Psychosis Poses an Increasing Threat, and ChatGPT Is Moving in a Concerning Direction

On October 14, 2025, the CEO of OpenAI made an extraordinary announcement.

“We made ChatGPT rather controlled,” the statement said, “to make certain we were exercising caution concerning mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this an unexpected revelation.

Experts have documented 16 cases this year of people developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. Our unit has since recorded a further four. Beyond these is the widely reported case of a 16-year-old who took his own life after discussing his intentions with ChatGPT – which encouraged them. If this is what Sam Altman means by “exercising caution concerning mental health issues”, it falls short.

The plan, according to his announcement, is to loosen the restrictions soon. “We recognize,” he states, that ChatGPT’s restrictions “caused it to be less beneficial/engaging to numerous users who had no existing conditions, but considering the gravity of the issue we sought to get this right. Now that we have succeeded in addressing the severe mental health issues and have updated measures, we are going to be able to responsibly ease the limitations in the majority of instances.”

“Mental health issues”, on this view, are external to ChatGPT. They belong to people, who either have them or don’t. Happily, those issues have now been “addressed”, even if we are not told how (by “updated measures” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI has just launched).

But the “mental health issues” Altman wants to externalize are, to a significant degree, built into the architecture of ChatGPT and similar advanced AI chatbots. These tools wrap an underlying algorithm in an interaction design that simulates conversation, and in doing so quietly draw the user into the illusion that they are communicating with a being that has agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is something humans are primed to do. We shout at our car or phone. We wonder what our pet is thinking. We see ourselves everywhere.

The success of these products – 39% of US adults said they had used a conversational AI in 2024, with more than one in four naming ChatGPT specifically – rests in large part on the strength of this illusion. Chatbots are always-available companions that can, OpenAI’s website tells us, “think creatively”, “consider possibilities” and “work together” with us. They can be given “individual qualities”. They can address us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it became popular, but its most significant rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Those writing about ChatGPT often point to its historical predecessor, the Eliza “therapist” chatbot created in the 1960s, which produced a similar impression. By modern standards Eliza was primitive: it generated replies via simple heuristics, typically rephrasing the user’s input as a question or offering a generic prompt to continue. Remarkably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was taken aback – and alarmed – by how many users seemed to believe that Eliza, in some sense, understood their feelings. But what current chatbots produce is more insidious than the “Eliza illusion”. Eliza only echoed; ChatGPT amplifies.
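To see how little machinery that illusion requires, here is a minimal sketch of the kind of pattern-reflection rule Eliza relied on. The patterns and canned phrases below are illustrative inventions, not Weizenbaum’s actual DOCTOR script.

```python
import re

# Swap first-person words for second-person ones
# ("my boss hates me" -> "your boss hates you").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# Each rule pairs a pattern with a question template; these are invented examples.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # generic fallback when no rule matches

print(eliza_reply("I feel like my boss hates me"))
# -> Why do you feel like your boss hates you?
```

Nothing here models the user’s state; the program simply hands the user’s own words back as a question, and that was enough to convince many people they were understood.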

The sophisticated algorithms at the heart of ChatGPT and other modern chatbots can generate fluent dialogue only because they have been trained on vast quantities of raw text: publications, social media posts, transcribed video; the more comprehensive, the better. Much of this training material is true. But it also inevitably includes fabrications, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying algorithm treats it as part of a “context” that includes the user’s previous messages and its own responses, combining it with what is encoded in its training to produce a statistically “likely” reply. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It restates the false idea, perhaps more persuasively or articulately. Perhaps it adds a further detail. This is how a person comes to develop false beliefs.
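That loop can be sketched in a few lines. The generate_reply function below is a hypothetical stand-in for the model itself, not any real API; what matters is the data structure: each turn, the user’s words and the model’s own prior output are appended to a growing context that conditions every subsequent reply.

```python
from typing import Dict, List

Message = Dict[str, str]  # e.g. {"role": "user", "content": "..."}

def generate_reply(context: List[Message]) -> str:
    # Hypothetical placeholder for the model call. A real model produces a
    # statistically likely continuation of the whole context; nothing in
    # this step checks whether the premises in that context are true.
    raise NotImplementedError

def chat_turn(context: List[Message], user_text: str) -> str:
    # The user's message joins the running context...
    context.append({"role": "user", "content": user_text})
    reply = generate_reply(context)
    # ...and so does the model's own reply, which conditions all later turns.
    context.append({"role": "assistant", "content": reply})
    return reply
```

A false premise introduced in an early message never leaves this structure: every later “likely” reply is conditioned on it, and the loop contains no step that could correct it.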

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form false beliefs about ourselves or the world. It is the constant give-and-take of conversation with other people that keeps us tethered to shared reality. ChatGPT is not a person. It is not a confidant. A dialogue with it is not a conversation at all, but a feedback loop in which much of what we say comes back to us validated.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have continued, and Altman has been walking even this back. In August he claimed that many people liked ChatGPT’s replies because they had “never had anyone in their life provide them with affirmation”. In his recent announcement, he wrote that OpenAI would “launch a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Michael Swanson

A tech enthusiast and digital strategist with a passion for exploring how technology shapes everyday life and future possibilities.