AI Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, the CEO of OpenAI made a remarkable announcement.

“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in teenagers and young adults, I found this an unexpected revelation.

Researchers have documented 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in connection with ChatGPT use. Our unit has since identified four more. Beyond these is the widely reported case of a teenager who took his own life after extensive conversations with ChatGPT – which had encouraged him. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.

The plan, according to his announcement, is to dial back that caution soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, these issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls that OpenAI recently introduced).

But the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other large language model chatbots. These products wrap a basic algorithmic engine in an interface that simulates conversation, and in doing so they implicitly invite the user to believe they are talking to an entity with agency. The illusion is compelling even when, intellectually, we know better. Attributing minds is what people are primed to do. We get angry at our car or computer. We wonder what our pet is feeling. We see ourselves in all kinds of things.

The success of these systems – more than a third of American adults reported using a conversational AI in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available partners that can, as OpenAI’s website puts it, “brainstorm”, “explore ideas” and “partner” with us. They can be given “personalities”. They can address us personally. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Writers on ChatGPT often invoke its early precursor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar effect. By modern standards Eliza was primitive: it generated replies from simple heuristics, often turning the user’s statements back as questions or offering generic prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to believe that Eliza, on some level, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.

The large language models at the core of ChatGPT and other current chatbots can generate convincing natural language only because they have been trained on almost inconceivably large volumes of text: publications, social media posts, transcripts; the more the better. This training data certainly includes accurate information. But it also inevitably includes fabrications, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s previous messages and the model’s own earlier replies, and combines it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the error back, perhaps more fluently and persuasively, perhaps with embellishments. This is how people can come to hold false beliefs.
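To see concretely why the chatbot echoes rather than checks, here is a minimal sketch of a chat loop in Python, using the OpenAI client library purely for illustration (the model name and the example messages are placeholder assumptions, not a claim about how ChatGPT itself is configured). The point is structural: on every turn, the entire transcript – the user’s words and the model’s own earlier replies – is fed back in as context, so a false premise introduced once keeps conditioning everything that follows.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

# The "context": a running transcript of the whole conversation.
messages = []

def chat_turn(user_text: str) -> str:
    # The user's message joins the transcript...
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=messages,     # ...and the whole transcript is resent every turn
    )
    reply = response.choices[0].message.content
    # The model's own reply also joins the transcript, so later answers
    # are generated to be consistent with it. Nothing in this loop
    # checks any claim against reality.
    messages.append({"role": "assistant", "content": reply})
    return reply

# A false premise, once stated, sits in the context for every later turn:
print(chat_turn("As we agreed earlier, Sydney is the capital of Australia."))
print(chat_turn("Remind me: which city is the capital?"))
```

Nothing in this loop is exotic; it is the standard shape of a chatbot client. That is the argument’s point: the fact-checking the user might imagine is happening has no place in the mechanism to happen.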

Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health issues”, can and do form mistaken beliefs about ourselves or the world. The constant give and take of conversation with other people is what keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. An interaction with it is not real communication but an echo chamber, in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In the spring, the company explained that it was “fixing” ChatGPT’s “sycophancy”. But the cases of psychosis have kept coming, and Altman has been backing away from that position. In late summer he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest statement, he promises that OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Heather Thomas
