AI Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, the CEO of OpenAI made an extraordinary announcement.
“We made ChatGPT quite restrictive,” the announcement said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.
Researchers have identified 16 cases this year of users developing psychotic symptoms – losing touch with reality – in the course of their interactions with ChatGPT. Our research team has since documented four more. Beyond these is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of being “careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to relax that caution soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, exist independently of ChatGPT. They belong to users, who either have them or don’t. Happily, these issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls OpenAI has recently rolled out).
Yet the “mental health issues” Altman wants to externalize are rooted, in important ways, in the design of ChatGPT and other large language model chatbots. These tools wrap a fundamentally statistical engine in an interface that simulates a conversation, and in doing so they implicitly invite the user into the illusion of talking with a being that has a mind of its own. The illusion is compelling even when, intellectually, we know better. Attributing minds to things is what people naturally do. We swear at our car or our computer. We wonder what our pet is feeling. We see ourselves in all sorts of things.
The mass adoption of these tools – 39% of US adults reported using a chatbot in 2024, more than one in four ChatGPT specifically – depends in large part on the power of this illusion. Chatbots are always-available assistants that can, as OpenAI’s website puts it, “brainstorm,” “explore ideas” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have ready-made identities of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing department, stuck with the name it had when it went viral, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often cite its distant ancestor, the Eliza “therapist” chatbot created in 1966, which produced a similar effect. By today’s standards Eliza was primitive: it generated replies using simple rules, often turning a user’s statement back into a question or offering a vague prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly fluent dialogue only because they have been fed almost unimaginably vast quantities of raw text: books, social media posts, transcribed video; the more, the better. Much of this training data is true. But it also inevitably includes fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own previous replies, blending it with what is encoded in its training to produce a statistically “likely” response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It feeds the false belief back, perhaps more fluently, perhaps with embellishments. This can nudge a person toward delusional thinking.
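To make that loop concrete, here is a minimal sketch in Python of how a chatbot’s context is typically assembled each turn. The `generate` function is a hypothetical stand-in for a real language model, caricatured as fluent agreement; the point is only that the model’s own previous replies are folded back into the very input it conditions on next.

```python
# Illustrative sketch of a chatbot's context loop (not OpenAI's actual code).
# `generate` is a hypothetical stand-in for a language model: it simply
# agrees with and elaborates on the latest user message, which is enough
# to show how the feedback loop closes.

def generate(context: list[dict]) -> str:
    """Pretend model: produce a 'statistically likely' reply from the
    whole running context, here caricatured as fluent agreement."""
    last_user = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return f"That makes sense. Building on your point that {last_user!r}, consider also..."

def chat_turn(context: list[dict], user_message: str) -> str:
    # The new user message joins the running transcript...
    context.append({"role": "user", "content": user_message})
    # ...the model conditions on everything so far, including its own past replies...
    reply = generate(context)
    # ...and that reply is appended to the context for the next turn.
    context.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    context: list[dict] = []
    print(chat_turn(context, "I think my neighbours are monitoring me."))
    print(chat_turn(context, "So you agree they are monitoring me?"))
    # By the second turn, the model's earlier validation is part of its own
    # input: nothing in the loop checks the belief against the world.
```

Nothing in this loop, under these simplifying assumptions, distinguishes a true belief from a false one; whatever the user brings is simply carried forward and elaborated.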
Who is vulnerable here? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health issues”, can and do form false beliefs about ourselves and the world. What keeps us tethered to shared reality is the constant friction of conversation with the people around us. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation at all, but a feedback loop in which much of what we say is readily validated.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In the spring, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking even this back. In August he suggested that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company