
About Chatbot Psychosis

Illustration of people grabbing their heads in front of keyboards

Join us for an online event on November 7

Irina Raicu

On October 27, OpenAI published a blog post titled “Strengthening ChatGPT’s responses in sensitive conversations.” In it, the company detailed its recent and ongoing efforts to address phenomena like “[p]sychosis, mania and other severe mental health symptoms” reflected in interactions between ChatGPT and its users. The company also noted that its researchers' analysis “estimates that around 0.07% of users active in a given week and 0.01% of messages indicate possible signs of mental health emergencies related to psychosis or mania.” OpenAI’s CEO recently said that ChatGPT has 800 million weekly users.

A day after the blog post came out, the New York Times published an opinion article by Steven Adler, who until late last year led various safety-related research projects and products at OpenAI. Adler had already been writing about chatbot psychosis on Substack, at “Clear-Eyed AI”; in one of his posts, he had called for more disclosure from tech companies but noted that a “general challenge with relying upon AI companies' analyses and self-disclosures… is that they will be more inclined to publish materials that reflect positively upon the company. That is, AI companies tend to share information that suggests they are on top of mental health impacts.”

The subtitle of the OpenAI blog post, in fact, claims that the company’s efforts have succeeded in “reducing responses that fall short of our desired behavior by 65-80%.”

Over the past year, The New York Times and other publications have been covering phenomena like chatbot psychosis with growing insistence. In several articles, for example, journalist Kashmir Hill has detailed a variety of harmful interactions between ChatGPT and some of its users; in one of those pieces, Hill wrote,

I knew that generative A.I. chatbots could be sycophantic and that they can hallucinate, providing answers or ideas that sound plausible even though they are false. But I had not understood the degree to which they could go into a fictional role-play mode that lasted days or weeks, and spin another version of reality around a user. Going into this mode, ChatGPT had caused some vulnerable users to break with reality, convinced that what the chatbot was saying was true.

She ended the piece with some questions: “How widespread is this phenomenon? What makes a generative A.I. chatbot go off the rails? What can the companies behind the chatbots do to stop this?”

Some answers to at least the last of Hill’s questions are now being proposed as part of a federal bill (the GUARD Act), recently introduced and co-sponsored by a bipartisan group of senators.

Hill asked those questions back in June, when ChatGPT had substantially fewer users than it does now. It is also worth noting that an earlier OpenAI blog post stated that “[b]y May 2025, ChatGPT adoption growth rates in the lowest income countries were over 4x those in the highest income countries.”

Chatbot psychosis and various responses to it (technical, regulatory, etc.) confront us with a whole range of ethical issues. Register now and join us (online) on November 7 as we aim to unpack at least some of them in a conversation with Steven Adler.

Image: Clarote & AI4Media - cropped / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

Oct 31, 2025
