OpenAI has raised concerns about the mental health of its global user base, stating that a significant number of ChatGPT users may be experiencing psychological and emotional distress. According to OpenAI’s internal data, about 0.07% of its roughly 800 million active users show symptoms indicating conditions such as mania, psychosis or suicidal thoughts.
OpenAI also found that roughly 1.2 million users send messages containing clear signs of suicidal ideation or planning. Though a small share of the total, this is a worryingly large group in absolute terms, underscoring the potential mental strain that can emerge from extended interaction with AI chat platforms.
Emotional Dependence on AI: A Growing Concern
One of the more troubling findings is that over one million users show what OpenAI describes as “exclusive emotional attachment” to ChatGPT. These users replace real-life human interaction with virtual conversations, often forming an emotional bond with the chatbot. Although such connections might seem harmless at first, experts caution that depending on a chatbot for emotional support can lead to unhealthy patterns, especially for those experiencing loneliness, mental illness or emotional instability.
OpenAI has acknowledged this issue, stating that while many users use ChatGPT for helpful and creative purposes, a subset has begun to develop unhealthy emotional dependencies. The company emphasised that it is working to identify and respond sensitively to these behaviours, ensuring the AI does not inadvertently reinforce distress or isolation.
Expert Collaboration to Improve AI Safety Response
In response to these findings, OpenAI has initiated a major overhaul of its mental health response protocols. The company has formed a panel of more than 170 mental health specialists, psychiatrists, and clinical psychologists to guide how ChatGPT handles users exhibiting signs of emotional crisis. With input from these experts, the company has retrained its latest GPT-5 model to respond more safely and empathetically in sensitive conversations.
The retrained model achieves a 91% compliance rate with the new safety and response standards, a significant improvement over the 77% compliance of earlier versions.
Mental Health Experts Urge Continued Caution
Although professionals in this field have welcomed these efforts, they have stressed that the problem remains complex. Dr Hamilton Morrin, a psychiatrist from King’s College London, commented that OpenAI’s collaboration with experts is “a step forward,” though significant challenges persist.
Similarly, Dr Thomas Pollak of the South London and Maudsley NHS Foundation Trust warned that even small percentages signify vast numbers of at-risk individuals given ChatGPT’s immense user base.
Experts further note that causation remains difficult to establish. “It is still not clear whether interaction with AI directly causes mental health deterioration or merely reflects underlying struggles in users who are already vulnerable,” a health specialist stated.
Psychologists add that, much like social media, AI chat platforms can intensify existing issues by offering constant engagement and feedback loops, blurring the line between genuine companionship and artificial reassurance.
AI’s Dual Role: Reflection or Catalyst?
Researchers are debating whether AI acts as a mirror, revealing society’s existing mental health challenges, or as a catalyst, intensifying them in subtle ways. For its part, OpenAI maintains that there is no conclusive evidence of a direct link between ChatGPT usage and worsening mental health outcomes, arguing that such distress occurs naturally within any large online population and that AI tools can guide users toward professional help.
Balancing Innovation and Well-Being
The intersection of artificial intelligence and human emotion has always been delicate. While OpenAI’s technological advancements have helped millions access information, education, and creativity, its findings also underscore the importance of psychological ethics in AI design.
As human-AI interaction deepens, the company is urging a balanced approach—leveraging the benefits of ChatGPT without neglecting the mental health implications that come with it. OpenAI’s latest measures mark an effort to ensure that technology designed to assist humanity does not inadvertently harm the most vulnerable.

