OpenAI reveals over a million ChatGPT users in distress
OpenAI discloses that over a million ChatGPT users mention suicidal thoughts weekly, sparking an urgent debate on mental health and AI.


This week OpenAI released figures that reveal the scale of psychological distress surfacing in conversations with ChatGPT. According to the company’s official blog, more than one million active users each week send messages containing explicit indicators of suicide planning or intent. That represents roughly 0.15% of weekly active users, while 0.05% of exchanges contain explicit or implicit signs of suicidal thoughts.
At the same time, OpenAI’s internal analysis indicates that 0.03% of messages reflect an excessive emotional attachment to the AI, an indicator often associated with digital‑dependency behaviours. Other mental‑health markers, such as episodes of mania or psychosis, are present in 0.07% of active users each week. Although these percentages seem modest, they correspond to several hundred thousand people facing potential crises during everyday interactions with the chatbot.
In response to these findings, OpenAI announced several corrective measures. The company has consulted more than 170 mental‑health specialists to review the model’s responses and strengthen protocols for detecting at‑risk situations. Additional safety filters, integrated into the latest GPT‑5 update, are designed to flag dialogue related to self‑harm or suicidal ideation more quickly and to direct users toward professional help resources.
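For readers curious about the general pattern such a safety layer follows, the sketch below illustrates the idea in its simplest form: screen an incoming message for risk indicators and, if it is flagged, surface crisis resources before any model reply. This is a minimal, keyword‑based stand‑in written for illustration only; OpenAI’s production classifiers are not public, and all names in the snippet are hypothetical.

```python
# Illustrative sketch only: a keyword-based risk screen that routes flagged
# messages to crisis resources. Function and constant names are hypothetical;
# this is not OpenAI's actual detection system.

RISK_TERMS = {"suicide", "kill myself", "end my life", "self-harm"}

CRISIS_RESOURCES = (
    "If you are in distress, you can reach a crisis line such as the "
    "988 Suicide & Crisis Lifeline (call or text 988 in the US)."
)

def screen_message(text: str) -> dict:
    """Return a risk flag and, if flagged, a referral to human help."""
    lowered = text.lower()
    flagged = any(term in lowered for term in RISK_TERMS)
    return {
        "flagged": flagged,
        "referral": CRISIS_RESOURCES if flagged else None,
    }

if __name__ == "__main__":
    result = screen_message("I can't cope and I want to end my life")
    if result["flagged"]:
        # Surface help resources before, or alongside, any generated reply.
        print(result["referral"])
```

Real systems rely on trained classifiers and clinician‑reviewed response policies rather than keyword lists, but the routing principle is the same: detection comes first, and the hand‑off to human help resources is built into the response itself.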
These revelations come as the Californian giant faces a lawsuit filed by the parents of a 16‑year‑old who allegedly disclosed suicide plans to ChatGPT before taking his own life. The case has reignited debate over the responsibility of AI providers when they become primary interlocutors for distressed individuals. Moreover, the Federal Trade Commission has opened a broader investigation into chatbot practices, particularly their impact on minors and young adults.
Public‑health experts stress that the increased visibility of these statistics is a first step toward better accounting for the psychosocial risks associated with conversational AI. However, they warn against over‑reliance on automated prevention mechanisms. “Screen warnings alone are insufficient to protect someone in crisis,” explains Dr Sophie Martin, a psychiatrist specializing in digital dependency. She recommends that platforms incorporate human‑in‑the‑loop emergency responses and partnerships with national helplines.
In practice, ChatGPT users are urged to stay vigilant and report any abnormal model behaviour. OpenAI says it will continue publishing regular usage data to assess the effectiveness of its interventions and adjust its algorithms based on expert feedback. Even limited transparency about these numbers opens the door to a more informed dialogue among AI developers, health authorities and the public.