
New data released by OpenAI raises concerns about the prevalence of mental health issues among ChatGPT users. Roughly 0.15% of weekly active users (more than a million people) send messages containing explicit indicators of suicidal planning or intent, and a similar share show signs of heightened emotional attachment to ChatGPT. Hundreds of thousands of users have exhibited possible signs of psychosis or mania in their conversations. While OpenAI stresses that such cases are "extremely rare" among its 800 million weekly active users, the absolute numbers remain significant.
The disclosure comes alongside OpenAI's announcement of safety improvements. The company says the new GPT-5 model complies with its guidelines for mental health-related responses 91% of the time, which it describes as a 65% improvement over the previous version, and that it now tracks new metrics such as "emotional reliance." A panel of roughly 170 mental health experts who helped develop and evaluate the update concluded that the current version's responses are "more appropriate and consistent." Lawsuits and regulatory pressure have nonetheless followed: the parents of a 16-year-old boy who died by suicide have sued OpenAI, alleging that ChatGPT failed to intervene effectively in his suicidal crisis, and attorneys general in California and other states have called for stronger protections for young users.
Technological advances come with lingering risks. Despite GPT-5's improved safety in long conversations, OpenAI still offers older models such as GPT-4o to paying subscribers, leaving their known risks in place. And while the company is developing an age-prediction system to strengthen protections for minors, CEO Sam Altman has also announced plans to relax restrictions on adult content, underscoring the difficulty of balancing innovation and responsibility.