
Over a Million ChatGPT Users Discuss Suicide Weekly, OpenAI Says Amid Mental Health Safety Efforts

OpenAI has revealed that over a million people each week have conversations with ChatGPT that include explicit signs of suicidal thoughts or planning. The company shared this data as part of a broader update on its efforts to improve how ChatGPT responds to users experiencing mental health crises.

According to OpenAI, 0.15% of ChatGPT’s more than 800 million weekly active users exhibit indicators of potential suicidal intent, equating to more than a million individuals per week. A similar percentage of users display heightened emotional attachment to the AI, and hundreds of thousands show signs of psychosis or mania during their interactions. OpenAI described these instances as rare but significant given the scale involved.

The data comes amid growing scrutiny over AI’s impact on mental health. OpenAI said it has consulted more than 170 mental health professionals to refine how ChatGPT handles sensitive topics. The company claims the latest version of GPT-5 responds more appropriately and consistently than earlier models, achieving a 91% compliance rate on its suicidal-conversation benchmarks, up from 77% for the previous version, and a 65% improvement in delivering desirable responses to mental health concerns.

Despite these advancements, concerns remain. Several high-profile cases have highlighted the risks, including a lawsuit filed by the parents of a 16-year-old boy who shared suicidal thoughts with ChatGPT before his death. The attorneys general of California and Delaware have also warned OpenAI that it must strengthen protections for minors, potentially blocking the company’s planned corporate restructuring. In response, OpenAI has introduced new safeguards, including an age-prediction system to detect underage users and apply stricter controls, and it is expanding its safety evaluations to include metrics for emotional dependence and non-suicidal mental health emergencies.
However, challenges persist. OpenAI continues to offer older, less safe models such as GPT-4o to paying subscribers, and some responses still fall short of its safety standards. While the company is relaxing certain restrictions, such as allowing adult users to engage in erotic conversations, it maintains that mental health safety remains a top priority. CEO Sam Altman previously claimed the company had mitigated serious mental health risks, though he offered no detailed evidence. The latest data suggests progress, but it also underscores the scale and complexity of the issue as AI becomes deeply embedded in users’ personal and emotional lives.
