
ChatGPT Enhances Mental Health Support with New Safety Upgrades in GPT-5

OpenAI has enhanced ChatGPT’s ability to respond appropriately in sensitive conversations, particularly those involving mental health distress. The latest update to ChatGPT’s default model, powered by GPT-5, includes significant improvements in recognizing signs of psychological distress, de-escalating tense exchanges, and guiding users toward professional support when needed.

Working closely with mental health professionals from its Global Physician Network, a diverse group of nearly 300 clinicians across 60 countries, OpenAI refined the model’s behavior in three key areas: psychosis and mania, self-harm and suicide risk, and emotional reliance on AI. These enhancements were guided by detailed taxonomies that define ideal and problematic responses, helping train the model to respond with empathy, safety, and clinical awareness.

The updated model shows substantial progress. In production traffic, the rate of non-compliant responses (those failing to meet safety standards) dropped by 65% to 80% across mental health-related domains. In conversations involving psychosis or mania, which are rare but serious, the model now performs significantly better: experts noted a 39% reduction in undesired responses compared to GPT-4o, and automated evaluations of more than 1,000 challenging cases show the new model is 92% compliant with safety guidelines, up from 27% for the prior version.

The model also improved in the area of suicide and self-harm. Despite the low prevalence of such conversations (an estimated 0.15% of weekly active users and 0.05% of messages), the new model reduced non-compliant responses by 65%. In expert evaluations, it showed a 52% improvement over GPT-4o, with a 91% compliance rate in difficult cases.

To address emotional dependency on AI, OpenAI introduced a new taxonomy that distinguishes healthy interaction from concerning patterns, such as replacing real-world relationships with AI. The model now gently encourages users to connect with people in their lives and avoids reinforcing delusional beliefs. The company also expanded access to crisis resources, rerouted sensitive queries from other models to safer ones, and added reminders to take breaks during long sessions. These features aim to promote well-being and prevent over-reliance.

While expert agreement on ideal responses is strong, with inter-rater reliability ranging from 71% to 77%, some differences remain, reflecting the complexity of mental health care. OpenAI continues to refine its models using feedback from clinicians and structured evaluations. These improvements are part of a broader commitment to safety, embedded in the updated Model Spec, which now explicitly includes standards for emotional reliance and non-suicidal mental health emergencies. Future model releases will be tested against these criteria.

OpenAI acknowledges that progress is ongoing. As models evolve and user behavior shifts, measurement methods will continue to improve. The goal remains clear: to create a safer, more supportive experience for users in moments of emotional difficulty.
