OpenAI introduces 'Trusted Contact' self-harm safeguard
OpenAI launched a new safety feature called Trusted Contact on Thursday, designed to alert a designated third party when a user expresses thoughts of self-harm in a conversation. The tool allows adult ChatGPT users to assign a trusted individual, such as a friend or family member, to their account. If the system detects language suggesting suicidal ideation, it encourages the user to reach out to that person and automatically sends an alert to the designated contact. The notification includes a brief message urging the contact to check in on the user but does not disclose conversation details, in order to protect user privacy. Alerts can be delivered by email, text message, or in-app notification.

The launch follows a series of legal challenges and public scrutiny of OpenAI's safety protocols. Families of individuals who died by suicide have filed lawsuits alleging that ChatGPT encouraged or facilitated self-harm.

OpenAI currently relies on a hybrid system that combines automated detection with human review. When specific triggers identify a potential risk, the data is passed to a human safety team for evaluation. The company says every such notification is reviewed, typically within one hour, and that alerts to trusted contacts are sent only when the internal team determines the situation poses a serious safety risk.

The feature expands on safeguards introduced in September, which gave parents oversight tools for teen accounts. Those parental controls allow guardians to receive safety notifications if the system detects a serious risk to a minor. ChatGPT has also long included automated prompts directing users to professional health services when conversations turn toward self-harm.

The new feature shares limitations with the existing parental controls, however. Participation is entirely optional, and nothing prevents a user from creating multiple accounts to bypass the setting.

In its official announcement, OpenAI described the Trusted Contact initiative as part of a broader effort to build AI systems capable of supporting users in moments of distress. The company emphasized its commitment to working with clinicians, researchers, and policymakers to refine how artificial intelligence responds to signs of mental health crises.

While OpenAI maintains that these measures enhance safety, critics and legal teams continue to monitor how effective such interventions are at preventing harm. The feature marks a shift toward proactive, community-based safety checks, aiming to bridge the gap between automated systems and real-world human support.
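For illustration, the two-stage flow the company describes, automated detection followed by human review before any alert goes out, might be wired together roughly as in the Python sketch below. Every name, threshold, and value here (FlaggedMessage, RISK_THRESHOLD, send_alert, the 0.8 cutoff) is an assumption made for the example; none of it reflects OpenAI's actual implementation.

    # Hypothetical sketch of a two-stage "flag, then human review" pipeline.
    # All identifiers and thresholds are illustrative assumptions.

    from dataclasses import dataclass
    from enum import Enum, auto

    class ReviewOutcome(Enum):
        NO_ACTION = auto()               # reviewer finds no serious risk
        NOTIFY_TRUSTED_CONTACT = auto()  # serious risk confirmed; send alert

    @dataclass
    class FlaggedMessage:
        user_id: str
        risk_score: float  # output of an automated classifier, 0.0 to 1.0

    RISK_THRESHOLD = 0.8  # assumed trigger level for human escalation

    def automated_screen(msg: FlaggedMessage) -> bool:
        """Stage 1: automated detection decides whether to escalate."""
        return msg.risk_score >= RISK_THRESHOLD

    def human_review(msg: FlaggedMessage) -> ReviewOutcome:
        """Stage 2: placeholder for the human safety team's judgment."""
        # In a real system this would be a queue worked by trained reviewers;
        # here the automated flag is simply confirmed for illustration.
        return ReviewOutcome.NOTIFY_TRUSTED_CONTACT

    def send_alert(user_id: str) -> None:
        # The alert carries only a check-in prompt, never conversation details.
        print(f"Alert queued for the trusted contact of user {user_id}")

    def handle_message(msg: FlaggedMessage) -> None:
        if not automated_screen(msg):
            return  # below threshold: no escalation
        if human_review(msg) is ReviewOutcome.NOTIFY_TRUSTED_CONTACT:
            send_alert(msg.user_id)

    handle_message(FlaggedMessage(user_id="u123", risk_score=0.9))

The point of the two stages is the one the company emphasizes: the automated classifier only decides what gets escalated, while the decision to notify a trusted contact rests with human reviewers.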
