
OpenAI Launches Safety Routing and Parental Controls for ChatGPT

OpenAI has rolled out new safety features for ChatGPT, including a testing phase of a safety routing system and full parental controls for teen accounts, in response to growing concerns over AI’s role in mental health crises. The changes follow a high-profile wrongful death lawsuit tied to the suicide of 16-year-old Adam Raine, who had spent months confiding in ChatGPT. His father, Matthew Raine, testified before a U.S. Senate panel, accusing OpenAI of enabling harm by prioritizing rapid deployment over safety, particularly with its agreeable and overly accommodating chat models like GPT-4o.

The new safety routing system, tested over the weekend and officially introduced on Monday, automatically detects emotionally sensitive or high-risk conversations and switches the active model mid-chat to GPT-5-thinking, a version trained with a feature called “safe completions.” Unlike earlier models that often validated harmful or delusional statements, GPT-5-thinking is designed to respond safely and responsibly, even when users express distress or dangerous thoughts. The switch happens on a per-message basis and is temporary, and users are informed of which model is active. OpenAI’s Nick Turley described this as part of a broader effort to strengthen safeguards through real-world testing, with a 120-day window for iteration and improvement.

Alongside the routing system, OpenAI launched parental controls for all web users, with mobile support coming soon. Parents must create their own accounts and link them to their teen’s account, which requires the teen’s consent. Teens can disconnect at any time, with parents notified. Importantly, parents do not gain access to their teen’s conversations. Instead, OpenAI’s system uses AI and human reviewers to detect potential signs of serious harm, such as suicidal ideation, and may alert parents via email, text, or push notification if a serious risk is identified.
In cases of imminent danger, OpenAI says it may contact emergency services or law enforcement if parents cannot be reached. The parental controls allow parents to customize their teen’s experience by enabling quiet hours, turning off voice mode, disabling image generation, and opting out of model training on their teen’s data. Additionally, sensitive content, including graphic material, romantic or violent roleplay, and extreme beauty ideals, is automatically reduced or filtered out by default. OpenAI emphasized that while the system isn’t perfect and may occasionally trigger false alarms, it is better to err on the side of caution.

The rollout has drawn mixed reactions. Many experts and users applaud the move as a necessary step toward protecting minors. However, some critics argue the system treats adults like children, potentially undermining the value of the service through over-censoring. Others note that OpenAI initially explored a “one-click emergency contact” feature but appears to have abandoned it in favor of automated alerts.

The changes come after OpenAI faced intense scrutiny for its earlier philosophy of rapid deployment and feedback collection, even at high stakes. CEO Sam Altman acknowledged the need to balance safety, privacy, and freedom, and the company has since been working on age-prediction tools to better identify underage users. Ultimately, the new safety measures mark a significant shift in OpenAI’s approach, responding directly to real-world tragedies while navigating the complex ethical and technical challenges of AI safety. The company has made clear that this is not a final solution but a step in an ongoing effort to improve.