OpenAI to Implement Age Verification for ChatGPT Users Under 18
OpenAI has introduced sweeping new policies to protect underage users of ChatGPT, announcing a major shift in how the AI chatbot interacts with users under 18. CEO Sam Altman stated that safety takes precedence over privacy and freedom for minors, calling the technology “new and powerful” and emphasizing that teens “need significant protection.”

The changes include a new age-detection system that uses behavioral patterns to estimate a user’s age. When OpenAI suspects a user is under 18—or cannot determine age with certainty—the system will automatically route them to a more restricted, age-appropriate version of ChatGPT. This version blocks graphic sexual content, avoids flirtatious or sexually suggestive responses, and is designed to prevent harmful interactions. If an underage user expresses suicidal thoughts or self-harm intentions, the system will attempt to contact their parents. In cases of “imminent harm,” OpenAI may involve law enforcement. The company acknowledges that this could compromise privacy but says it is a necessary tradeoff for safety.

To help parents monitor usage, OpenAI will roll out parental controls by the end of the month. These will allow parents to link their teen’s account to their own, enabling them to set “blackout hours” when ChatGPT is inaccessible and to receive alerts if their child is in distress. Adult users who are mistakenly placed in the teen experience will be required to verify their age with ID to access the full version of ChatGPT.

The announcement comes amid growing scrutiny over AI’s impact on youth mental health. OpenAI is currently facing a wrongful death lawsuit from the parents of 16-year-old Adam Raine, who died by suicide after months of conversations with ChatGPT in which he discussed self-harm and suicide plans—none of which triggered a warning or intervention.
The case has drawn national attention, and Raine’s father is set to testify at a Senate Judiciary Committee hearing titled “Examining the Harm of AI Chatbots,” led by Senator Josh Hawley. The hearing follows a Reuters investigation that revealed internal documents suggesting some AI companies encouraged flirtatious or sexual interactions with underage users. In response, Meta updated its policies.

OpenAI’s new approach reflects a broader trend toward age verification online, spurred by a Supreme Court ruling upholding a Texas law requiring porn sites to verify user ages, and by the UK’s Age-Appropriate Design Code. While OpenAI says it is committed to giving adult users broad freedom—such as allowing adults to request flirtatious content or help writing fictional suicide scenes—it stresses that minors deserve stricter safeguards. “Treat our adult users like adults,” Altman said, “extending freedom as far as possible without causing harm or undermining anyone else’s freedom.”

The company acknowledges that its age-detection system is not perfect and may misclassify users; in ambiguous cases, it will default to the more restrictive teen experience. OpenAI also noted that its current safeguards can fall short, and it is actively working to improve them.

The move signals a pivotal moment in AI ethics, balancing innovation with responsibility. As AI becomes more integrated into daily life, especially among young people, companies face mounting pressure to protect vulnerable users—even as they navigate complex tradeoffs between privacy, safety, and freedom.