ChatGPT May Require ID Verification for Adults Amid Teen Suicide Lawsuit Concerns
ChatGPT may soon require adults to verify their age using government-issued ID, according to OpenAI CEO Sam Altman. The move comes as the company responds to growing scrutiny over its platform's impact on minors, particularly following a high-profile lawsuit tied to a teen suicide.

Altman announced that when ChatGPT cannot confirm a user's age, the system will automatically default to the under-18 experience. This version will include stricter content filters, limited access to certain features, and reduced interaction capabilities to better protect younger users.

The decision follows a lawsuit filed by the family of a teenager who died by suicide, alleging that ChatGPT contributed to the teen's mental health decline. The case has drawn widespread attention and intensified calls for stronger safeguards around AI tools used by children and adolescents.

OpenAI has not yet revealed an exact timeline for implementing ID verification, but the company says it is actively developing secure, privacy-preserving methods to authenticate user age without storing sensitive personal data. The goal is to balance safety with user privacy and compliance with regulations like the Children's Online Privacy Protection Act (COPPA).

The under-18 default setting is expected to roll out globally, with adjustments based on regional laws. While the feature will limit functionality for younger users, OpenAI emphasizes that it is part of a broader effort to make AI safer and more responsible. The company has also been working with researchers, educators, and policymakers to establish best practices for AI use among minors. Altman acknowledged that the platform's rapid growth has outpaced the development of robust age verification systems, and that the new measures are a necessary step forward.
As AI tools become increasingly integrated into education, mental health support, and daily life, the challenge of protecting vulnerable users while maintaining accessibility remains a central issue. OpenAI’s approach reflects a growing trend among tech companies to prioritize safety in the face of legal, ethical, and public pressure.
