Meta Unveils New AI Parental Controls to Enhance Teen Safety
Meta has announced new parental control features designed to help parents monitor and manage their teens’ interactions with AI characters on its platforms, starting with Instagram early next year. The updates, rolling out in English across the U.S., U.K., Canada, and Australia, aim to give parents greater oversight while maintaining age-appropriate AI experiences for teens.

Under the new system, parents will be able to disable all chats with AI characters for their teens, though access to Meta AI—the company’s general-purpose chatbot—will remain available, as it is already restricted to age-appropriate content. Parents can also choose to block individual AI characters selectively, offering more granular control. Additionally, parents will receive insights into the topics their teens are discussing with AI, helping them stay informed about their children’s digital interactions.

These tools are part of Meta’s broader effort to address growing concerns about teen safety on social media and the potential risks of AI. The company emphasized that AI is not meant to replace human connection or critical thinking, but rather to support learning and exploration in areas like coding, graphic design, and schoolwork. The goal is to make AI a helpful tool within safe boundaries.

Meta has already implemented several safeguards for teens. AI interactions are guided by a PG-13 movie rating standard, meaning they avoid extreme violence, nudity, graphic drug use, and other mature themes. AI characters are programmed not to engage in conversations about self-harm, suicide, or disordered eating, and will instead direct teens to support resources when needed. Only a curated selection of age-appropriate AI characters—focused on education, sports, hobbies, and personal growth—is available to teens, with no access to romance or adult-oriented topics. Parents can also set daily time limits for AI interactions, as low as 15 minutes per day, which count toward overall app usage limits.
To prevent teens from bypassing age restrictions, Meta is using AI to detect and flag accounts that may belong to minors, even if the users claim to be adults.

The announcement comes amid increasing scrutiny of tech companies over their role in teen mental health, with multiple lawsuits alleging that social media and AI platforms have contributed to youth anxiety, depression, and even suicides. In response, Meta and other major platforms—including OpenAI and YouTube—have recently introduced new safety tools for teens.

Meta’s leadership, including Instagram head Adam Mosseri and Meta AI head Alexandr Wang, stressed that while technology can’t replace real-life relationships, it can be a valuable supplement when used responsibly. They acknowledged the challenges parents face in guiding teens through the digital world and said these new tools are meant to simplify that process.

The company is rolling out the changes carefully, with a focus on ensuring they work effectively across its global platforms. While the initial rollout is limited to English-speaking regions, Meta plans to expand the features as part of its ongoing commitment to teen safety and responsible AI development.

This move reflects a broader industry trend: as AI becomes more embedded in social media, companies are under pressure to balance innovation with accountability. Meta’s new parental controls represent a significant step toward giving families more control while preserving the benefits of AI for learning and creativity—within clear, enforceable boundaries.