Google Gemini Rated 'High Risk' for Kids Amid Safety Concerns Over Inappropriate Content and Developmental Mismatch
Common Sense Media has rated Google’s Gemini AI products as “High Risk” for children and teens in its latest safety assessment, citing serious concerns about the platform’s suitability for younger users. While the organization acknowledged that Gemini clearly identifies itself as an AI, not a human, it found that the product still poses significant dangers due to its design and content delivery.

The assessment revealed that both the “Under 13” and “Teen Experience” versions of Gemini are essentially adult-focused AI models with only minimal safety additions layered on top. Common Sense argues that AI tools for young people should be built from the ground up with child development in mind, not simply repurposed from adult versions. The lack of age-appropriate design, guidance, and content filtering raises red flags.

One major concern is that Gemini can still generate or share inappropriate material with children, including content related to sex, drugs, alcohol, and harmful mental health advice. This is particularly troubling given growing evidence that AI interactions may contribute to mental health crises among teens. OpenAI is currently facing its first wrongful death lawsuit after a 16-year-old boy died by suicide following months of conversations with ChatGPT, during which he reportedly bypassed safety measures. Character.AI also faced a similar lawsuit over a teen’s death.

The timing of the report is notable, as leaks suggest Apple may choose Gemini as the large language model behind its next-generation, AI-powered Siri, expected to launch next year. If true, this could expose millions of teens to the same risks unless Apple implements robust safety measures.

Common Sense also criticized Gemini for failing to account for developmental differences between younger and older children.
The report found that the same content and interaction patterns were applied across age groups, even though younger users require more structured, protective, and age-specific guidance. Robbie Torney, Senior Director of AI Programs at Common Sense Media, said, “Gemini gets some basics right, but it stumbles on the details. An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults.”

Google responded by defending its safety protocols, stating that it has specific policies and safeguards in place for users under 18, including red-teaming and consultations with external experts. The company acknowledged that some responses were not working as intended and said it has since added additional safeguards. It also pointed out that Gemini is designed to avoid creating the illusion of a personal relationship, a feature Common Sense had noted. However, Google questioned whether the assessment used data from features not available to under-18 users, noting that it lacked access to the exact prompts used in the tests.

Common Sense Media has previously evaluated other AI platforms, with Meta AI and Character.AI receiving “Unacceptable” ratings due to severe risks. Perplexity was labeled “High Risk,” ChatGPT “Moderate,” and Claude—intended for users 18 and older—“Minimal Risk.”