
Google Gemini Rated High Risk for Children and Teens

7 days ago

Google’s Gemini AI has been rated “high risk” for children and teenagers in a new safety assessment by Common Sense Media, a nonprofit that evaluates media and technology for youth safety. The organization acknowledged that Gemini clearly identifies itself as an AI rather than a human companion, a key safeguard against fostering delusional thinking in emotionally vulnerable users. Even so, it found significant shortcomings in how the platform serves younger audiences.

Both the “Under 13” and “Teen Experience” versions of Gemini were judged to be essentially adult-focused models with only superficial safety additions, rather than products designed for young users from the ground up. This approach, Common Sense argues, fails to account for developmental differences between younger and older users. The report notes that Gemini can still generate inappropriate content, including material related to sex, drugs, alcohol, and unsafe mental health advice, which may be overwhelming or damaging for children.

These risks are particularly concerning in light of recent high-profile cases linking AI interactions to teen suicides. OpenAI is facing its first wrongful death lawsuit after a 16-year-old boy reportedly spent months discussing suicidal ideation with ChatGPT while bypassing its safety filters. Character.AI has likewise been sued over a teen’s suicide, underscoring growing scrutiny of AI’s role in youth mental health crises.

Adding urgency, reports suggest Apple may adopt Gemini as the underlying large language model for its next-generation, AI-powered Siri, expected in 2025. If implemented without robust child-specific safeguards, this could expose millions of teens to unmitigated risks.

Common Sense emphasized that effective AI for kids must be tailored to their cognitive and emotional development, not simply repackaged from an adult product. “Gemini gets some basics right, but it stumbles on the details,” said Robbie Torney, Senior Director of AI Programs at Common Sense Media. “An AI platform for kids should meet them where they are, not take a one-size-fits-all approach.”

Google defended its safety measures, citing age-specific policies, red-teaming exercises, and consultations with outside experts. It acknowledged that some responses were not performing as intended and said it has since added further safeguards. Google also suggested that Common Sense’s assessment may have tested features unavailable to under-18 users and questioned the prompts used, though Common Sense did not share the actual queries with the company.

The assessment is part of a broader series by Common Sense Media evaluating major AI platforms. Previous evaluations found Meta AI and Character.AI “unacceptable” due to severe risks, labeled Perplexity “high risk,” rated ChatGPT “moderate,” and deemed Claude (intended for adults) “minimal risk.” The findings underscore a growing consensus: AI designed for children must prioritize developmental appropriateness, not just bolt-on technical safety features. As AI becomes embedded in everyday tools like search and virtual assistants, pressure is mounting on tech companies to build child-centric safeguards from the start.
