
California Enacts First-in-Nation Law to Regulate AI Companion Chatbots, Mandating Safety Protections for Minors


California has become the first state in the U.S. to enact comprehensive regulation of AI companion chatbots. Governor Gavin Newsom has signed SB 243 into law, requiring companies that operate such chatbots, from major tech firms like Meta and OpenAI to specialized startups such as Character AI and Replika, to implement safeguards protecting children and other vulnerable users.

The bill was introduced in January by state senators Steve Padilla and Josh Becker and gained urgency following high-profile tragedies, including the suicide of teenager Adam Raine, who reportedly had prolonged conversations with OpenAI’s ChatGPT about self-harm. It also responds to leaked internal documents revealing that Meta’s chatbots were permitted to engage in romantic and sensual interactions with minors. More recently, a Colorado family filed a lawsuit against Character AI after their 13-year-old daughter died by suicide following disturbing, sexually charged exchanges with the platform’s AI.

In his statement, Newsom emphasized the need for accountability in emerging technology. “Emerging technology like chatbots and social media can inspire, educate, and connect — but without real guardrails, technology can also exploit, mislead, and endanger our kids,” he said. “We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability.”

SB 243 takes effect on January 1, 2026. Key provisions include mandatory age verification, clear disclosures that interactions are AI-generated, and warnings about the risks of social media and AI companions. Companies must also establish protocols for detecting and responding to suicidal ideation and self-harm, and share data on crisis-intervention efforts with the state’s Department of Public Health. Chatbots may not impersonate healthcare professionals, and platforms must provide break reminders for minors and block access to sexually explicit AI-generated content.

Some companies have already taken steps to improve safety. OpenAI has rolled out parental controls, content filters, and self-harm detection for younger users. Character AI now displays a notice that all conversations are fictional and AI-driven.

Senator Padilla described the law as a critical step toward responsible innovation. “We have to move quickly to not miss windows of opportunity before they disappear,” he said. “I hope other states will see the risk. I think many do. I think this is a conversation happening all over the country, and I hope people will take action. Certainly the federal government has not, and I think we have an obligation here to protect the most vulnerable people among us.”

This marks California’s second major AI regulation in recent weeks. On September 29, Newsom signed SB 53, which requires large AI companies like OpenAI, Anthropic, Meta, and Google DeepMind to disclose safety practices and protect whistleblowers. Other states, including Illinois, Nevada, and Utah, have passed laws restricting or banning AI chatbots from serving as substitutes for licensed mental health care.

TechCrunch has reached out to Character AI, Meta, OpenAI, and Replika for comment.