
California Bill to Regulate AI Companion Chatbots Nears Enactment

California has taken a significant step toward regulating artificial intelligence with the passage of SB 243, a bill aimed at protecting minors and vulnerable users from risks posed by AI companion chatbots. The legislation, which passed both the State Assembly and Senate with bipartisan support, now heads to Governor Gavin Newsom, who has until October 12 to sign it into law. If enacted, it will take effect on January 1, 2026, making California the first state to impose legal requirements on AI chatbot operators.

SB 243 targets AI systems designed to simulate human-like, emotionally responsive interactions—such as those offered by Character.AI, Replika, and OpenAI’s models—that can engage users on sensitive topics like suicide, self-harm, and sexual content. The bill mandates that platforms issue recurring alerts to minors every three hours, clearly stating that they are communicating with an AI rather than a real person and encouraging them to take breaks. It also requires annual transparency reports starting July 1, 2027, including data on user interactions with crisis-related content and referrals to mental health resources.

The legislation was spurred by high-profile tragedies, including the suicide of teenager Adam Raine, whose final conversations with OpenAI’s ChatGPT involved detailed discussions of self-harm and death. Leaked internal documents also revealed that Meta’s chatbots were permitted to engage in romantic and sensual conversations with children, raising alarm about the potential for emotional manipulation and psychological harm.

Under SB 243, individuals harmed by violations can file lawsuits seeking injunctive relief, damages of up to $1,000 per violation, and attorney’s fees. This creates a legal accountability mechanism for companies that fail to meet safety standards.
The bill originally included stricter provisions, such as banning “variable reward” features—like unlockable messages, storylines, and rare responses—that critics argue are designed to create addictive engagement loops. However, these were removed in amendments to balance effectiveness with feasibility, with lawmakers acknowledging that some requirements might be technically unworkable or overly burdensome.

Senator Steve Padilla emphasized the need for swift action, stating that the potential harm from unregulated AI companions is too great to ignore. He also advocated for mandatory reporting of how often AI systems refer users to mental health services, to better understand and prevent harm before it occurs.

The bill comes amid growing national scrutiny of AI’s impact on youth mental health. The Federal Trade Commission is preparing to investigate AI chatbots’ effects on children. Texas Attorney General Ken Paxton has launched probes into Meta and Character.AI, while Senators Josh Hawley and Ed Markey have initiated their own investigations into Meta’s practices.

Meanwhile, tech companies are pushing back. OpenAI, Meta, Google, and Amazon oppose a separate California bill, SB 53, which would require broader transparency reporting; only Anthropic has publicly supported it. OpenAI has urged Newsom to reject SB 53, advocating instead for federal or international standards.

Despite industry resistance, California lawmakers argue that innovation and safety are not mutually exclusive. “We can support innovation and development that we think is healthy and has benefits… and at the same time, provide reasonable safeguards for the most vulnerable people,” said Padilla.

As Silicon Valley invests heavily in pro-AI political action committees ahead of the midterms, SB 243 represents a pivotal moment in shaping how AI is governed. If signed, it could set a precedent for state-level AI regulation across the U.S., balancing technological advancement with ethical responsibility.