California Enacts Landmark AI Safety Bill SB 53
California Governor Gavin Newsom has signed SB 53 into law, making California the first state in the nation to impose transparency and safety requirements on large artificial intelligence companies. The bill, authored by Senator Scott Wiener, targets major AI labs including OpenAI, Anthropic, Meta, and Google DeepMind, mandating that they disclose their safety and security protocols, protect whistleblowers, and report critical incidents to California's Office of Emergency Services.

SB 53 requires companies developing frontier AI models, defined by their high training costs, to publicly publish frameworks detailing how they incorporate national, international, and industry-standard best practices into their safety processes. Any updates to these protocols must be shared with the public within 30 days, along with an explanation of the changes. The law also establishes a reporting mechanism for both companies and the public to flag serious safety risks, such as AI-driven cyberattacks or deceptive behavior, which are not covered under the EU AI Act.

While the bill aims to build public trust and ensure accountability, it has drawn mixed reactions from the tech industry. Anthropic publicly endorsed the legislation after negotiations helped refine its language. In contrast, Meta and OpenAI opposed it, with OpenAI going so far as to issue an open letter urging Newsom not to sign it. OpenAI argued that state-level regulation could create a fragmented, inconsistent patchwork that hampers innovation and urged California to align with federal or international frameworks instead.

The bill's passage follows the veto of a more stringent version, SB 1047, last year amid industry pushback. In response, Newsom commissioned a 52-page report from AI researchers, which informed the revised SB 53. Some recommendations from that report, such as whistleblower protections and public disclosure of safety measures, were incorporated. However, the final version does not require third-party audits, a key demand from critics and advocates of stronger oversight.

Despite the controversy, the law is seen as a significant step in AI governance. It empowers the California Attorney General to enforce compliance with civil penalties and mandates annual updates to the law based on technological advances and stakeholder input. The state's Department of Technology will lead this review process.

The legislation comes amid a broader political effort to shape AI policy. Tech leaders at OpenAI and Meta have launched pro-AI super PACs to support candidates and legislation favoring light-touch regulation. These groups argue that overly strict rules could drive AI innovation out of California, a hub for tech development. Meanwhile, other states are watching California's lead: New York has passed a similar bill awaiting Governor Kathy Hochul's decision, signaling a growing trend toward state-level AI oversight.

Governor Newsom praised the law as a balanced approach that protects public safety while fostering innovation. “California is not only here for it — but stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation,” he said.

SB 53 is now part of a larger regulatory push in California. The state is also considering SB 243, which would regulate AI companion chatbots by requiring safety protocols and holding developers accountable if their systems fail to meet them. As AI evolves rapidly, California's new law sets a precedent for how governments can promote transparency and accountability without stifling progress.
While not without flaws, SB 53 marks a pivotal moment in the national conversation on AI governance, potentially influencing future legislation across the U.S. and beyond.
