
Anthropic Backs California’s SB 53 AI Safety Bill, Pushing for Frontier AI Accountability Amid Industry Pushback

Anthropic has officially endorsed California's SB 53, a landmark AI safety bill introduced by state Senator Scott Wiener that would impose first-of-its-kind transparency and safety requirements on the developers of the world's most powerful AI models. The endorsement marks a significant moment for the legislation, especially as major tech industry groups like the Consumer Technology Association and the Chamber of Progress continue to oppose it.

In a blog post, Anthropic acknowledged that while it supports federal oversight of frontier AI safety, the urgency of the technology's development makes thoughtful state-level action necessary. "While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won't wait for consensus in Washington," the company wrote. "The question isn't whether we need AI governance—it's whether we'll develop it thoughtfully today or reactively tomorrow. SB 53 offers a solid path toward the former."

If passed, SB 53 would require companies developing frontier AI models—such as OpenAI, Google, Anthropic, and xAI—to establish formal safety frameworks and publish detailed safety and security reports before deploying models capable of posing catastrophic risks. The bill defines such risks as incidents resulting in at least 50 deaths or over $1 billion in damages. It specifically targets the most extreme dangers, including AI-assisted development of biological weapons or large-scale cyberattacks, rather than more immediate but less severe issues like deepfakes or AI-generated misinformation. The legislation also includes strong whistleblower protections for employees who report safety concerns, a key element in ensuring accountability.

California's Senate previously passed a version of the bill, but it still requires a final vote before moving to Governor Gavin Newsom's desk.
Newsom has not publicly taken a stance, though he previously vetoed a similar AI safety bill, SB 1047.

The bill faces strong opposition from Silicon Valley and the Trump administration, both of which argue that state-level AI regulation could stifle innovation and harm America's global competitiveness, especially in the race with China. Critics, including Andreessen Horowitz's Matt Perault and Jai Ramaswamy, have claimed that such laws may violate the Constitution's Commerce Clause by overstepping state authority. However, Anthropic co-founder Jack Clark argues that waiting for federal action is no longer viable. "We have long said we would prefer a federal standard," Clark said on X. "But in the absence of that, this creates a solid blueprint for AI governance that cannot be ignored."

OpenAI's Chief Global Affairs Officer Chris Lehane previously urged Newsom not to sign any AI regulation that could drive startups out of California, though his letter did not name SB 53. Miles Brundage, OpenAI's former Head of Policy Research, dismissed Lehane's concerns as "filled with misleading garbage," noting that SB 53 targets only the largest AI companies—those with over $500 million in gross revenue.

Policy experts say SB 53 is more balanced than earlier proposals. Dean Ball, a senior fellow at the Foundation for American Innovation and former White House AI advisor, praised the bill's drafters for incorporating technical expertise and showing legislative restraint. He believes SB 53 has a real chance of becoming law. Notably, the bill was shaped by an expert panel convened by Governor Newsom, co-led by Stanford researcher and World Labs co-founder Fei-Fei Li.

While many AI labs already publish safety reports, these efforts are voluntary and inconsistently followed. SB 53 would make such transparency mandatory, with potential financial penalties for noncompliance.
In September, lawmakers removed a controversial provision requiring third-party audits after pushback from tech companies, which argued such audits would be overly burdensome. The revised bill now focuses on internal safety standards and public reporting, making it more palatable to industry while still enforcing accountability.