California Senator Scott Wiener Pushes for AI Safety with SB 53, Seeking Transparency and Accountability from Big Tech

Senator Scott Wiener has been at the forefront of California’s efforts to regulate AI safety, and his latest bill, SB 53, marks a significant shift from his earlier, more confrontational approach. Unlike SB 1047, which sought to make tech companies legally liable for harms caused by their AI systems and was ultimately vetoed by Governor Gavin Newsom, SB 53 focuses on transparency and accountability through mandatory safety reporting.

SB 53 would require large AI companies—those generating more than $500 million in revenue—to publish detailed safety assessments for their most advanced models. The bill targets the most severe risks, including AI’s potential to enable mass casualties, cyberattacks, and the creation of chemical or biological weapons. While many AI labs currently issue voluntary safety reports, they vary widely in quality and scope. SB 53 standardizes this process, ensuring consistent public disclosure.

The bill also creates protected channels for AI employees to report safety concerns directly to state officials, addressing a growing worry about whistleblowers being silenced within tech companies. Additionally, it establishes CalCompute, a state-run cloud computing cluster designed to provide AI research resources to academic institutions and smaller labs, reducing dependence on major tech platforms.

Support for SB 53 has been notable. Anthropic publicly endorsed it, and Meta has said the bill represents a step toward balanced regulation, even if it has concerns about certain provisions. Former White House AI advisor Dean Ball called it a “victory for reasonable voices” and believes Newsom is likely to sign it. Wiener attributes the bill’s broader acceptance to its more measured approach: SB 1047 was seen as punitive, while SB 53 emphasizes transparency over liability. It also targets only the largest AI firms, sparing startups from undue burden.

Still, opposition persists. OpenAI has argued that AI regulation should be federal, not state-based, while venture firm Andreessen Horowitz has suggested some California bills could violate the dormant Commerce Clause. Wiener dismisses these concerns, citing a lack of meaningful federal action on AI safety. He criticizes the Trump administration’s pro-growth stance, which has downplayed safety in favor of rapid innovation, and views the current political climate as one where states must act when federal leadership fails.

For Wiener, the goal is not to stifle innovation but to ensure it happens safely. He emphasizes that he’s not anti-tech—his district includes the heart of Silicon Valley—but that powerful companies cannot be trusted to self-regulate, especially when their profits are tied to unchecked advancement. He sees SB 53 as a response to direct feedback from AI researchers and technologists who fear the consequences of uncontrolled development.

His message to Governor Newsom is clear: the bill reflects lessons from the past, incorporates the governor’s own recommendations, and offers a pragmatic path forward. He hopes Newsom will sign it, not just as a policy win, but as a signal that California remains a leader in shaping the future of AI.