California Lawmaker Reintroduces Bill Mandating AI Safety Reports and Whistleblower Protections
California State Senator Scott Wiener has renewed his push for AI safety regulation with new amendments to his latest bill, SB 53. The updated legislation would require the world's largest AI companies, including OpenAI, Google, Anthropic, and xAI, to publish safety and security protocols and to issue reports when safety incidents occur. If enacted, California would become the first state to impose meaningful transparency requirements on leading AI developers.

Wiener's previous bill, SB 1047, drew fierce opposition from Silicon Valley and was ultimately vetoed by Governor Gavin Newsom. Following the veto, Newsom convened a group of AI experts, including Stanford's Fei-Fei Li, to form a policy group and recommend an approach to AI safety. The group's final report emphasized industry transparency as essential to building a robust evidence environment. Wiener's office says the new amendments to SB 53 were heavily informed by that report.

Under SB 53, AI companies would be required to disclose their safety measures and protocols, keeping the public and government informed about the steps taken to mitigate risks. The bill also includes whistleblower protections for employees who believe their company's AI technology poses a "critical risk" to society, defined as causing harm to more than 100 people or more than $1 billion in damage. Additionally, the legislation would create CalCompute, a public cloud computing resource to support startups and researchers working on large-scale AI projects.

Unlike SB 1047, which would have made AI model developers liable for harms caused by their technologies, SB 53 avoids liability provisions, focusing instead on transparency and accountability. The bill is also crafted to avoid placing undue burdens on smaller startups and researchers who fine-tune or build on open-source AI models.

SB 53 now heads to the California State Assembly Committee on Privacy and Consumer Protection. If approved there, it must pass through several more legislative bodies before reaching Governor Newsom's desk. The bill's fate is being closely watched, especially as New York considers similar legislation, the RAISE Act. These state-level efforts follow a failed attempt by federal lawmakers to impose a 10-year moratorium on state AI regulation, which the Senate rejected in a 99-1 vote in July.

Industry reaction to SB 53 has been mixed. Nathan Calvin, VP of State Affairs at the AI safety group Encode, supports the bill, calling it a "bare minimum, reasonable step" toward ensuring companies address the risks associated with AI. Geoff Ralston, former president of Y Combinator, echoed that sentiment, arguing that states must step up in the absence of federal action.

Major AI companies, however, have been cautious about or resistant to state-mandated transparency. Google, for instance, did not publish a safety report for its advanced Gemini 2.5 Pro model when it was released, and OpenAI declined to publish one for GPT-4.1. A later third-party study found potential misalignments in GPT-4.1, underscoring the need for consistent, thorough safety reporting.

Despite this resistance, SB 53 takes a more measured approach than earlier bills and could set a precedent for stricter AI oversight. Leading AI model developers do generally publish safety reports, but their inconsistency in recent months strengthens the case for legislation that standardizes the practice.
As Senator Wiener navigates the legislative process, the AI industry will be watching closely to gauge the impact of the proposed transparency requirements. His efforts are widely seen as a measured response to growing concerns about AI safety and ethical development, aiming to foster an AI ecosystem in which transparency and accountability are integral, while still leaving room for innovation. If successful, SB 53 could influence similar regulation at both the state and federal levels, shaping the future landscape of AI governance in the United States.