Senator Scott Wiener Defends His Push to Force AI Risk Transparency on the Biggest Companies
Senator Scott Wiener of California is once again at the forefront of the national debate on AI safety, this time with SB 53 — a bill that seeks to mandate transparency from the largest AI companies rather than impose liability. Unlike his 2024 effort, SB 1047, which was vetoed by Governor Gavin Newsom amid fierce opposition from Big Tech, SB 53 has gained surprising support from major players like Anthropic and Meta, signaling a shift in industry sentiment.

SB 53 targets AI labs generating over $500 million in revenue, requiring them to publish detailed safety reports for their most advanced models. These reports must assess the most catastrophic potential harms, such as AI-facilitated bioweapons, cyberattacks, and threats to human life. While some AI labs already issue voluntary safety reports, SB 53 would standardize and enforce the practice, ensuring consistency and accountability.

The bill also creates protected channels for AI employees to report safety concerns directly to state authorities, a crucial safeguard against internal suppression of risk warnings. Additionally, it establishes CalCompute, a state-run cloud computing cluster designed to democratize access to AI research resources and reduce reliance on corporate-controlled infrastructure.

Wiener attributes the bill's broader acceptance to its more measured approach. Unlike SB 1047, which introduced strict liability for AI harms, SB 53 focuses on transparency and reporting — a compromise aimed at balancing innovation with public safety. It also applies only to the largest firms, sparing startups from regulatory burden.

Despite this progress, resistance remains. OpenAI argues that AI regulation should be federal rather than state-level, while venture firm Andreessen Horowitz has raised constitutional concerns about state laws affecting interstate commerce.
Wiener dismisses these objections, citing a lack of federal action and accusing the Trump administration of prioritizing tech industry growth over safety — a shift he sees as a reward to corporate donors. He emphasizes that California must lead, especially as federal efforts have stalled. His vision is not to stifle innovation but to ensure it is safe and accountable. Drawing on conversations with AI engineers and startup founders in San Francisco, he stresses that even well-intentioned developers must be held to rigorous standards when building systems that carry existential risks.

As Newsom weighs a final decision, Wiener's message is clear: the bill reflects the lessons of SB 1047 and incorporates the governor's own recommendations. He calls on Newsom to sign SB 53, framing it not as a confrontation with tech, but as a responsible step toward a future where powerful AI serves humanity — not the other way around.
