Under a New Law, California Could Become an Effective Watchdog for Major AI Companies
California’s Senate has advanced SB 53, a new AI safety bill that could mark a significant step in holding major artificial intelligence companies accountable. The legislation, authored by Senator Scott Wiener, now heads to Governor Gavin Newsom for a final decision; he previously vetoed a broader version, SB 1047, last year. Unlike its predecessor, SB 53 is narrowly focused on large AI developers with annual revenue exceeding $500 million, aiming to avoid stifling smaller startups while targeting industry giants like OpenAI and Google DeepMind.

The bill introduces several key requirements: AI companies must publish detailed safety reports for their models, disclose any significant incidents involving their systems, and establish a protected channel for employees to report safety concerns to regulators without fear of retaliation, even if they have signed confidentiality agreements. These provisions are designed to increase transparency and create internal accountability, offering a rare regulatory check on companies that have operated with minimal oversight.

Max Zeff, a TechCrunch reporter, argues that the bill’s targeted approach increases its chances of becoming law. By focusing on well-funded AI labs rather than the entire ecosystem, it sidesteps the concerns raised by previous legislation about harming innovation and California’s thriving startup scene. The endorsement of Anthropic, a prominent AI company, further signals that the bill is seen as balanced and pragmatic.

Kirsten Korosec emphasizes California’s strategic importance as the epicenter of AI development. Most major players have significant operations in the state, making it a natural testing ground for regulation. While other states may follow, California’s influence means its laws often set a de facto national standard. Still, critics note the bill includes several exceptions, particularly around startup exemptions and limited reporting obligations for smaller firms. Yet, as Zeff points out, even startups must share some safety data, ensuring a baseline of accountability.

The timing is also critical. With the federal government under a new administration taking a hands-off stance, and potentially blocking state-level AI rules through funding legislation, state action becomes even more vital. SB 53 could emerge as a key battleground in the growing conflict between federal deregulation and state-level efforts to ensure AI safety. If signed, SB 53 would represent one of the first meaningful attempts to impose real-world accountability on the most powerful AI companies, offering a model for how democratic oversight might keep pace with technological disruption.
