New York Passes Landmark AI Safety Bill, Aiming to Prevent Major Disasters Without Stifling Innovation
On Thursday, New York state lawmakers passed the Responsible Artificial Intelligence Systems Examination (RAISE) Act, marking a significant step toward regulating frontier AI models developed by companies such as OpenAI, Google, and Anthropic. The bill aims to prevent AI-driven disasters, defined as incidents involving mass casualties or more than $1 billion in damages, by establishing transparency standards for large-scale AI labs.

The bill's passage is a victory for AI safety advocates, who have pushed for stricter regulation amid rapid technological advancement and an industry focus on innovation over safety. Key figures in the AI community, including Nobel laureate Geoffrey Hinton and AI research pioneer Yoshua Bengio, have endorsed the RAISE Act, arguing that the risks posed by advanced AI, such as uncontrolled behavior or misuse, are serious enough to warrant immediate action. Senator Andrew Gounardes, a co-sponsor of the bill, stressed the urgency, stating, "The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving."

The RAISE Act differs from California's SB 1047, which was criticized for potentially stifling innovation among startups and academic researchers. To avoid similar backlash, Gounardes designed the bill to regulate only the largest AI companies. If enacted, the law would require these companies to publish detailed safety and security reports for their AI models, specifically those trained with more than $100 million in computing resources. It would also require them to report safety incidents, such as concerning AI behavior or the theft of a model. Non-compliance could result in civil penalties of up to $30 million, enforced by New York's attorney general.

Companies covered by the RAISE Act include those based in the U.S., such as OpenAI and Google, as well as Chinese firms such as DeepSeek and Alibaba. The bill's transparency requirements would apply to any AI model made available to New York residents, regardless of where the developer is located. Nathan Calvin, vice president of State Affairs and general counsel at Encode, noted that the RAISE Act addresses earlier criticisms by not requiring a "kill switch" on AI models and by not holding companies that post-train AI models accountable for critical harms.

Despite its targeted approach, the RAISE Act has drawn strong opposition from the tech industry. Anjney Midha, a general partner at Andreessen Horowitz, described it as "yet another stupid, stupid state level AI bill that will only hurt the US at a time when our adversaries are racing ahead." Both Andreessen Horowitz and Y Combinator were vocal critics of California's SB 1047, fearing that the regulatory burden would stifle innovation and drive companies away from the state.

Jack Clark, co-founder of Anthropic, which has previously called for federal transparency standards, expressed concerns about the bill's breadth, particularly its potential impact on smaller companies. When asked about Anthropic's stance, Gounardes maintained that the bill is designed to exempt small firms and to focus on large entities with the capacity to pose significant risks. OpenAI, Google, and Meta have not commented on the RAISE Act, leaving their positions unclear.

Critics argue that stringent regulations could prompt companies to withhold their most advanced AI models from New York, a scenario that has played out in Europe under its tough tech laws. However, Assemblymember Alex Bores, another co-sponsor of the bill, believes the regulatory burden is minimal. He noted that New York has the third-largest state economy in the U.S. by GDP, making it economically unwise for companies to abandon the market.

The RAISE Act now awaits Governor Kathy Hochul's signature. If she signs it into law, the act will set a precedent for AI regulation, balancing the need for innovation with essential safeguards. Industry insiders suggest that the bill's approach could inspire similar legislation nationwide, fostering a more proactive and safety-conscious AI development landscape.

Evaluation and Industry Insights

The RAISE Act is seen as a balanced effort to mitigate the risks of advanced AI while maintaining a supportive environment for innovation. Supporters praise its targeted approach, which focuses on large, high-risk AI labs rather than over-regulating smaller players, and argue it could help build public trust in AI technologies and encourage responsible development practices. Critics from the tech industry, however, continue to voice concerns about economic impacts and regulatory burdens.

Despite the opposition, the bill's passage signals growing awareness among policymakers of the ethical and safety implications of AI. Companies like Anthropic and other major industry players will likely continue to engage in the debate, shaping future regulatory frameworks.