
New York’s AI Safety Bill Weakened After Tech and Universities Back Ad Campaign Against It

New York’s landmark AI safety legislation, the RAISE Act, has been significantly weakened following a coordinated campaign by a coalition of tech companies and academic institutions. The bill, formally the Responsible AI Safety and Education Act, was originally designed to require developers of large AI models (including OpenAI, Anthropic, Meta, Google, and DeepSeek) to submit detailed safety plans and report major incidents to the state attorney general. The version signed into law by Governor Kathy Hochul last week, however, diverges sharply from the one passed by both the Senate and Assembly in June.

The revised bill strips out key safeguards. Chief among them is a provision that would have prohibited releasing frontier AI models posing an “unreasonable risk of critical harm,” defined as the potential for 100 or more deaths, serious injuries, or $1 billion in property or financial damage, especially in cases involving weapons or autonomous criminal behavior. That clause, intended to prevent catastrophic misuse, was eliminated from the final version. The signed law also extends deadlines for incident reporting and reduces penalties.

The campaign against the original bill was led by a coalition known as the AI Alliance, whose members include major tech firms such as Meta, IBM, Intel, Oracle, Snowflake, Uber, AMD, Databricks, and Hugging Face. The group spent between $17,000 and $25,000 on advertising opposing the legislation. The ads, which ran from November 23, framed the bill as a threat to innovation, claiming it would “stifle job growth” and harm New York’s tech ecosystem, which supports around 400,000 high-tech jobs. According to Meta’s Ad Library, the campaign reached over two million people.
Notably, the AI Alliance also counts more than a dozen prominent universities among its members, including New York University, Cornell, Dartmouth, Carnegie Mellon, Northeastern, Louisiana State University, the University of Notre Dame, Penn Engineering, and Yale Engineering. When contacted, most institutions declined to comment; Northeastern had not responded by publication time.

While many of these academic members have no formal partnerships with AI companies, several have established deep ties. Northeastern provides access to Anthropic’s Claude AI for its 50,000 students and staff across 13 campuses. NYU received funding from OpenAI for a journalism ethics initiative in 2023. Carnegie Mellon has ongoing collaborations with OpenAI, including a faculty member serving on its board, and Anthropic has funded programs at the university.

The AI Alliance has a history of opposing AI regulation, including California’s SB 1047 and President Biden’s AI executive order. It claims to promote “collaborative, transparent, and ethical” AI development through working groups and initiatives such as dataset curation and safety prioritization. Critics, however, argue that its influence reflects a broader pattern of industry-aligned groups shaping policy in ways that favor innovation over accountability.

A second group also campaigned against the bill: Leading the Future, a pro-AI super PAC backed by Perplexity AI, Andreessen Horowitz, Joe Lonsdale, and OpenAI’s Greg Brockman, which ran targeted ads against the bill’s key sponsor, Assemblymember Alex Bores. Unlike the AI Alliance, which presents itself as a nonprofit dedicated to ethical AI development despite its powerful industry backing, Leading the Future is explicitly political, with a clear agenda.

The weakened RAISE Act marks a significant shift in New York’s approach to AI governance, raising concerns about the influence of corporate and academic interests in shaping critical safety regulations.
