Universities and Tech Firms Undermine New York's AI Safety Law
New York’s ambitious AI safety legislation, the RAISE Act (Responsible AI Safety and Education Act), has been significantly weakened after a high-stakes lobbying campaign that included a targeted ad blitz by a coalition of tech firms and academic institutions. Originally passed in June by both the New York State Senate and Assembly, the bill would have required companies developing frontier AI models—such as OpenAI, Anthropic, Meta, Google, and DeepSeek—to submit detailed safety plans and report large-scale incidents to the state attorney general.

Crucially, the original version included a strict prohibition on releasing models that posed an “unreasonable risk of critical harm,” defined as potential death or serious injury to 100 or more people, or damage exceeding $1 billion from AI-enabled weapons or criminal acts. This clause was removed from the final version signed by Governor Kathy Hochul, which instead extended reporting deadlines and reduced penalties.

The shift came amid a coordinated effort by the AI Alliance, a nonprofit group comprising over 150 members, including Meta, IBM, Oracle, AMD, Hugging Face, and major universities such as NYU, Cornell, Dartmouth, Carnegie Mellon, and the University of Notre Dame. Between November 23 and the bill’s signing, the group spent an estimated $17,000 to $25,000 on digital ads, according to Meta’s Ad Library, reaching over two million people. The ads, titled “The RAISE Act will stifle job growth,” argued the law would harm New York’s tech ecosystem, which supports 400,000 high-tech jobs. The campaign framed the legislation as a threat to innovation, despite its focus on safety and accountability. The AI Alliance, which claims to promote “open, trustworthy” AI development through collaboration, has previously opposed similar measures, including California’s SB 1047 and President Biden’s AI executive order.
While the group positions itself as a neutral, research-driven forum, its actions—especially the ad campaign and direct lobbying—raise questions about the influence of corporate and academic ties on public policy. Some member universities have direct partnerships with AI companies: Northeastern provides 50,000 students with access to Anthropic’s Claude model, while OpenAI has funded ethics initiatives at NYU and supported research at Carnegie Mellon, where a professor serves on its board. The lack of response from most academic institutions to inquiries about their involvement underscores the opacity of such coalitions.

Meanwhile, another pro-AI super PAC, Leading the Future—backed by OpenAI, Andreessen Horowitz, and Palantir’s Joe Lonsdale—targeted the bill’s main sponsor, Assemblymember Alex Bores, with its own ad spending. Unlike the AI Alliance, this group operates with a clear political agenda.

Industry observers note that the dilution of the RAISE Act reflects a broader trend: as AI regulation gains momentum, powerful players are using financial and institutional influence to shape laws in their favor. While the final version still requires some safety disclosures, the removal of the “unreasonable risk” clause significantly weakens the law’s enforcement power. Experts warn this sets a dangerous precedent, in which the very institutions meant to ensure public safety are part of the effort to avoid it. The episode highlights the growing tension between innovation, accountability, and the real-world impact of AI—especially when the line between research, policy, and profit becomes increasingly blurred.
