
Federal vs State AI Regulation Battle Heats Up as Industry Pushes for Preemption Amid Growing State Action

The race to regulate artificial intelligence has ignited a fierce clash between federal and state governments, centering not on how to govern AI but on who should have the authority to do so. With no comprehensive federal AI framework in place, states have taken the lead, introducing dozens of laws aimed at protecting consumers from emerging risks such as deepfakes, algorithmic bias, and misuse of AI in government and healthcare. California’s SB-53, for example, mandates safety evaluations for powerful AI systems, while Texas passed the Responsible AI Governance Act to prohibit intentional harm from AI deployment.

In response, tech giants and Silicon Valley-backed advocacy groups are pushing hard for federal preemption, which would essentially block states from enacting their own AI laws. They argue that a patchwork of conflicting regulations would stifle innovation and slow the U.S. in its global competition with China. Josh Vlasto, co-founder of the pro-AI PAC Leading the Future, warned that state-level rules could hinder progress: “It’s going to slow us in the race against China.”

This push has gained traction in Congress, where lawmakers are reportedly attempting to insert language into the National Defense Authorization Act (NDAA) that would prohibit states from regulating AI. House Majority Leader Steve Scalise confirmed discussions are underway, though efforts are reportedly narrowing in scope to preserve state authority on issues like child safety and transparency.

Meanwhile, a leaked draft of a White House executive order reveals even stronger intentions to centralize control. The proposed order would establish an AI Litigation Task Force to challenge state AI laws in court, direct federal agencies to assess state regulations as “onerous,” and empower the FTC and FCC to create national standards that override state rules.
Notably, the order would give David Sacks, Trump’s AI and Crypto Czar and a co-founder of Craft Ventures, significant authority in shaping this national framework, bypassing traditional science and technology policy channels. Sacks has long advocated minimal federal oversight, favoring industry self-regulation to maximize growth. His stance aligns with that of major AI firms and their allies in Washington. Leading the Future, backed by OpenAI, Palantir, Andreessen Horowitz, and Perplexity, has raised over $100 million to influence elections and push for preemption. The group recently launched a $10 million campaign urging Congress to pass a national AI policy that supersedes state laws.

Critics argue this effort is less about innovation and more about avoiding accountability. Alex Bores, a New York Assembly member running for Congress and sponsor of the RAISE Act, emphasized that regulation is essential for building trustworthy AI. “The AI that wins in the marketplace will be trustworthy AI,” he said, noting that markets often undervalue safety investments.

States have moved faster than Congress. By November 2025, 38 states had passed over 100 AI-related laws, primarily targeting deepfakes, transparency, and government use of AI. In contrast, Congress has seen hundreds of AI bills introduced with little progress: Rep. Ted Lieu has introduced 67 since 2015, only one of which has become law. Lieu is now drafting a comprehensive 200-page federal AI bill that would address fraud, deepfakes, child safety, and transparency, and would require large AI labs to test and disclose model performance. He aims to introduce it in December, acknowledging that it won’t be perfect but that it is designed to pass with a divided Congress and White House. Unlike similar proposals from Senators Hawley and Blumenthal that call for government-led AI evaluations, Lieu’s bill avoids direct federal oversight, making it more palatable to Republicans.
Opponents of preemption argue that states serve as vital laboratories of democracy, capable of responding quickly to new threats. Nearly 40 state attorneys general and hundreds of state lawmakers have opposed efforts to block state regulation. Cybersecurity expert Bruce Schneier and data scientist Nathan E. Sanders argue that companies already comply with stringent EU rules and can adapt to varying state laws, suggesting that the real motive behind preemption is to avoid legal and ethical responsibility.
