AI regulation efforts ignite a clash between federal and state authorities

Katherine Sydney, MidBreaker writer

WASHINGTON — For the first time, Washington is close to enacting new regulations on artificial intelligence. And the fight that’s coming isn’t really about the technology, but rather about who gets to regulate it.

In the absence of a comprehensive federal AI standard focused on consumer safety, states have proposed dozens of bills to protect their residents from harms associated with AI technologies, such as California’s AI safety bill, SB-53, and Texas’s Responsible AI Governance Act, which makes it illegal to intentionally misuse an AI system.

The tech giants and startups born in Silicon Valley contend that such laws create an unworkable patchwork that stifles innovation.

“It’s going to inhibit us in the race against China,” Josh Vlasto, co-founder of the pro-AI PAC Leading the Future, told MidBreaker.

The industry, including many of its transplants in the White House, is pushing for a nationwide standard or no standard at all. In that all-or-nothing fight, the front line has become the effort to prevent states from passing their own AI laws.

House lawmakers are said to be attempting to use the National Defense Authorization Act (NDAA) to preempt state AI laws. At the same time, a leaked draft of a White House executive order suggests strong backing for preempting state laws regulating AI.

A broad preemption that would strip states of their authority to regulate AI is loathed in Congress, where an overwhelming majority voted down a similar ban earlier this year. Legislators have said that blocking state action without a federal standard in place would leave consumers unprotected and tech companies unregulated.

To establish that national standard, Rep. Ted Lieu (D-CA) and the bipartisan House AI Task Force are drafting a package of federal AI bills addressing consumer protection across areas ranging from fraud detection to healthcare outcomes to transparency to child safety and catastrophic risk. A megabill like this is likely to take months, if not years, to become law, which helps explain why the current whirlwind around preempting states has emerged as one of the most contentious fights in AI policy.

Efforts to stop states from regulating AI have intensified in recent weeks.

The House has also explored whether language could be slipped into the NDAA to bar states from regulating AI, Majority Leader Steve Scalise (R-La.) told Punchbowl News. Congress was said to be racing to complete an agreement on the defense bill just before Thanksgiving, according to Politico. A person familiar with the situation told MidBreaker that negotiations have centered on limiting the scope, potentially sparing state control over elements such as kids’ safety and transparency.

Separately, a leaked White House EO draft shows that the administration may have its own preemption power play in mind. The EO, said to be on pause amid other matters, establishes an “AI Litigation Task Force” to challenge state AI laws in court, directs federal agencies to review state laws deemed “burdensome,” and pushes the Federal Communications Commission and the Federal Trade Commission toward national standards that would preempt state regulatory action.

Significantly, it would give David Sacks, Trump’s AI and crypto czar and co-founder of VC firm Craft Ventures, co-lead authority in establishing a consistent legal framework. That would grant Sacks direct purview over AI policy beyond traditional White House oversight, including the role of the White House Office of Science and Technology Policy (OSTP) and its leader, Michael Kratsios.

Sacks has taken to the op-ed pages to argue publicly against state regulation and for keeping federal oversight “minimal,” leaving companies to self-regulate as much as possible in the name of “growth.”

The Patchwork Argument

Sacks’s position echoes that of large swaths of the AI industry. In recent months, several pro-AI super PACs have formed and spent hundreds of millions of dollars in local and state races to thwart candidates who support AI regulations.

Leading the Future, which is backed by venture capital firm Andreessen Horowitz and OpenAI president Greg Brockman, as well as Perplexity and Palantir co-founder Joe Lonsdale, has now raised in excess of $100 million. This week, Leading the Future announced a $10 million effort to pressure Congress to enact a national AI policy that would supersede state laws.

“When you’re looking to drive innovation in the tech space, you can’t have a scenario where all these very different laws keep getting passed by folks that don’t necessarily have the kind of technical expertise that people working with technology do,” Vlasto told MidBreaker.

He said that a patchwork of state regulations will “slow us in the race against China.”

In an emailed statement, Nathan Leamer, executive director of Build American AI, the PAC’s advocacy arm, confirmed that the group endorses preemption even in the absence of AI-specific federal consumer protections. Existing fraud and product liability laws are enough to govern harms associated with AI, according to Leamer. While state laws generally aim to prevent problems before they occur, Leamer favors a wait-and-see approach: let companies move fast and address issues in court later.

One of Leading the Future’s first targets is Alex Bores, a New York Assembly member running for Congress. He sponsored the RAISE Act, which mandates that large AI labs have safety plans to prevent critical harms.

“I’m a big believer in the power of AI, and that’s why it’s well worth having sensible regulation,” Bores told MidBreaker. “In the end, what we’re going to see be successful in the marketplace is very often trustworthy AI, and the market undervalues or puts bad short-term incentives around investing in safety.”

Bores favors a national AI policy, but he says states can respond more nimbly to new threats.

And states do move faster.

As of November 2025, 38 states have enacted more than 100 AI laws this year, focused primarily on the regulation of deepfakes, transparency and disclosure, and government use of AI. (A recently released study found that 69% of those laws place no requirements on AI developers.)

Congress’s own record reinforces the too-slow argument. Despite the hundreds of AI bills proposed, few have passed. Lieu has introduced 67 bills to the Science Committee since 2015; only one became law.

The NDAA preemption proposal has sparked opposition from more than 200 lawmakers, who signed an open letter arguing that “states serve as laboratories of democracy” and should have the “flexibility to address new digital challenges as they emerge.” A group of nearly 40 state attorneys general also sent an open letter opposing the proposal.

The complaint about a patchwork of regulations is overblown, according to cybersecurity expert Bruce Schneier and data scientist Nathan E. Sanders, authors of Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship.

AI companies, they point out, already comply with stricter EU regulations, and most industries figure out how to do business under differing state laws. The real motivation, they say, is a desire to shirk accountability.

What might a federal standard look like?

Lieu is writing a 200-plus-page megabill he hopes to introduce in December. It spans a number of topics, including fraud penalties, deepfake protections, whistleblower protections, compute resources for academia, and mandatory testing and disclosure for large language model companies.

That last provision would mandate that AI labs test their models and report the results, something most do voluntarily today. Lieu has not yet formally introduced the bill, but he said it doesn’t task any specific federal agency with conducting those AI model reviews. That contrasts with a more ambitious parallel bill from Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT), which would require an independent federal review process to sign off on advanced AI systems before they are deployed.

Lieu acknowledged a significant difference between his bill and the one he would write if given free rein, but said that at least his measure stood a chance of becoming law.

“My objective is to pass something into law this term,” Lieu said, adding that House Majority Leader Scalise has been openly hostile to the regulation of AI. “I’m not drafting a bill that I would if I were king. I’m trying to write a bill that can pass the Republican-controlled House, the Republican-controlled Senate and be signed by a Republican president.”

Katherine Sydney became part of the midbreaker.com team in October 2025, after several years of working as a freelance journalist. A graduate of Syracuse University, she holds degrees in English Literature and Journalism. Outside of her writing work, Katherine enjoys reading, working out, and indulging in her favorite TV shows.