For the first time, Washington is on the verge of deciding how to regulate artificial intelligence. And the fight that is brewing is not about technology, but about regulatory responsibility.
In the absence of a meaningful federal AI standard focused on consumer safety, states have introduced dozens of bills to protect residents from AI-related harm, including California’s SB-53 AI safety bill and Texas’ Responsible AI Governance Act, which prohibits intentional misuse of AI systems.
Tech giants and Silicon Valley startups alike say such laws create an unworkable patchwork that threatens innovation.
“It’s going to slow us down in the race against China,” Josh Vlasto, co-founder of pro-AI PAC Leading the Future, told TechCrunch.
The industry, along with many of its allies in the White House, is pushing for a single national standard, if any standard at all. In the trenches of this all-or-nothing battle, new efforts have emerged to prohibit states from passing their own AI legislation.
Lawmakers in the House of Representatives are reportedly trying to use the National Defense Authorization Act (NDAA) to block state AI laws. At the same time, a leaked draft executive order from the White House also demonstrates strong support for preempting state efforts to regulate AI.
A sweeping preemption measure that would strip states of the right to regulate AI is unpopular in Congress, which voted overwhelmingly against a similar moratorium earlier this year. Lawmakers argued that without a federal standard in place, blocking the states would leave consumers exposed to harm while tech companies operated without oversight.
To create this national standard, Rep. Ted Lieu (D-CA) and the bipartisan House AI Task Force are preparing a set of federal AI bills that cover a range of consumer protections, including fraud, health care, transparency, child safety, and catastrophic risk. It will likely take months, if not years, for a megabill of this type to pass, highlighting why the current rush to limit state authority has become one of the most contentious fights in AI policy.
The Battle Lines: The NDAA and the EO
Efforts to prevent states from regulating AI have intensified in recent weeks.
The House has considered inserting language into the NDAA that would prevent states from regulating AI, Majority Leader Steve Scalise (R-LA) told Punchbowl News. Congress reportedly worked to finalize a deal on the defense bill before Thanksgiving, Politico reported. A source familiar with the negotiations told TechCrunch that talks focused on narrowing the scope, potentially preserving state authority in areas such as child safety and transparency.
Meanwhile, a leaked draft of a White House executive order reveals the administration’s potential preemption strategy. The EO, which was reportedly put on hold, would create an “AI Litigation Task Force” to challenge state AI laws in court, direct agencies to evaluate state laws deemed “onerous,” and push the Federal Communications Commission and Federal Trade Commission toward national standards that override state rules.
Notably, the EO would make David Sacks – Trump’s AI and crypto czar and co-founder of venture capital firm Craft Ventures – jointly responsible for creating a uniform legal framework. That would give Sacks direct influence over AI policy, going beyond the typical role of the White House Office of Science and Technology Policy and its chief, Michael Kratsios.
Sacks has publicly advocated blocking state regulation and keeping federal oversight minimal, favoring industry self-regulation to “maximize growth.”
The patchwork argument
Sacks’ position reflects the views of much of the AI industry. Several pro-AI super PACs have emerged in recent months, pumping hundreds of millions of dollars into local and national elections to oppose candidates who support AI regulation.
Leading the Future – backed by Andreessen Horowitz, OpenAI president Greg Brockman, Perplexity, and Palantir co-founder Joe Lonsdale – has raised more than $100 million. This week, Leading the Future launched a $10 million campaign to push Congress to develop a national AI policy that overrides state laws.
“When you’re trying to drive innovation in the tech sector, you can’t have a situation where all these laws keep popping up from people who don’t necessarily have the technical expertise,” Vlasto told TechCrunch.
He argued that a patchwork of state regulations would “slow us down in the race against China.”
Nathan Leamer, executive director of Build American AI, the PAC’s advocacy arm, confirmed that the group supports preemption even without AI-specific federal consumer protections in place. Leamer argued that existing laws, such as those dealing with fraud or product liability, are sufficient to manage the harm caused by AI. While state laws often seek to prevent problems before they arise, Leamer favors a more reactive approach: letting companies move quickly and resolve problems in court later.
No preemption without representation
Alex Bores, a New York Assembly member running for Congress, is one of Leading the Future’s first targets. He sponsored the RAISE Act, which requires large AI labs to have safety plans to prevent critical harm.
“I believe in the power of AI, and that’s why it’s so important to have reasonable regulations,” Bores told TechCrunch. “Ultimately, the AI that is going to win in the market will be trustworthy AI, and the market often undervalues, or offers weak short-term incentives for, investment in safety.”
Bores supports a national AI policy, but argues that states can act more quickly to address emerging risks.
And it’s true that states are moving faster.
As of November 2025, 38 states had adopted more than 100 AI-related laws this year, primarily targeting deepfakes, transparency and disclosure, and government use of AI. (A recent study found that 69% of these laws impose no requirements on AI developers.)
Congressional activity provides more evidence for the “slower than the states” argument. Hundreds of AI bills have been introduced, but few have been passed. Since 2015, Rep. Lieu has introduced 67 bills to the House Science Committee. Only one became law.
More than 200 lawmakers signed an open letter opposing preemption in the NDAA, arguing that “states serve as laboratories of democracy” and must “retain the flexibility to address new digital challenges as they arise.” Nearly 40 state attorneys general also sent an open letter opposing a national ban on AI regulation.
Cybersecurity expert Bruce Schneier and data scientist Nathan E. Sanders – authors of Rewiring Democracy: How AI Will Transform Our Politics, Our Government, and Our Citizenship – say the patchwork complaint is overblown.
AI companies are already complying with stricter European regulations, they note, and most industries are finding a way to operate under various national laws. The real motive, they say, is to avoid accountability.
What could a federal standard look like?
Lieu is drafting a 200-plus-page megabill that he hopes to introduce in December. It covers a range of issues, including penalties for fraud, protections against deepfakes, whistleblower protections, computing resources for academia, and mandatory testing and disclosures for large language model companies.
The latter provision would require AI labs to test their models and publish the results – something most now do voluntarily. Lieu has not yet introduced the bill, but he said it does not require any federal agency to directly review AI models. That sets it apart from a similar bill introduced by Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT), which would require a government-run evaluation program for advanced AI systems before they are deployed.
Lieu acknowledged his bill wouldn’t be as strict, but he said it had a better chance of passing.
“My goal is to get something passed into law this quarter,” Lieu said, noting that House Majority Leader Scalise is openly hostile to AI regulation. “I’m not writing a bill that I would have if I were king. I’m trying to write a bill that could pass a Republican-controlled House, a Republican-controlled Senate and a Republican-controlled White House.”