In a surprising move that could reshape how AI is regulated across the country, the U.S. Senate has voted overwhelmingly to reject a 10-year freeze on state-level AI rules. Why did it fail — and what’s next for tech companies?
What Just Happened in the Senate
Early Tuesday morning, senators voted 99-1 to strip a federal ban on state AI regulation from President Trump’s tax and spending bill, according to the Financial Times.
The ban would’ve prevented states from crafting their own AI laws for a decade — something Big Tech has pushed hard for, arguing it would spare companies a patchwork of conflicting rules. But lawmakers pushed back.
Sen. John Thune (R-S.D.), a key proponent, argued that a “light touch” approach was the best way to support AI innovation in the U.S.
Not everyone agreed. Sen. Josh Hawley (R-Mo.) blasted the proposal, calling it a “huge giveaway” to irresponsible tech companies.
His concerns echoed a letter sent to the House last week, signed by artists, academics, civil society groups, and tech workers. They warned that the moratorium would let companies off the hook, even in cases of foreseeable harm.
“This moratorium would mean that even if a company deliberately designs an algorithm that causes foreseeable harm… the company making that bad tech would be unaccountable to lawmakers and the public,” the letter said.
The overwhelming vote to remove the ban shows lawmakers are increasingly cautious about giving Big Tech too much leeway, especially as AI systems grow more powerful and less transparent.
What This Means for Tech and Policy
The Senate’s decision leaves states free to move ahead with their own AI laws. That’s a big shift.
California and other tech-forward states have already started regulating algorithmic systems in hiring, surveillance, and policing. Without a federal cap, those efforts are likely to expand — and fast.
For companies, this means adapting to a growing web of local rules. It’s similar to what happened with privacy laws, where firms now deal with everything from CCPA in California to GDPR in Europe.
And as agentic AI — systems that can act autonomously, carrying out tasks without step-by-step human prompts — enters the spotlight, public concern is only rising.
PYMNTS recently reported that while most CFOs at large companies know what agentic AI is, only 15% are considering deploying it.
In other words, the tech is hot — but trust is low.
What Lawmakers and Advocates Are Saying
“We want to be the leaders in AI and quantum and all these new technologies. And the way to do that is not to come in with a heavy hand of government; it’s to come in with a light touch.” — Sen. John Thune (R-S.D.)
“I think it’s terrible policy. It’s a huge giveaway to some of the worst corporate actors out there.” — Sen. Josh Hawley (R-Mo.)
“This moratorium would mean that even if a company deliberately designs an algorithm that causes foreseeable harm… the company making that bad tech would be unaccountable to lawmakers and the public.” — Joint Letter from Artists, Civil Groups, and Tech Workers
What Comes Next
States now have a clear path to set their own AI rules. Expect a rush of proposals around algorithm accountability, transparency, and worker rights.
This could also push federal lawmakers to re-engage with national AI policy — but for now, the door is wide open for localized action.
Will 50 sets of AI rules push innovation forward — or slow it down? We’d love to hear your take.
