With Congress stalled, dozens of states have moved ahead on their own artificial intelligence rules — setting child-safety requirements, demanding more transparency from tech firms and creating whistleblower protections. But the White House is pressing for federal action, arguing the growing state “patchwork” hampers innovation.
President Trump and aides, including AI and crypto adviser David Sacks, have warned state laws could be burdensome to industry. Michael Kratsios, head of the White House Office of Science and Technology Policy, said the administration wants a single national framework so innovators “have certainty” and a patchwork of state rules can be avoided.
The administration released a short regulatory framework it wants Congress to adopt. Officials and the president have repeatedly criticized state efforts and even intervened in some Republican-led state work on AI.
In Utah, Republican state Rep. Doug Fiefia proposed a transparency bill aimed at protecting consumers. The bill never reached a vote after what Fiefia described as a one-line memo from the White House saying it opposed the measure and saw it as incompatible with the administration’s AI agenda. A White House official, speaking on background, told NPR the White House has not told a state it cannot enact child safety protections, though they did not comment specifically on the memo to Fiefia.
Fiefia, a former Google employee, said he had already been hearing the administration’s concerns and argued that both state and federal lawmakers should be involved in AI regulation. He noted, however, that Congress is gridlocked and often unable to act — leaving states like Utah to step in to protect constituents, especially on child safety.
Other Republican state lawmakers have taken similar views. Pennsylvania State Sen. Tracy Pennycuick said federal action often takes too long and that states can spot problems early and respond quickly. Pennycuick sponsored the SAFECHAT Act in Pennsylvania, which requires AI chatbots to include safeguards against content encouraging self-harm or violence. In Texas, State Sen. Angela Paxton said she prefers strong federal legislation to avoid a regulatory patchwork but supports states’ ability to pass laws until Congress acts.
The White House framework lists principles it wants Congress to follow, including protecting children and shielding consumers from rising data center costs. Reaction to the framework has been mixed. Some lawmakers and experts appreciate the idea of a single national standard but say the White House document lacks detail.
Riki Parikh, policy director at the nonprofit Alliance for Secure AI, said the framework doesn’t adequately address issues like job displacement or company accountability. “A federal standard is better than a 50-state patchwork,” Parikh said, “but what they are proposing here is not sufficient. It does not earn the right to replace the good work states are doing.”
Tennessee Attorney General Jonathan Skrmetti called the framework a positive step and said it was preferable to last year’s White House push for a 10-year moratorium on state AI laws, which many saw as favorable to tech companies. That moratorium effort, backed by allies including Sen. Ted Cruz, ultimately failed. Still, Skrmetti voiced concern about the administration’s closeness to the AI industry.
Public opinion reflects skepticism about the administration’s relationship with Big Tech. A Morning Consult and Tech Oversight Project survey found a majority believe the Trump administration is too close to tech companies, and a Vanderbilt University poll showed bipartisan support for AI regulation, with more Republicans than Democrats favoring limits.
On Capitol Hill, some Republicans have embraced the White House framework, but concrete legislation has yet to advance. Sen. Marsha Blackburn, a Tennessee Republican, said she’s coordinating with the White House on her TRUMP AMERICA AI Act, which expands on the administration’s four-page framework. The White House says it continues to have “productive conversations” with lawmakers.