After the House of Representatives passed its version of the One Big Beautiful Bill Act in May, Rep. Marjorie Taylor Greene of Georgia was surprised to learn that it included a provision to ban state governments from regulating artificial intelligence systems.
“I am adamantly OPPOSED to this and it is a violation of state rights and I would have voted NO if I had known this was in there,” she tweeted last month. Hers was the first significant vocal opposition from a Republican member to the provision that would divide congressional Republicans and prove to be one of several obstacles on the way to the bill’s passage. The moratorium would have prevented states from passing legislation or regulations designed to restrict artificial intelligence models or systems.
The Senate tied a five-year moratorium on state AI regulations to critical broadband funding, but the provision didn’t make it into the final version of the bill: the Senate voted 99-1 to strip it. The moratorium also faced opposition from 17 Republican governors, led by Gov. Sarah Huckabee Sanders of Arkansas, who sent a letter to Speaker Mike Johnson and Senate Majority Leader John Thune urging them to strike the provision. A bipartisan group of 40 state attorneys general opposed it as well.
What form regulation of AI should take is a matter that has divided congressional Republicans. On one side are those who want a light regulatory touch so that American companies can create new AI models to compete with China’s. On the other are those who worry about the revolutionary economic and social changes, such as job losses, that AI is set to bring and see a need for stringent regulations in some areas.
President Donald Trump’s administration thus far has signaled a more hands-off approach to AI regulation. One of his first acts in office was to rescind the Biden administration’s regulatory framework that Trump’s executive order said contained “barriers to American AI innovation.”
As it stands, lawmakers in all 50 states have introduced AI-related legislation this year, with dozens of bills passed. But there is no federal framework to govern AI, and stripping states of the ability to regulate it in the absence of a national proposal could leave American citizens exposed to unconstrained, society-transforming change. Still, Texas Sen. Ted Cruz, the moratorium’s main proponent in the Senate, remains bullish on the idea of preventing states from enacting an array of potentially conflicting AI regulations. He told reporters this week that the moratorium would “absolutely” come back, though “time will tell” in what form.
The goal, according to its proponents, is to avoid impeding AI innovation with a maze of regulations. “We’re going to make a huge mistake if we have a patchwork of state-level regulations, because the last time I checked, in China, they’re not doing this on a province level,” North Carolina Sen. Thom Tillis, who cast the lone vote to keep the moratorium in the bill, told The Dispatch. “The [Chinese Communist Party] is going to put their foot on the accelerator, and what we’re doing is actually putting a little bit of water in our gas tank.”
AI industry heads and lobbyists have argued for the federal government to set the standard for regulation. That would spare companies the headache of spending resources on trying to navigate a collection of disparate laws throughout the country.
“If you have this patchwork of 50 different state laws, it’s going to be much more difficult for AI adopters to use AI, and that’s going to make them less competitive nationally and globally,” said Aaron Cooper, the senior vice president for global policy at the Business Software Alliance, an industry trade group whose members include OpenAI and Oracle.
In addition to hindering efficiency and economic growth, an environment that throttles companies’ ability to innovate on AI has dire national security implications, as officials have warned that America must keep up with China. As Vice President J.D. Vance told the New York Times in May, “If we take a pause, does the People’s Republic of China not take a pause? And then we find ourselves all enslaved to PRC-mediated AI?”
Still, populist and socially conservative Republicans have warned that technological advancements from AI could bring revolutionary economic and social consequences, and thus require prudent regulation. Sen. Josh Hawley of Missouri was one of the most vocal GOP opponents of the moratorium. “Are we going to have any more assistants in America? Are we going to have any more accountants in America? Are we going to have any more lawyers in America? Are we going to have any more factory workers, pretty soon, any more truck drivers?” he told The Dispatch. “If these AI enthusiasts are right, we’re looking at a transformation of our economy on par with the Industrial Revolution, which will mean massive and potentially severely dislocating change.”
In his Times interview, Vance downplayed the downstream economic consequences of AI but expressed serious concern about the potential for people to replace genuine social interactions with digital companionship. “There’s a level of isolation, I think, mediated through technology, that technology can be a bit of a salve,” he said. “It can be a bit of a Band-Aid. Maybe it makes you feel less lonely, even when you are lonely. But this is where I think AI could be profoundly dark and negative. I don’t think it’ll mean three million truck drivers are out of a job. I certainly hope it doesn’t mean that.”
Thus far, the social concerns have been more pronounced. A week after Grok, X’s AI chatbot, began spewing antisemitic messages, even calling itself “MechaHitler” at one point, X owner Elon Musk unveiled AI “Companions” that premium Grok users can converse with. One of them, a female anime avatar, has a mode that has been described as “NSFW.”
The tensions around AI were bound to come to a head in Trump’s second term given his coalition. In one corner are some traditional Republicans who are generally pro-innovation and anti-regulation, as well as hawks who want to compete with China. In the other are populists who worry about the job losses, plus other New Right types who fear the social consequences. Also in the mix with them are the Silicon Valley tech bros who are developing new AI models.
That’s the state of play as Congress considers whether to take up a moratorium on state AI regulation again. Industry heads and lobbyists have pointed out that such a measure would not stop the federal government from making its own regulations. “What I think we all want is having a good framework at the federal level that applies across the country,” Cooper told The Dispatch.
But no federal regulatory framework was attached to the moratorium in the recently passed megabill, and Congress does not appear close to formulating one now. Bills introduced this year would create a public awareness campaign about AI, develop resources to help small businesses use it, and craft strategies to secure AI-related technologies. Still, only one of the bills has made it through committee, and none of them represents a comprehensive framework.
“I think there are a lot of fast-moving developments, and there doesn’t seem to be a lot of appetite on the Republican side for having any guardrails,” Sen. Martin Heinrich of New Mexico, a Democrat who serves as a chair of the Senate Artificial Intelligence Caucus, told The Dispatch. “So I think that’s a risky approach, but it seems to be where we are right now.”
In crafting an AI framework, several points of contention exist among different interests within and related to the industry. Sen. Mike Rounds of South Dakota, who serves as the Republican chair of the AI Caucus, highlighted the importance of honoring patents and copyrights in AI development.
“That’s going to require national attention,” he told The Dispatch, “but it also means that you’re probably going to have to work through two different committees in the Senate, both the Judiciary Committee and the Commerce Committee, for jurisdiction. I think that may be something that will have to be worked on. But I’ve suggested, and we’ve actually tried to coordinate, to bring in the different parties to maybe sit down and offer a package that both sides could live with and ask Congress to bless it. And I think that’d be the best type of legislation we could do.”
For him, the moratorium controversy served as a heads-up for Congress to form long-term plans to govern AI as it develops.
“I don’t think it would take that long to develop it,” Rounds said of a federal framework, “but I think it had to come to a focal point to say there’s a whole lot of states out there that would like to do their own. And so, I think it behooves both the AI developers and the patent folks to find a path forward that’s better than the current system of state by state. Everybody wins if you have a single policy that people have agreed to in advance.”