Welcome back to Techne! During the couple of years I lived in Chicago, I became somewhat obsessed with the 1893 World’s Columbian Exposition, also known as the Chicago World’s Fair. Coming just over two decades after the Great Chicago Fire of 1871 burned significant parts of the city, the exposition was built in roughly three years, covered 690 acres, and included well over 200 structures. Its main feature was a massive pool surrounded by 14 neoclassical buildings designed by prominent architects. The fair was a defining moment for the nation and featured the original Ferris wheel, the first moving walkway, the introduction of Quaker Oats, shredded wheat, and peanut butter, and electrical devices developed by Nikola Tesla. But it ended in tragedy when the city’s mayor, Carter Harrison Sr., was assassinated days before the fair closed. The Library of Congress has some great resources on the event, including high-quality maps, newspaper clippings, and much more.
The Best AI Law May Be One That Already Exists
I find myself increasingly sympathetic to William F. Buckley Jr. and his inclination to stand “athwart history, yelling Stop, at a time when no one is inclined to do so.” In AI regulation, we desperately need restraint.
The amount of proposed legislation aimed at AI is staggering. Multistate.ai, a government relations company tracking AI legislation, identified 636 state bills in 2024. It’s not even February and there are already 444 state-level bills pending.
Legislators are trying to get ahead of AI by passing bills, an effort to right the supposed wrong of taking a hands-off approach to social media regulation. I’ve always been skeptical of that simple narrative, but either way, the result has been a lot of ill-conceived AI bills.
I’ve been paying close attention to bills in Texas and Virginia that would grant extensive new power to regulate AI. But instead of new laws, leaders could make sure that consumer protection and anti-discrimination laws apply to AI by plugging any gaps. The Massachusetts Attorney General made clear in an advisory that the state would extend its expansive policing power to AI systems. Meanwhile, the federal government alone has issued more than 500 advisories, notices, and other actions to extend regulatory power over AI. The Federal Trade Commission has opened an investigation into AI companies, and dozens of copyright cases are being adjudicated. But to legislators, none of that is as satisfying as a new statute.
In our haste to regulate American innovation, we risk sacrificing the very technological preeminence that has defined our nation’s modern character. Early adoption of technology has traditionally brought higher incomes, more manufacturing jobs, and growth in related industries. In Texas and especially Virginia, data centers are being built, jobs are being created, and tax bases are growing, so it is unclear why leaders would want to jeopardize that just to be first out of the regulatory gate. I feel, as my dad would often say, that we’re cruising for a bruising.
The pacing problem.
Here’s a common question I’m asked: Government moves slowly and AI businesses move fast, so how can government keep up? While our technological capabilities sprint ahead, our social and legal frameworks merely power-walk behind. This pacing problem lies at the heart of AI regulation. Analyst Adam Thierer put together the graph below, which captures the idea.
But I think people are looking at the pacing gap the wrong way. Changing regulatory regimes for every new innovation is a recipe for getting governance wrong. There is value in waiting to see where problems actually arise. In nascent markets and with new technologies, the best response to a widening pace gap is often to wait and maintain regulatory options rather than rushing to close the gap with premature regulation.
Finance has a concept that captures this flexibility. Real options are a kind of investment choice that a company can undertake to respond to changing economic, technological, or market conditions. To be specific, a real option gives a firm’s management the right, but not the obligation, to undertake certain business opportunities or investments. Real options create value beyond the immediate investment by assigning a value to flexibility in the face of uncertainty.
As economists Bronwyn H. Hall and Beethika Khan explained,
The most important thing to observe about this kind of [investment] decision is that at any point in time the choice being made is not a choice between adopting and not adopting but a choice between adopting now or deferring the decision until later.
This same principle applies to regulation. Regulators can act now or hold back their authority in reserve for future use when there is new information. The total value of a new regulation, therefore, includes both its immediate net benefits and the value of preserving future regulatory flexibility. Just as businesses use real options to manage uncertainty in fast-changing markets, regulators should think strategically about their option to wait.
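To make the option value of waiting concrete, here is a minimal sketch in Python. The probabilities, payoffs, and discount factor are purely hypothetical numbers chosen for illustration, not estimates from any study:

```python
# Toy two-period "real option" model of regulating now vs. waiting.
# All numbers below are hypothetical and purely illustrative.

# Suppose the feared harm turns out to be real with probability p.
p_harm = 0.4

# Net benefit of a rule if the harm is real, and net cost (compliance burden,
# forgone innovation) of the same rule if the harm never materializes.
benefit_if_harm = 100.0
cost_if_no_harm = -60.0

# Discount factor applied to acting one period later.
discount = 0.95

# Regulate now: you bear the rule's costs and benefits under uncertainty.
value_now = p_harm * benefit_if_harm + (1 - p_harm) * cost_if_no_harm

# Wait one period, observe whether the harm is real, then regulate only if it is.
# Waiting sacrifices a little timeliness (the discount) but avoids the downside.
value_wait = discount * (p_harm * benefit_if_harm + (1 - p_harm) * 0.0)

option_value_of_waiting = value_wait - value_now

print(f"Regulate now:            {value_now:6.1f}")
print(f"Wait, then decide:       {value_wait:6.1f}")
print(f"Option value of waiting: {option_value_of_waiting:6.1f}")
```

Under these made-up numbers, waiting wins because the regulator avoids imposing compliance costs in the state of the world where the feared harm never materializes; flip the assumptions (a high probability of harm, a steep cost of delay) and acting now can come out ahead. The point is not the specific figures but that the flexibility to wait has a value of its own.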
Still, what I’ve presented is the best case for those worried about the pacing problem. AI regulation on the ground is different from what the law books might suggest. There are more than 500 AI-relevant regulations, standards, and other governance documents at the federal level; countless algorithmic discrimination cases to rely upon; an open FTC inquiry into the dealings of Alphabet, Amazon, Anthropic, Microsoft, and OpenAI; consumer protection authority; product recall authority; a raft of court cases; and on and on.
An explicit statute is just one means of governance and it is often the least efficient in dynamic industries. Option theory suggests that regulators should wait to gather more evidence. That’s not what’s happening in Texas and Virginia.
Texas and Virginia.
I’ve been keeping a close watch on two state bills, one in Texas and another in Virginia, because I think they could be bellwethers for other states.
The Texas Responsible AI Governance Act, or TRAIGA, is the more confusing of the two. You’d think red-state legislators would hesitate to model an AI bill on the European Union’s AI Act, whose regulatory burden is estimated to add roughly 17 percent to the total cost of AI deployment, and yet TRAIGA was filed.
TRAIGA imposes a number of obligations on developers, distributors, and deployers of AI systems, regardless of their size. Everyone along the pipeline would be subject to new restrictions, including model developers, cloud service providers, and deployers. Stargate—the AI venture backed by OpenAI, Oracle, and Japan’s SoftBank—is slated to be built in Texas and would be affected.
In a first for state-level regulation, TRAIGA would require AI distributors to take reasonable care to prevent algorithmic discrimination, even though companies are already subject to anti-discrimination laws in finance, housing, education, and the like. It also bans AI systems deemed to pose unacceptable risks, particularly those that identify human emotions or capture biometric data without explicit consent. While enforcement would primarily rest with the state’s attorney general, private litigants could pursue legal action over banned AI systems.
The bill would birth yet another regulatory body, the Texas Artificial Intelligence Council, armed with broad powers to issue binding rules on “ethical AI development and deployment.” If those vague terms make you nervous, they should. The legislation would give unelected officials carte blanche to define ethics in AI, all while cases about AI’s basic legal status are still working their way through the courts.
Among other concerns, TRAIGA’s construction feels actively blind to the precedents set by algorithmic discrimination cases over the past couple of years, including the Department of Justice’s settlement with Meta over bias in housing ads and the Federal Trade Commission’s settlement with Rite Aid over algorithmic unfairness. Both confirmed that AI systems are subject to anti-discrimination law.
Dean Ball of the Mercatus Center, who is great on AI policy, also points out that TRAIGA’s compliance requirements are particularly burdensome:
On top of this, TRAIGA requires developers and deployers to write a variety of lengthy compliance documents—“High-Risk Reports” for developers, “Risk Identification and Management Policies” for developers and deployers, and “Impact Assessments” for deployers. These requirements apply to any AI system that is used, or could conceivably be used, as a “substantial factor” in making a “consequential decision.” … The Impact Assessments must be performed for every discrete use case, whereas the High-Risk Reports and Risk-Identification and Management Policies apply at the model and firm levels, respectively—meaning that they can cover multiple use cases. However, all of these documents must be updated regularly, including when a “substantial modification” is made to a model. In the case of a frontier language model, such modifications happen almost monthly, so both developers and deployers who use such systems can expect to be writing and updating these compliance documents constantly.
Kafka would be proud.
Virginia’s House Bill 2094, the High-Risk Artificial Intelligence Developer and Deployer Act, shares commonalities with the Texas bill. Like its Lone Star State cousin, HB 2094 borrows heavily from the EU’s regulatory playbook. The bill also has wobbly language—“consequential decisions,” “substantial factors,” and “high-risk” applications—that would need to be defined in court. And like the Texas bill, the Virginia bill seems blissfully unaware that we already have many tools to address its stated concerns, from consumer protection laws to civil rights statutes and even state-level privacy laws. Why not start there?
What should states be doing?
So what type of regulation should states be pursuing instead? When it comes to Virginia, Thierer has the right idea:
Rather than adopting HB 2094 and creating new, burdensome regulatory requirements, Virginia should instead look to modify existing laws as needed to ensure they cover algorithmic systems. For example, measures like HB 2411 would give the Department of Law’s Division of Consumer Counsel the ability to “establish and administer programs to address artificial intelligence fraud and abuse.” Another proposal, HB 2554, would require new disclosure requirements for AI-generated content such that any content produced by a generative artificial intelligence system includes “a clear and conspicuous disclosure.” While these laws would add some new regulatory requirements and budgetary expenditures, these measures at least have the benefit of being somewhat more focused in scope and intent than the open-ended nature of HB 2094.
In seeking appropriate AI regulation, legislators should follow three guiding principles.
First, they should focus on actual harms rather than theoretical boogeymen. The courts and existing consumer protection frameworks are already handling algorithmic discrimination cases. The system is working, maybe not as fast as some would like, but it’s working. Adding another layer of state-specific rules doesn’t fix problems that haven’t materialized.
Second, legislators should leverage existing legal frameworks. They don’t need to reinvent the legal wheel for every new technology. The beauty of common law is its adaptability. Courts have been handling new technologies for centuries without needing special AI councils or novel regulatory frameworks. Massachusetts showed the way by simply clarifying that existing consumer protection laws apply to AI. Sometimes the best solution is the one you already have.
Third, state lawmakers shouldn’t outsource the hard work of legislating to a new agency, as TRAIGA does. When legislators punt their responsibilities to unelected bureaucrats, the result can be a regulatory mess, especially if an agency head comes along and tightens the screws on everyone.
The ghost of social media regulation haunts our statehouses, driving legislators to action when patience might serve them better. In their rush to avoid past mistakes, they risk making entirely new ones. The bills in Texas and Virginia are just two such examples. But we don’t need new regulatory bodies or endless paperwork requirements to govern AI. We need the wisdom to recognize that our existing legal framework is more robust and adaptable than we give it credit for and the patience to let it work. Let’s hope our state legislators can learn that lesson before they regulate American innovation right into the ground.
Until next week,
🚀 Will
Notes and Quotes
- On Tuesday, the Boom Supersonic XB-1 jet became the first civilian aircraft to go supersonic over the continental United States. Even the supersonic Concorde, which flew from 1976 to 2003, was barred from flying supersonic over land in the United States; its supersonic legs were transatlantic. The Concorde was a joint venture between the French and British governments.
- From the Washington Post: “Ruptures of undersea cables that have rattled European security officials in recent months were likely the result of maritime accidents rather than Russian sabotage, according to several U.S. and European intelligence officials. The determination reflects an emerging consensus among U.S. and European security services, according to senior officials from three countries involved in ongoing investigations of a string of incidents in which critical seabed energy and communications lines have been severed.”
- Genomic research is upending what we think we know about archaeology. A recent report, for example, found that one Iron Age society, which had been assumed to be male-dominated, actually centered on women. If you’re looking for a podcast that will get you up to speed on these developments, check out this conversation between Dwarkesh Patel and David Reich.
- The conversation over ChatGPT’s water and energy usage is out of proportion. Andy Masley, the director of Effective Altruism DC, has the receipts: “Sitting down to watch 1 hour of Netflix has the same impact on the climate as asking ChatGPT 300 questions. I suspect that if I announced at a party that I had asked ChatGPT 300 questions in 1 hour I might get accused of hating the Earth, but if I announced that I had watched an hour of Netflix or that I drove 0.8 miles in my sedan the reaction would be a little different. It would be strange if we were having a big national conversation about limiting YouTube watching or never buying books or avoiding uploading more than 30 photos to social media at once for the sake of the climate.”
- Eric Berger, author of two bestselling books on the space industry, has profiled K2, a California-based satellite company. Instead of going smaller with its satellites, “the company is now building its first ‘Mega Class’ satellite bus, intended to have similar capabilities to Lockheed’s LM2100: 20 kW of power, 1,000 kg of payload capacity, and propulsion to move between orbits. … The biggest difference is cost. K2 aims to sell its satellite bus for $15 million.” Why is this important? As Berger explains, “About a month ago, K2 announced that it had signed a contract with the U.S. Space Force to launch its first Mega Class satellite in early 2026. The $60 million contract for the ‘Gravitas’ mission will demonstrate the ability of K2’s satellite bus to host several experiments and successfully maneuver from low-Earth orbit to middle-Earth orbit (several thousand km above the surface of Earth).”
- The attention DeepSeek received over the past week drove people to its site, crashing the services built on its AI model. I’ve been wondering whether DeepSeek is structurally forced to be open source. The company seems to have most of China’s advanced chips, which are in limited supply, and it appears to be straining on the inference side, the compute consumed when users make lots of queries. Does anyone know the best estimates for China’s total compute capacity? Let me know in the comments.
- The New York Times recently profiled Curtis Yarvin, who first came to fame as the pseudonymous writer Mencius Moldbug. Yarvin’s basic schtick is that democracy has failed and that we should strive for monarchy, citing corporate structures as his model. “These things that we call companies are actually little monarchies,” he states, pointing to Apple as an example. The economist Alex Tabarrok had a great critique of Yarvin, which basically comports with my view. Apple may operate internally like a monarchy, but it can only do so because it exists within a democratic institutional environment that provides the rule of law, property rights, and contract enforcement. There is an important distinction to be made between institutional environments (the broader legal and social framework) and institutional arrangements (how specific organizations structure themselves). Yarvin’s argument is like claiming a saltwater fish tank is the ocean. He mistakes the contained system for the ecosystem that makes it possible.
- I am not at all a fan of this move: The Trump administration has dismissed a security board investigating Chinese intrusions into major U.S. internet service providers. Dissolving an active investigation into state-sponsored cyberattacks against our telecommunications infrastructure seems particularly ill-timed given the growing sophistication of such threats from China.