
Welcome to the Techlash

The U.S. needs to return to lighter, more flexible regulation when it comes to AI.


Welcome back to Techne! Something I just learned: The Susquehanna River, which drains a large part of eastern Pennsylvania and parts of New York and Maryland, is one of the oldest rivers in the world. It was formed 320 million to 340 million years ago, making it older than the Appalachian Mountains through which it flows.

How Tech Regulatory Approaches Have Changed, and Not for the Better

As president, Bill Clinton had failures, both personal and professional, but one thing he got right during his time in office was the Framework for Global Electronic Commerce. Released in July 1997, the framework served as both a statement and guidance for internet policy in those early days of the technology.

Back then, the tone was optimistic:

Many businesses and consumers are wary of conducting extensive business electronically, however, because the Internet lacks a predictable legal environment governing transactions and because they are concerned that governments will impose regulations and taxes that will stifle Internet commerce.

And so the Clinton administration developed the framework to foster “increased business and consumer confidence in the use of electronic networks for commerce.” Ira Magaziner, who formerly led the administration’s disastrous health care initiative, spearheaded the effort, which presented five principles based on the administration’s “consultation with industry, consumer groups, and the Internet community.”

1. The private sector should lead. The Internet should develop as a market driven arena not a regulated industry. Even where collective action is necessary, governments should encourage industry self-regulation and private sector leadership where possible.

2. Governments should avoid undue restrictions on electronic commerce. In general, parties should be able to enter into legitimate agreements to buy and sell products and services across the Internet with minimal government involvement or intervention. Governments should refrain from imposing new and unnecessary regulations, bureaucratic procedures or new taxes and tariffs on commercial activities that take place via the Internet.

3. Where governmental involvement is needed, its aim should be to support and enforce a predictable, minimalist, consistent and simple legal environment for commerce. Where government intervention is necessary, its role should be to ensure competition, protect intellectual property and privacy, prevent fraud, foster transparency, and facilitate dispute resolution, not to regulate.

4. Governments should recognize the unique qualities of the Internet. The genius and explosive success of the Internet can be attributed in part to its decentralized nature and to its tradition of bottom-up governance. Accordingly, the regulatory frameworks established over the past 60 years for telecommunication, radio and television may not fit the Internet. Existing laws and regulations that may hinder electronic commerce should be reviewed and revised or eliminated to reflect the needs of the new electronic age.

5. Electronic commerce on the Internet should be facilitated on a global basis. The Internet is a global marketplace. The legal framework supporting commercial transactions should be consistent and predictable regardless of the jurisdiction in which a particular buyer and seller reside.

Confronted with the transformative potential of AI, the Biden administration issued the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” a sprawling document that spans 13 sections, runs more than 100 pages, and lays out nearly 100 deliverables for every major agency of the executive branch.

The consensus that produced the Clinton-era framework no longer exists. The techlash is real. The sense that we got it wrong the first time, that we didn’t regulate quickly enough, has been driving the politics of AI. You can see it at every level of government: in statehouses, in government agencies, and in Congress. The prevailing view is that tech needs to be reined in.

I might have had occasional quibbles, but up until recently, I thought the Federal Trade Commission and the Department of Justice did a fairly good job of policing bad behavior, like fraud, without going too far. But under the Biden administration, that’s changed. Jonathan Kanter, assistant attorney general for the Antitrust Division, and FTC Chair Lina Khan are trying to redirect the focus of each agency.

There is a lot of good evidence to suggest that internet platforms and e-commerce sites flourished in the United States because the government kept to its core competencies, working to “ensure competition, protect intellectual property and privacy, prevent fraud, foster transparency, and facilitate dispute resolution, not to regulate.” 

That older era, which is giving way to something new, might be thought of as the era of minimally viable regulation. In the startup world, the minimally viable product (MVP) is the simplest form of a product that solves a core problem and allows for quick user feedback. Like an MVP in business, minimally viable regulation asks us to seek the least regulatory intervention necessary to achieve a specific policy goal. The idea is to start with simple, essential rules and adjust them based on outcomes, feedback, and technological or societal shifts, rather than imposing a complex, fully developed regulatory framework from the outset. Issues largely got worked out in the courts; only rarely did a case make it to the Supreme Court or catch the eye of a leader in Congress.

The experience of the last decade with social media has irrevocably shifted the politics of tech regulation at all levels of government. Restraint has given way to more muscular policy agendas.

A bit of backstory.

In 1997, the United States was on the cusp of the internet boom. The internet had been fully commercialized and eBay had been founded only two years earlier; Amazon was only just up and running. The Clinton administration’s framework reflected what was then a bipartisan understanding: Technologies and markets could offer immense potential for economic expansion and innovation, so long as they weren’t hindered by premature or excessive regulatory oversight.

From the very beginning, the internet was populated by “the granola-eating utopians, the solar-power enthusiasts, serious ecologists and the space-station crowd, immortalists, Biospherians, environmentalists, [and] social activists,” observed Howard Rheingold, the founding executive editor of HotWired, Wired magazine’s online offshoot.

From this crucible came the hacker ethic, an impulse that “expresses itself via a constellation of minor acts of insurrection, often undertaken by individuals, creatively disguised to deprive authorities of the opportunity to retaliate.” The internet boosters saw themselves as a vanguard resisting government interference.

John Perry Barlow’s “A Declaration of the Independence of Cyberspace” was written in 1996 at the height of internet optimism. It opened with a salvo:

Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.

Barlow was eclectic. He was a lyricist for the Grateful Dead and a founding member of the Electronic Frontier Foundation, an advocacy organization. But it makes sense that he had a hand in both; the two cultures, San Francisco and Silicon Valley, were twinned. 

Now, just two years after the public release of ChatGPT, we are on the eve of another, even more disruptive technological revolution, and expectations have shifted completely. It is inconceivable that any leader today would call for another framework along the lines of Bill Clinton’s. Nor can I imagine an advocacy group making another declaration like Barlow’s.

Congress poised for AI action.

If the 1996 cyberspace declaration and the 1997 e-commerce framework are products of their time, then the production function has changed. One of the big changes is that tech became big. Apple, Google, Facebook, and Amazon rose to the top of the stock market and moved up on political agendas. They are now in the crosshairs of the Federal Trade Commission, the Department of Justice, and Congress. In becoming big, technology companies were bound to court political controversies, engendering a reaction. The techlash is reshaping the politics of tech. 

In California, state Sen. Scott Wiener got a sweeping AI safety bill passed only for it to be vetoed by Gov. Gavin Newsom. As Wiener explained it, the bill was “an opportunity to apply hard lessons learned over the last decade, as we’ve seen the consequences of allowing the unchecked growth of new technology without evaluating, understanding, or mitigating the risks.” He’s not alone in wanting more stringent AI regulation: There are hundreds of bills being proposed in the states.  

You can see the techlash in the executive branch with Biden’s order, you can see it in the states, and you can see it in Congress with the “Roadmap for Artificial Intelligence (AI) Policy in the United States Senate,” spearheaded by Senate Majority Leader Chuck Schumer along with Sens. Mike Rounds, Todd Young, and Martin Heinrich. It is a 31-page document dotted with action items for regulating uses of AI. Fully implemented, it could cost $32 billion.

While negotiations are in flux, it seems like Schumer will be pushing to secure an AI bill package in the lame duck session of Congress this year. A number of measures have already passed, including nine bills out of the House Science Committee, and 10 bills from the Senate Committee on Commerce, Science, and Transportation. But from everything I’ve heard on Capitol Hill, Schumer wants to establish the National Artificial Intelligence Research Resource (NAIRR) program and pass the Future of AI Innovation Act.

NAIRR would provide academic and nonprofit researchers with the compute power and government datasets needed for education and research. Given that there would be new spending involved, the bill would need to come through the omnibus or the National Defense Authorization Act. That could be a tall order. 

On paper, the Future of AI Innovation Act establishes artificial intelligence standards, metrics, and evaluation tools. But in practice, it formally codifies what the Biden administration was already doing with its executive order and the AI Risk Management Framework. I’ve written before about my concerns with this approach, but I like how tech policy guru Adam Thierer described the catch-22:

While there is nothing wrong with federal agencies being encouraged through the EO to use NIST’s AI Risk Management Framework to help guide sensible AI governance standards, it is crucial to recall that the framework is voluntary and meant to be highly flexible and iterative—not an open-ended mandate for widespread algorithmic regulation. The Biden EO appears to empower agencies to gradually convert that voluntary guidance and other amorphous guidelines into a sort of back-door regulatory regime (a process made easier by the lack of congressional action on AI issues).

I’m just old enough to remember that bygone world of internet optimism. I’m not nostalgic for much, but I do think we’re losing something uniquely American as the tech debate has soured. 

Until next week,

🚀 Will

Notes and Quotes

  • The Federal Trade Commission announced a new “click to cancel” rule to make it easier to cancel subscriptions and memberships for any type of media. The rule will also ban misrepresentations in negative option marketing, such as automatic renewals; require the disclosure of key information before collecting billing details; and require sellers to obtain explicit consumer consent before charging for features.
  • Luxury fashion house Prada and space flight company Axiom Space have unveiled their collaborative design: the outer layer of the spacesuit NASA astronauts will wear during the Artemis lunar landing mission.
  • AMD and Intel, longtime rivals in the processor industry, announced a historic partnership to create an x86 advisory board. The board will help standardize development for x86, the computer processor architecture used in most PCs and servers. This collaboration aims to improve consistency and accelerate the rollout of updates for x86-based systems.
  • Lewis Lapham, an influential American writer and longtime editor of Harper’s, passed away in July. Elias Altman remembers his legacy in this moving essay.
  • The AI investment boom has led to an increase in computing demand, with hundreds of billions of dollars going into data center facilities and power plants. Economist Joseph Politano has a great review of the numbers: “Right now, US data center construction is at a record-high rate of $28.6B a year, up 57% from last year and 114% from only two years ago. For context, that’s roughly as much as America spends on restaurant, bar, and retail store construction combined.”
  • According to a new study on Zoom fatigue, remote work has drastically increased our cognitive burden by forcing our brains to process an endless stream of virtual meetings, while simultaneously blurring the boundaries between work and personal life. The digital nature of our communications—from video calls to text messages—creates a psychological distance that leads us to subconsciously devalue these interactions compared to face-to-face conversations, the study found.
  • The National Highway Traffic Safety Administration (NHTSA) is investigating 2.4 million Tesla vehicles after four crashes involving the self-driving feature, including one fatality in 2023.
  • The Federal Trade Commission has been investigating John Deere’s restrictive repair practices for at least three years. The company’s farm equipment is notorious for being nearly impossible to fix independently, forcing farmers to rely on expensive manufacturer repairs.
  • Cuba’s national power grid failed four times in 48 hours this past weekend, including a major crash Friday that left 10 million people in the dark. 
  • After Starliner’s troubled test flight left astronauts stranded in space, NASA is pivoting to SpaceX’s Dragon capsule for upcoming ISS missions while reassessing the future of Boeing’s spacecraft program.
  • The small town of Green Bank, West Virginia, located within a National Radio Quiet Zone, has become a refuge for a community of electrosensitive people. 

AI Roundup 

  • Amazon is investing in U.S. nuclear company X-energy and plans to collaborate on deploying small modular reactors.
  • The Hoover Institution recently unveiled the Digitalist Papers, an essay and public discussion series on AI regulation.
  • Anthropic released an AI tool this week that can take over the user’s mouse cursor and perform computer tasks.
  • OpenAI and Microsoft are funding AI journalism tools to the tune of $10 million.

Will Rinehart is author of Techne and a senior fellow at the American Enterprise Institute.
