The Worst Possible Reason to Support New AI Regulation

Sam Altman prepares to testify before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law at an oversight hearing on rules for artificial intelligence in Washington, D.C., on May 16, 2023. (Photo by Nathan Posner/Anadolu Agency/Getty Images)

Dear Capitolisters,

One of this newsletter’s persistent themes is that we should generally be skeptical of easy policy solutions, because a lot of policy is—like the real world—difficult, complicated, and replete with real trade-offs. There are many examples of this reality throughout history, but a new one relates to whether and how to regulate artificial intelligence.

On one side, some industry experts and technologists familiar with the speed of AI development and the technology’s potential—especially in the wrong hands—worry about what it might become without some sort of government oversight. The risk of Skynet or whatever, so they say, may be low, but because it’s an existential risk it merits regulation. On the other side, and certainly where my sympathies lie, are other technologists, many (most?) economists, and people familiar with the long history of hysterical technophobia and innovation-stifling regulation, who warn against heavy-handed government intervention in this space. Overall, these debaters are smart, well-intentioned people discussing a tough issue, and reasonable people can disagree based on legitimate evidence and argument.

Judging from the headlines and various politicians’ statements, however, the most talked-about reason to regulate AI—because industry leaders themselves demand it—could very well be the worst.

The Long, LONG History of Incumbents Demanding They Be Regulated

Concerns over AI’s economic and existential risks are not exactly new, but they’ve certainly increased since Microsoft-backed OpenAI released GPT-4, the fourth version of the model behind its AI chatbot ChatGPT, this March. Shortly thereafter, for example, bajillionaire Elon Musk joined several AI experts and industry executives to pen an open letter calling on all AI labs to take a six-month pause in developing systems more powerful than GPT-4, citing AI’s “profound risks to society and humanity.” The concern hit a fever pitch last week when OpenAI CEO Sam Altman told the Senate Judiciary Subcommittee on Privacy, Technology, and the Law that he supports wide-reaching AI regulation (e.g., licensing by a new government or international agency).
