Last year, the latent and conflicting undercurrents of artificial intelligence policy ignited in California. SB-1047, introduced by state Sen. Scott Wiener, called for developers of major AI models to adopt stringent safety measures, including auditing, reporting, and kill switches. It went through rounds and rounds of markups, saw fervent advocacy from both sides, and was ultimately vetoed by Gov. Gavin Newsom.
But even though SB-1047 was vetoed, California, home of the major AI developers, remains a testing ground for new AI regulation. Earlier this year, it saw the introduction of SB-813, the first bill to put a new and important idea into legislative text: private AI governance.
Private AI governance means what it sounds like: AI companies would be audited by private regulatory organizations, called Multistakeholder Regulatory Organizations (MROs). SB-813, and future legislative proposals like it, would offer AI developers a liability shield against tort claims that might arise from improper or harmful use of their systems. To receive this shield, developers would have to comply with standards set by a third-party regulatory organization licensed by the attorney general.
SB-813 is the first major piece of legislation that picks up on this idea, but it may well not be the last—the bill could serve as model legislation for other states behind the frontier, or even as a template for a similar federal approach. And because the Senate struck down the recently proposed moratorium on state-level AI legislation, state lawmakers will likely be reinvigorated in their pursuit of new state-level regulation. Against the backdrop of Tuesday’s stunning 99-1 Senate vote, advocating for private regulation might even be the next best way for opponents of tech regulation to get their voices heard in upcoming state-level battles.
Private governance isn’t new, though its application to AI is novel; you might find the MROs similar to the private credit rating agencies to which the government outsources risk assessments of debtors. The premise is simple: Lean, private organizations might track the complexities of the entities they regulate better than the government can. But the model also raises the same old questions about private regulators, namely: Can we trust them to carry out their task conscientiously? Or do we risk another 2008?
The answers that state legislators find to these questions matter. Indeed, SB-1047 had national repercussions: Its death served as a final nail in the coffin for Biden-era attempts to introduce wide-ranging stipulations on the development and deployment of advanced AI—such as the controversial executive order on “safe, secure, and trustworthy” AI, which was rescinded by the Trump administration. SB-813 will offer no such clarity anytime soon: The appropriations committee has decided to hold the bill for now. But supporters of the bill—like Fathom, a policy organization that has sponsored SB-813—will not hesitate to pitch private AI governance again, in Sacramento and elsewhere. For just one recent example, consider Fathom’s submission one month ago to the federal government’s request for contributions to its AI Action Plan, in which Fathom again called for certifying third-party private governance entities. What to make of it?
Advanced AI systems present unique challenges to our existing legal and regulatory frameworks. Copyright law has trouble parsing the legality of training AI models on large amounts of somewhat public data; liability law has trouble determining how much responsibility for the misuse of such models can be placed on their developers. Faced with these shortcomings, one approach might be to reform legal institutions like copyright and tort to fit this new era of technology. But the odds are admittedly stacked against this: Reforming rich bodies of established jurisprudence can take a long time, and such a campaign would face entrenched interests at every step—all while the pace of AI diffusion throughout societies and economies continues to accelerate.
The advocates of private AI governance see a shortcut: allow alternative regulatory models to supplant existing legal norms. Under a private model, all the important work happens away from the established legal institutions anyway—their function gets reduced to a backstop, a stick to push AI developers toward the carrot of oversight by private regulatory organizations instead. The force of this incentive, paradoxically, depends on our legal institutions’ unpredictability: The reason current AI developers ought to be worried about liability is that it could be applied to them in unforeseen ways, exposing them to major financial risk.
Consider an AI developer who has just built a new model they believe to be largely safe, but susceptible to some unavoidable instances of nefarious misuse. They now face a choice: They could submit their product for assessment by a private governance body in exchange for a liability shield, with all the bureaucratic requirements that might come with it, or they could roll the dice on facing liability. For them to choose private governance, they have to believe that exposure to liability is simply too risky even after they’ve exercised reasonable care. This is exactly what supporters of private governance consider its virtue—shielding responsible developers from unreasonable liability. But if liability were tenable for responsible developers, agreeing to private governance would be much less attractive.
Fundamentally, this entire perspective takes a fairly dim view of governments’ capacity for incremental reform—in a way that perhaps fits the spirit of our time, as the political current continues to move power from Congress to the executive, and as DOGE and the broader administration take the chainsaw to perceived administrative bloat. But giving up entirely on our legal and political institutions in favor of external, private approaches risks becoming self-fulfilling. On the legal side, private AI governance asks liability law to serve as a plausible threat: Being subjected to liability needs to be so inconvenient as to motivate participation in private governance—so reforming it becomes counterproductive. On the political side, we further atrophy state capacity: Governments would have less and less incentive to understand what’s happening in detail, to stay in close touch with experts and industry. Lawmakers are invited to grow complacent, to stay off the ball—and ultimately risk losing touch with what developments at the technology frontier require. In that sense, private AI governance risks speeding up what it’s trying to fix; frustration about the law’s current capacity might lead to its further hamstringing.
Private AI governance mostly hits after the fact. The mechanisms by which compliance is ensured are delayed, applying well after deployment. At no point during potentially reckless development and deployment does anyone stop a company—victims of such behavior are merely empowered to try to sue that company into bankruptcy after all is said and done. The resulting lack of binding oversight is not news: The critical failure of private governance in 2008, when rating agencies gave favorable ratings to highly risky loans, was litigated in great detail after the fact. But it took a catastrophe to motivate critical reevaluation of the private governance model—scrutiny of either the organizations or the risky lending practices arrived too late to prevent the crisis itself.
In a lower-stakes market for technology products, ex post enforcement would not be an issue. Any responsible developer would appreciate the substantial financial risks of rushing out reckless deployments and face the strongest incentives not to do so. But Silicon Valley’s AI developers are different, or so they frequently profess. They are motivated by an unwavering belief in their products’ transformative power—and genuinely perceive themselves to be ushering in a new era of technology. To their minds, the deployment of a sufficiently powerful model could be a watershed moment in all of human history. In that view, exposure to liability carries little weight: The world will be so different, and they may be so much richer and more powerful, that a drawn-out liability lawsuit hardly registers. If these developers believe what they say, financial risk is not a sufficiently binding incentive for them.
This is particularly dangerous in conjunction with AI’s outsized risks. Just as readily as advanced AI models can be used for good, they are susceptible to grave misuse. Some of the most promising use cases cut close to some of the most alarming risks:
Generating convincing video content is a mere stone’s throw from creating illicit sexual content of real people; a productive lab-assistant AI might be very useful for creating biological and chemical weapons; and an automated software engineer might just as easily be employed for cybercrime. On risks like these, a miss or two could already cause a lot of harm.
The potential harms of applying the often maximalist Silicon Valley mindset to risks like these can sometimes be stopped only through a priori intervention—a watchful state with the ability to actually intervene. Private governance has no legal basis for intervention: Its regulatory organizations are not part of the executive and are not empowered to truly act. That might be a pendulum swing too far away from the past threat of overregulation.
Private AI governance is a smart way to sidestep our latent problems in search of an acute solution to the questions posed by advanced AI. But these problems are too deep and too important to sidestep. Technology will only get wilder and AI will only get bigger—and failing to bring our governments’ capacity and our laws’ applicability up to speed will become an ever greater burden. Governments must be on top of what’s happening in AI—if only because, if push comes to shove, only the government is empowered to act on the most serious concerns and security implications.
What does that mean for AI governance right now, at the comparatively small scale of state legislative proposals? Looking back to California, concerns about the long-term sufficiency of private AI governance should not necessarily mean dismissing its merits altogether. Indeed, policymakers who recognize that the moment requires regulatory innovation have good reason to pursue it. But they should be mindful not to let it become the thin end of the wedge that drives advanced AI away from the informed oversight of governments. Private governance can and should happen only in parallel with serious maintenance and expansion of relevant state capacity. If it is to serve as a genuine model for a new governance paradigm, it still needs to answer a lot of questions.