Algorithms are the engines that make the modern internet work. They have enabled the internet to evolve beyond simple online forums and human-powered content moderation. Today, the internet comprises nearly 2 billion discrete websites, some of which, like Facebook, handle billions of new pieces of uploaded content per day. Only algorithms can manage content at that immense scale, releasing the internet from the natural limits imposed by human finitude.
However, algorithms have been under fire recently at the U.S. Supreme Court. In Gonzalez v. Google, the family of a victim of the 2015 Paris terrorist attacks alleged that a Google algorithm had “promoted” ISIS content and thus helped radicalize the attackers. Since Google promoted particular pieces of content through its algorithm, the argument went, the company should be held liable for the harms caused by those who consumed that content.
Fortunately, last week the Supreme Court vacated the lower court’s judgment and remanded the case to the 9th Circuit, issuing a unanimous, unsigned ruling stating that the “plaintiffs’ complaint … states little if any claim for relief.” The high court’s decision rested on a relatively narrow ruling in Twitter, Inc. v. Taamneh, a related case involving Twitter’s use of algorithms. In that ruling, Justice Clarence Thomas held that social media companies and the terrorist organizations that use them lack a “concrete nexus” that would attach liability to the platform. Twitter had not consciously attempted to aid these organizations, and although its algorithm passively surfaced radicalizing content for users, it did so in a content “agnostic” fashion.
The Twitter ruling maintains a liability shield for the user-driven, content-neutral algorithms that undergird every major website and social media platform, and that is a good thing. However, Thomas’ opinion in the case did not address the question of liability for algorithmic content moderation: cases where platforms actively choose to remove or promote particular content (e.g., hate speech, obscenity, or terrorist propaganda).
This was somewhat surprising, as Thomas has previously signaled his interest in paring back the courts’ broad reading of Section 230 of the Communications Decency Act, the law that shields platforms from liability for their more active content moderation policies. Between Twitter and Gonzalez, court watchers had expected any ruling to touch on Section 230, although whether the justices would narrow or preserve its protections was a hotly debated question. By not ruling on Section 230, the court allowed the beneficial status quo—which has helped make America the global leader in software innovation over the past quarter century—to remain, but it also left the door open for future legal challenges.
To return to first principles, the mere act of organizing content for public consumption is not equivalent to editorializing in favor of that content. This becomes apparent when you think of the internet as an expansion of one of its meatspace precursors, the bookstore. After all, bookstores are expressly in the business of organizing content for public consumption. They choose which books to sell and how prominently to display them. The difference between a book becoming a bestseller and becoming a flop can come down to whether a bookseller gives it pride of place on a top shelf at the front of the store or relegates it to a bottom shelf in the back.
Yet it would strike most people as bizarre and unfair if a family were able to sue a bookstore merely for stocking or displaying a book later found in the possession of a domestic terrorist. This is why a recent attempt by a fringe conservative lawmaker in Virginia to sue Barnes & Noble for selling “obscene” books to minors had no chance of success in court. Booksellers are not civilly liable for the content of books written by others, and carrying such a book does not make a bookstore the equivalent of the book’s author or publisher.
Likewise, an online platform—which operates as an informational intermediary like a bookstore, doing so quite literally in the case of Amazon—should not be considered a publisher of user-created content simply because it decides, via an algorithm, how prominently to display any piece of content.
If we were to hold the internet to a different standard than we do traditional print media, it would open the door to censorship and chill user speech. As Justices Elena Kagan and Brett Kavanaugh argued during oral arguments for Gonzalez v. Google, if the Supreme Court were to expose companies to civil liability based on their algorithms, “lawsuits will be nonstop.” Even if algorithmic recommendations were based on a user’s expressed interests, parents or even users themselves could run to court to claim they were unduly influenced by content fed to them by the likes of Google, TikTok, Instagram, Facebook, and Twitter. The temptation to blame online platforms for users’ own crimes or those of others—and to seek a windfall payout—would prove irresistible, and the resulting flood of lawsuits would have negative consequences for both platforms and users.
Platforms would naturally respond to this new set of incentives in one of two ways: either turning up their content moderation filters to remove even remotely controversial content or getting rid of algorithmic moderation entirely. The former option would result in a wave of false positives as the algorithm removed both radicalizing and deradicalizing content. It is hard, after all, for software to tell the difference between racism and anti-racism, or between fascism and anti-fascism. The ability of ordinary people to freely speak their minds would be restricted significantly.
Other platforms would choose to remove content moderation altogether, betting that doing so would shield them from legal liability since they could no longer be accused of revealing a preference for any particular piece of content. Of course, that would make more of the internet resemble mostly unmoderated forums like 8chan, sites notorious for hosting hate speech and other objectionable content.
By holding platforms liable as publishers of user-produced content for merely applying algorithms that rank or surface content, the Supreme Court would turn swaths of the internet into either overpoliced, sanitized walled gardens or free-for-all cesspools.
It would also overturn judicial precedent. In 2019, in Force v. Facebook, the U.S. Court of Appeals for the 2nd Circuit upheld Facebook’s right to make friend suggestions without being considered a publisher. In the ruling, the court stated that algorithms simply “take the information provided by Facebook users and ‘match’ it to other users … based on objective factors applicable to any content, whether it concerns soccer, Picasso, or plumbers.”
To return to the bookstore metaphor, the decision of an intermediary to cluster similar content (“Science Fiction,” “Religion,” and so on) to help consumers find what they are looking for is not a form of editorializing. The fact that the internet allows for far finer-grained clustering than pre-digital bookstores could ever manage (think “Space Amish Vampire Romance” or “Flying Spaghetti Monster Memes”) does not undermine this core principle.
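To put the point in more concrete terms, the short sketch below (in Python, with invented item names, tags, and view counts; it is an illustration of the general technique, not any platform’s actual code) shows what content-agnostic shelving and matching can look like. Items are grouped and ranked purely by the tags their uploaders attach and by a user’s prior engagement, and nothing in the logic evaluates what any piece of content actually says.

```python
# A minimal, hypothetical sketch of content-agnostic matching: items are shelved
# and recommended purely by user-supplied topic tags and engagement counts.
# Nothing below inspects or evaluates what the content says; the same logic
# applies whether a tag happens to be "soccer", "picasso", or "plumbers".
from collections import Counter, defaultdict

# Toy catalog: each item carries tags chosen by its uploader (illustrative data only).
CATALOG = {
    "item_1": {"tags": {"soccer", "highlights"}, "views": 1200},
    "item_2": {"tags": {"picasso", "art-history"}, "views": 300},
    "item_3": {"tags": {"plumbers", "how-to"}, "views": 800},
    "item_4": {"tags": {"soccer", "tactics"}, "views": 450},
}

def cluster_by_tag(catalog):
    """Shelve items under each of their tags, like sections in a bookstore."""
    shelves = defaultdict(list)
    for item_id, meta in catalog.items():
        for tag in meta["tags"]:
            shelves[tag].append(item_id)
    return shelves

def recommend(history, catalog, k=2):
    """Rank unseen items by overlap with the tags a user has already engaged with."""
    interest = Counter(tag for item_id in history for tag in catalog[item_id]["tags"])
    scores = {}
    for item_id, meta in catalog.items():
        if item_id in history:
            continue
        # Score = shared-tag weight plus a tiny popularity tiebreaker.
        scores[item_id] = sum(interest[t] for t in meta["tags"]) + meta["views"] / 1e6
    return sorted(scores, key=scores.get, reverse=True)[:k]

if __name__ == "__main__":
    print(dict(cluster_by_tag(CATALOG)))    # e.g. {'soccer': ['item_1', 'item_4'], ...}
    print(recommend(["item_1"], CATALOG))   # surfaces item_4 first via the shared 'soccer' tag
```

The ranking rule in this sketch is identical no matter which tag is involved; whatever preference gets expressed comes from the user’s own history, not from any judgment the platform has made about the content.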
It would have been an overreach for the U.S. Supreme Court, a panel of nine justices with inadequate firsthand knowledge of the intricacies of the internet, to redesign our system of algorithmic content moderation. As Kagan herself pointed out during oral arguments, if we want a different system, “Isn’t that something for Congress to do, not the Court?” Legislation has the benefit of going through an extensive process of public hearings and congressional debate, giving any resulting policy a degree of democratic legitimacy.
The Supreme Court should not create a novel distinction between active and passive algorithmic moderation. Applying the deep well of jurisprudence on liability for bookstores and other offline intermediaries to our online platforms makes the internet freer, allows for better user experiences, and gives companies the space to innovate.