A Win for Algorithms at the Supreme Court

Algorithms are the engines that make the modern internet work. They have enabled the internet to evolve beyond simple online forums and human-powered content moderation. Today, the internet comprises nearly 2 billion discrete websites, some of which, like Facebook, handle billions of new pieces of uploaded content per day. Only algorithms can manage content at that immense scale, freeing the internet from the natural limits imposed by human finitude.
However, algorithms have recently come under fire at the U.S. Supreme Court. In Gonzalez v. Google, family members of a victim of the 2015 Paris terrorist attacks alleged that a Google algorithm had "promoted" ISIS content and thereby helped radicalize the attackers. Because Google promoted particular pieces of content through its algorithm, the argument goes, the company should be held liable for harm caused by those who consumed that content.
Fortunately, last week the Supreme Court vacated the lower court's judgment and remanded the case to the 9th Circuit, issuing a unanimous, unsigned opinion stating that the "plaintiffs' complaint … states little if any claim for relief." The high court's decision rested on a relatively narrow ruling in Twitter, Inc. v. Taamneh, a related case involving Twitter's use of algorithms. Writing for a unanimous court in that case, Justice Clarence Thomas held that social media companies and the terrorist organizations that use them lack a "concrete nexus" that would attach liability to the platform. Twitter had not consciously attempted to aid these organizations, and, although its algorithm passively surfaced radicalizing content for some users, it did so in a content-"agnostic" fashion.
The Twitter ruling maintains a liability shield for the user-driven, content-neutral algorithms that undergird every major website and social media platform, which is a good thing. However, Thomas' opinion in the case did not address the question of liability for algorithmic content moderation, that is, cases where platforms actively choose to remove or promote particular content (e.g., hate speech, obscenity, terrorist propaganda).
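To make that distinction concrete, consider a minimal, hypothetical sketch in Python. Nothing here reflects any real platform's system: the Post fields, function names, and banned-terms list are all invented for illustration. The first function ranks posts purely on user-engagement signals without ever reading the content, roughly the "agnostic" behavior Thomas described; the second inspects the content itself before suppressing it, the kind of active editorial choice the opinion left unaddressed.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str          # the content itself
    clicks: int        # user-engagement signal
    watch_time: float  # seconds users spent on the post

def rank_content_neutral(posts: list[Post]) -> list[Post]:
    """Order posts purely by engagement signals, never reading the text.
    The same rule applies whether a post is a cat video or propaganda."""
    return sorted(posts, key=lambda p: (p.clicks, p.watch_time), reverse=True)

# Hypothetical banned-terms list, used only to illustrate content-aware filtering.
BANNED_TERMS = {"example-banned-phrase"}

def moderate_content_aware(posts: list[Post]) -> list[Post]:
    """Filter posts by inspecting what they say: an active choice
    about particular content, not a content-neutral ranking rule."""
    return [p for p in posts
            if not any(term in p.text.lower() for term in BANNED_TERMS)]
```

The unresolved legal question is which side of that line a platform falls on once it starts making the second kind of decision.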