
Breaking Down Texas’ Disruptive New Social Media Law

A Dispatch explainer.

Are you ready for foundational shocks to the internet as we know it? That’s the sort of question tech-policy watchers are mulling following a surprise decision at the 5th Circuit Court of Appeals last week concerning a law Texas Republicans passed last year that would ban social media companies from “censoring” user content based on the viewpoint it expresses. A lower court had blocked the law from taking effect on the grounds that it violated private platforms’ First Amendment right to decide what sort of speech to host on their own platforms.

But the 5th Circuit overturned that emergency injunction, permitting the law to take effect while challenges to it work their way through the courts. That means that for the first time, a state law has gone into operation compelling big social media companies like Facebook and Twitter to host content that violates their own terms of service. (Florida passed a related bill last year as well, which a different court slapped down before it could take effect.)  

HB 20, the Texas bill in question, requires social media companies to take a number of steps ostensibly aimed at making their content moderation more transparent and fair. A platform must disclose the content management algorithms by which it “curates and targets content to users,” “places and promotes content, services, and products,” “moderates content,” and so on, as well as publish an “acceptable use policy” detailing its content moderation standards and practices and a “biannual transparency report” detailing how it has put those standards into practice.

But the real kernel of the legislation is its prohibition on content censorship: “A social media platform”—limited in the text of the bill to sites with more than 50 million monthly active users—“may not censor a user” based on “the viewpoint of a user or another person,” “regardless of whether the viewpoint is expressed on a social media platform or through any other medium.” Under this law, many common content moderation practices on sites like Facebook and Twitter—banning accounts that engage in hate speech or other types of forbidden content, for instance, or suspending them until they delete objectionable posts—become illegal.

Internet companies have long enjoyed broad legal discretion over their own content moderation practices thanks to both the First Amendment and Section 230 of the 1996 Communications Decency Act, which states that “no provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

That’s a pretty strong legal protection for internet companies, which is why nobody was particularly surprised when lower courts initially halted Texas’ and Florida’s laws. That goes even for many enthusiastic partisans of such anti-Big Tech efforts, who have focused instead on passing federal revisions to Section 230.

“We obviously encourage any and all efforts to address Big Tech censorship, including by passing laws in the states,” said Jon Schweppe, policy director for the conservative group American Principles Project, which supports legislative efforts to compel tech companies to permit more speech. “But the courts have long interpreted Section 230 in a way that invalidates many of these efforts. While we object to that interpretation—and we made this case in our amicus brief to the 5th Circuit—this reality has led us toward focusing more on federal solutions.”

But the 5th Circuit’s unexplained 2-1 decision rattles the landscape. Suddenly, if nothing else changes, companies could face legal action for censoring any content based on the viewpoint it expresses.

Social media companies have argued that attempting to comply with the new law will both harm users’ experience of their products and be functionally impossible, given the vagueness of parts of the statute. The trade groups representing them in court have already appealed to the Supreme Court to reinstate the injunction blocking the law. Allowing the law to stand “would compel platforms to disseminate all sorts of objectionable viewpoints,” tech trade association NetChoice argued in a filing last Friday, “such as Russia’s propaganda claiming that its invasion of Ukraine is justified, ISIS propaganda claiming that extremism is warranted, neo-Nazi or KKK screeds denying or supporting the Holocaust, and encouraging children to engage in risky or unhealthy behavior like eating disorders.”

The companies additionally argue that forbidding them from removing such content would undercut their advertising-based business models: Advertisers would balk at placing ads on their sites.

In the meantime, companies are bracing for the worst as they scramble to minimize their own liability. “Covered businesses are already facing the possibility of liability,” Chris Marchese, policy counsel for NetChoice, told The Dispatch. “I’m not privy to specific changes members may make, but I know their legal and policy teams are working hard to figure out a path forward as best they can, given that the law is impossible to comply with.”

It’s a public-policy 101 sort of question: Who has the right to decide what viewpoints can be expressed on a given website? According to a once-mainstream line of conservative thought, the answer was simple: Whoever runs the thing, dummy. Crabby that the moderators of your go-to forum keep deleting your proofs demonstrating that the Illuminati control the weather? Go find one with looser content standards, or better yet, start your own!

In recent years, however, a growing contingent of Republicans has begun to sour on this way of thinking. What may have worked just fine in the Wild West days of the early web, they argue, doesn’t cut it in the social-media era, when vast portions of online discourse take place according to the regulatory whims of a few giant corporations. Behemoths like Twitter and Facebook aren’t just humble websites putting up their shingle for people to come hang out—in a sense, they control the contours of what’s being talked about online, period. The time has come, then, to regulate these giants the way we regulate phone companies—as “common carriers” of speech—and pass laws forbidding them from censoring posts based on their political content.

Such arguments had already been gaining steam among Republicans, thanks to growing anger about supposedly left-leaning censorship among both the right-wing commentariat and the rank and file. But the pitchforks really came out after the 2020 election, when President Donald Trump was kicked off Twitter in the wake of the January 6 attack on the Capitol and a sentiment took root among Republican voters that tech companies had put their thumb on the scales in favor of Joe Biden by, for instance, suppressing embarrassing stories about his son Hunter in the days leading up to the election.

Some opponents of the law have pointed to recent events like the white supremacist mass shooting in Buffalo as evidence that such regulations are likely to harm user experiences and the public interest, arguing companies could hesitate to take down content like the shooter’s livestreamed video of his attack or his alleged 180-page ideological manifesto. That seems unlikely, given that the bill contains explicit carve-outs permitting censorship of, among other things, content that “directly incites criminal activity or consists of specific threats of violence” targeted against various protected groups.

The likelier harm to user experiences consists of more quotidian objectionable content: racial slurs, targeted harassment, the promulgation of hateful and fringe ideologies. The Buffalo shooter’s manifesto may still be removable under the Texas law, but platforms would be powerless to remove an identical document stripped of its calls to violence: an academic-style screed alleging a Jewish plot to replace America’s white population with minorities.

This, of course, gets back to the basic question: Do we want a regulatory regime in which the government dictates to companies what forms and categories of speech they must accept and reject?

“One of the good things about the way the system works is that even though the government cannot and should not be able to decide what is misinformation and hate speech, the platforms are able to say, you know, we don’t want to be a venue for replacement theory,” Jeff Kosseff, a cybersecurity law professor at the U.S. Naval Academy and Section 230 expert, told The Dispatch. “I don’t think that—unless it’s constitutionally unprotected speech—the government should get involved in saying, you need to take down this … I also don’t think the government should be saying, you’ve got to keep it up.”

Andrew Egger is a former associate editor for The Dispatch.
