The Coming Wave of Disinformation

Voting in the 2020 U.S. elections will come to a close in 70 days, and disinformation experts are working around the clock to safeguard the democratic process. “One of the things you can be pretty confident of,” said Ben Nimmo, the director of investigations at Graphika, a network analysis firm, “is that somebody is going to start posting a video which allegedly shows ballot box stuffing, or it shows people carousel voting or it shows … some kind of electoral irregularity.” 

“An awful lot of the time, if you reverse search the video or the photo, you find out it happened five years ago in a different country.”

But what if such a video gets picked up by an irresponsible media outlet—or worse, the president—before Facebook’s much-improved security teams can determine its origin and stop its dissemination? What if Joe Biden’s anticipated vote-by-mail advantage never materializes, and someone claiming to be a Philadelphia postal worker tweets that she saw her co-worker throw away a bin full of ballots? 

Welcome to democracy in 2020.

“I think 2020 will be more heavily contested than 2016 was,” predicted Klon Kitchen, director of the Center for Technology Policy at The Heritage Foundation. “And I think both political candidates will be able to levy believable and legitimate complaints about foreign interference such as to call into question the outcome.”

This sentiment was seemingly confirmed earlier this month, when William Evanina, director of the U.S. National Counterintelligence and Security Center, released a rare statement detailing “covert and overt influence measures” from foreign adversaries meant to “sway U.S. voters’ preferences and perspectives, shift U.S. policies, increase discord in the United States, and undermine the American people’s confidence in our democratic process.”

China and Iran, per Evanina’s summary of intelligence community assessments, prefer President Trump not win re-election. Russia, meanwhile, is “using a range of measures to primarily denigrate former Vice President Biden” and is “seeking to boost President Trump’s candidacy on social media and Russian television.”

Experts who spoke with The Dispatch in recent weeks were hesitant to weigh in on Evanina’s conclusions directly. “Those of us who are outside of the [intelligence] community and don’t have clearances don’t really have visibility into the specifics of what they’re talking about,” Stanford Internet Observatory technical research manager Renee DiResta cautioned in an interview.

But Kitchen outlined what he saw as Russia and China’s differing strategic postures. “Russia is deliberately blatant and does not fully hide its activities, because a part of what it wants to do is demonstrate that it’s acting with impunity and put forward the idea that the Russian bear isn’t afraid to stand up to the Americans,” he said. “The Chinese strategic posture is much, much more clandestine, narrower in scope, and aware that certain portions of the U.S. government are going to know that they’re doing things, but not so blatant in their work so as to make it obvious to the public at large.”

Kitchen, DiResta, Nimmo, and Nate Persily, co-director of the Stanford Cyber Policy Center, have been researching online disinformation—and its repercussions on the democratic process—for years. Just months from November 3, they see a political ecosystem simultaneously more cognizant of these threats than it was in 2016 and more susceptible to propaganda campaigns.

“The main difference between 2016 and 2020,” Nimmo said, “is that in 2020, you have a widespread recognition that actually foreign interference is a thing and online interference is a thing. Neither of those was widely accepted in 2016.”

Persily is most worried about what happens when the polls close on Election Day. “This is a different kind of election than previous elections,” he said. “We may have several days where we don’t know the victor, and there will be a lot of attention being paid to the process of counting votes. And that provides a news vacuum into which all kinds of disinformation can have their audience.”

He noted that social media companies like Facebook, Google, and Twitter have learned plenty of lessons from 2016, when Russian intelligence officers and troll farms ran rampant on the platforms. “The low-hanging fruit has been picked,” Persily said. “But it’s not as if the adversaries just then give up.”

In fact, the Trump administration has done little to dissuade any foreign actors from trying again in 2020. The Treasury Department imposed sanctions on Russian intelligence officials “involved in cyber operations to interfere with the 2016 election” in December 2018, but Trump himself famously accepted Russian President Vladimir Putin’s denial of any wrongdoing. 

“The barriers to entry for this type of activity are so low—and the potential rewards are so high while the relative risks are so low—that we have not decisively changed the political calculus of foreign leaders who choose to use this kind of activity,” Kitchen argued. “And in fact, in the 2020 context, I think it’s an even more crowded and contested space than it was in 2016.”

Days after Trump’s victory in 2016, Facebook CEO Mark Zuckerberg dismissed the notion of his platform having played any role in the outcome, calling such accusations “pretty crazy.” Ten months later, the company’s then-chief security officer, Alex Stamos, announced Russian-backed advertisers spent about $100,000 on 3,000 Facebook ads from June 2015 to May 2017. The Trump and Hillary Clinton campaigns, by contrast, combined to spend $81 million.

“Too much was made over the ads last time around,” Persily argued. “The organic content far outpaced the ads in terms of its reach. The ads provide a lens through which we can blame the platforms for their ineptitude, but if you could win a presidential election with $100,000 in rubles of ads, then there’ve been a lot of political consultants who’ve been wasting a lot of money for a long time.”

But DiResta didn’t dismiss the effect of the Russian ads as quickly. “It’s a very small percentage of all the content on Facebook … that’s true,” she conceded. “But that’s like saying a targeted ad from, pick your regional fast food chain, because it’s only shown to a handful of people in six states is not an effective ad. They’re going to target the content to the people who are likely to be receptive to the content, who are going to then in turn go and share the content.”

Fast forward four years, and much of Persily’s proverbial low-hanging fruit has been picked. Facebook completely revamped its political advertising process, requiring proof from would-be marketers that they live in the country in which they want to advertise, and introducing a fully transparent advertising archive. It now has 35,000 people working on content moderation (“Our budget [for content review] is bigger today than the whole revenue of the company when we went public in 2012,” Zuckerberg said in February), and open-source researchers—at Graphika, the Stanford Internet Observatory, the Digital Forensic Research Lab—routinely flag ongoing disinformation campaigns. As a result, Facebook has taken down thousands of fake profiles and accounts linked to foreign adversaries like Russia and Iran over the past few years.

“We’re in a fundamentally different place than we were in 2016,” an official on the Facebook security team told The Dispatch. “We’ve learned significant lessons about how to structure our response to these types of operations, the tactics that we anticipate seeing and that we have seen in these operations, and we’ve built a program that has enabled us to take down a large number of these operations from countries all over the world, both foreign and domestic.”

Twitter has followed a similar path, in some cases going even further than Facebook, like when it banned political advertising entirely late last year. And although disinformation tactics are evolving—Facebook caught and took down dozens of fake accounts being operated by Ghanaians and Nigerians on behalf of Russians back in March—the platforms are much better prepared to combat them this time around.

“I think we can comfortably assume that the days where you could create an account on Twitter or Facebook, pretend it’s in Tennessee, and register it to a Russian phone number are gone,” Nimmo said, referencing the infamous @TEN_GOP account that was run by Russian operatives in 2016. “The platforms are able to look out for that kind of thing.”

For all these improvements, the social media companies still have a glaring blind spot when it comes to disinformation, and it’s one they are largely powerless to address: domestic actors. “If it’s Americans using their own accounts to push out various kinds of partisan claims, which may or may not be true, you don’t have so many tools in the box,” Nimmo said. “If you can prove that the account is run by the person whose name is on the box, then freedom of speech is a thing. And I don’t think any of the platforms wants to be in this position where they’re having to decide what people are allowed to say and what they’re not.”

But whether they want to or not, the platforms have, to varying degrees, begun assuming the role of arbiters of truth. Twitter now routinely flags President Trump’s tweets for “glorifying violence” or “making misleading health claims that could potentially dissuade people from participation in voting.” Facebook has historically taken a more laissez-faire approach to political content moderation, but has fought coronavirus misinformation aggressively, taking down 7 million posts about COVID-19 between April and June and flagging 98 million more. Just last week, it removed thousands of groups, pages, and ads related to the QAnon conspiracy theory and the Antifa anarchist movement.

Regarding the election, the Facebook security official pointed to the company’s community standards prohibiting posts that interfere with the voting process. “We do, in fact, remove misinformation around the process of voting, the timing of voting, place of voting, designed to get people to essentially not vote or not vote correctly,” he said. But a mid-July investigation by ProPublica and First Draft found that 22 of the top 50 most popular posts mentioning mail-in voting contained “false or substantially misleading claims.” Facebook deleted many of them after they were flagged for the platform by ProPublica.

Americans have grown to love accusing one another of being “Russian bots” on social media, but this misinformation was coming from inside the house. One of the videos ProPublica caught featured American actor and comedian Terrence K. Williams saying “if you mail in your vote, your vote will be in Barack Obama’s fireplace.” Another was from Navy veteran Peggy Hubbard, who ran in the Illinois Republican primary for U.S. Senate. “The only way you will be able to vote in the upcoming election in November is by mail only,” she said, incorrectly.

DiResta pointed out that our foes may not need to go through the effort of creating bot networks and masquerading as Americans—Americans are doing their work for them. “When you’re creating your own fake accounts, there’s a vulnerability there. If the platform finds one, oftentimes it can find a lot of the rest of them,” she said. “You have to decide, is it worth the effort of making your own fake accounts when—in a highly polarized society like the U.S. at the moment—you can simply amplify a real American who happens to have that same opinion?”

About two months out from the election, these adversaries’ risk-reward ratios may nudge them toward the latter. “They’re going to be caught in this place where they need to keep on building audience,” Nimmo said, “but they need to avoid getting caught. Because if they get caught now, they’re not going to have time to reconstitute before the election. Three months is just too short a time to build up an effective operation. Remember, the original IRA [Russia’s Internet Research Agency] gave itself two and a half years to interfere in the 2016 election. Influence operations are not as easy as people think.”

But a largely amplification-based approach could still be plenty effective. “There’s an embarrassment of riches for anybody who wants to amplify content that is polarizing,” Persily said. “There is very little that is unique about foreign influence operations right now. They repeat the narratives that are already native to our polarized political environment. They can do some more amplification of those narratives.”

This kind of interference is more difficult for the platforms to suss out. “If it’s a real American opinion and it gets retweeted by a whole lot of accounts, to what extent are the platforms looking at the dynamics around the amplification piece?” DiResta said. “Once you make something hit critical mass—meaning you get it into the feeds of enough people—there are going to be organic groups of people who similarly hold that opinion and then in turn, retweet the tweet themselves … How do you quantify coordinated activism in such a way as to come up with some sort of lines for what kinds of coordination are acceptable and what kinds of coordination are manipulative?”

Some foreign amplification is less subtle. Just look at the feed of RT (formerly Russia Today), which Twitter officially labeled as Russian state-affiliated media a few weeks ago. “Clinton urges Biden to not concede ‘UNDER ANY CIRCUMSTANCES,’ calls for ‘massive legal op’ in case Trump sees narrow win,” read one post on Monday. “Shot SEVEN times in the back by a cop… Is #JacobBlake the next George Floyd?” asked another. “Russian media didn’t create the widespread distrust in the American media establishment that fuels conspiracy theories like QAnon,” a third tweet said. “The likes of CNN did.”

“They’re very good at finding things that are real, and then just using them, reappropriating them,” DiResta continued, referring to the Russians. “It’s [about] creating the perception that this is a large opinion, not a majority opinion, but an opinion that a lot of people have.”

China, meanwhile, tends to be more direct in its approach, criticizing its opponents rather than sowing discord through psychological operations. Nimmo’s team at Graphika recently unearthed a series of barely viewed English-language videos from the pro-Chinese “Spamouflage Dragon” political network “that attacked American policy and the administration of U.S. President Donald Trump.” The Chinese Communist Party-operated People’s Daily newspaper has railed against Trump’s coronavirus response for months, with one op-ed arguing it’s “easy to understand Dr. Anthony Fauci’s frustration” while another claimed a “buck-passing game exposes U.S. politicians’ lack of credibility and dereliction of duty.”

Seventy days from the election, experts have come to a rough consensus as to the best path forward. “Flood the zone of disinformation with real information,” as Persily put it. He’s most concerned that voters will be misled about how to properly submit their mail-in ballots, such as whether they need to sign the outside of the envelope.

Facebook says it has a plan to combat exactly that. “An information vacuum creates opportunities for manipulation,” the security official said, alluding to the company’s recently launched Voter Information Center. “We have our security efforts that are designed to find and stop bad actors, but those are most effective when people can access authentic and accurate information, whether it’s about a global pandemic or about voting. … Getting accurate information to voters ends up being a really useful vaccine against these types of influence operations, or just regular misinformation.”

Sens. Marco Rubio and Mark Warner—the chairman and ranking member on the Senate Intelligence Committee, respectively—released a joint press release on the “serious and ongoing threats to our election” following Evanina’s statement. “Everyone — from the voting public, local officials, and members of Congress — needs to be aware of these threats,” they wrote. “One of the best ways to combat such efforts is to share with the voting public as much information about foreign threats to our elections as possible.”

Nimmo agreed awareness is key. But using Graphika’s recent work on Chinese threats as an example, he highlighted the importance of perspective as well. “Yes, actually there is a Chinese operation that we call Spamouflage Dragon. And yes, it is large scale and spammy, and so we need to keep an eye out for that kind of activity,” he said. “But it’s also the sense of perspective, which says as far as we can tell, none of that content has ever actually gained significant engagement from real users. So it’s there, it’s a symptom of something, but it’s certainly not sweeping the internet.”

Editor’s note: The Dispatch is a part of the Facebook fact-checking program. Articles that we write that debunk falsehoods are used to warn Facebook users of potential misinformation.

Photo illustration by Rafael Henrique/SOPA Images/LightRocket/Getty Images.
