
How Generative AI Content Could Influence the U.S. Election

Expect efforts to intensify near Election Day and in the aftermath.


Over the past year, a near-constant drumbeat of articles has warned that generative AI content could overwhelm the information space, unleashing a “tsunami” of disinformation powerful enough to undermine democracies around the world and sway the results of elections being held in a record number of countries. In a survey released by the World Economic Forum earlier this year, key stakeholders ranked misinformation and disinformation as the greatest risk facing the world over the next two years—ahead of extreme weather events, societal polarization, and lack of economic opportunity, among others. (Interstate armed conflict ranked only fifth.)

More than halfway through the year—and with the U.S. election looming in November—the predicted AI onslaught has yet to arrive, notwithstanding several isolated incidents. The reasons for the limited impact vary with the specifics of each contest: in nations with uncompetitive races, fabricated content has little to sway; in many polarized democracies, such falsehoods are largely unnecessary because voters’ views are already entrenched; and some nations have adopted heightened defenses to address the challenge.

The experiences of other countries are informative and in some sense reassuring as the U.S. election approaches. Still, the fact that we have not seen generative AI outputs meaningfully affect elections elsewhere does not mean that concerns about their potential to do so should be ignored.

Generative AI has already shown up in this election cycle.

Since President Joe Biden’s departure from the race in late July, the presidential contest has become exceedingly competitive, raising the potential for false and decontextualized content, including AI-generated images, video, and audio, to spread rapidly. There is little evidence that misinformation persuades voters to change their views; this type of content is more likely to reinforce existing partisan beliefs.

While research on the effects of AI-generated outputs is sparse, recent real-world examples point to the limited ability of this type of content to gain meaningful traction with voters. Thus far in the campaign, uses of generative AI have ranged from the bizarre—former President Donald Trump and Vice President Kamala Harris enjoying a romantic walk on the beach, for example—to the semi-convincing. Cheapfakes and AI deepfakes have also been deployed to manufacture the appearance of support from high-profile public figures, such as pop icon Taylor Swift. Yet many of these fakes have been rapidly debunked by journalists and civil society groups on high alert.

Generative AI will also continue to factor into the election interference playbooks of hostile nations, including Iran and Russia. Although these countries have tried to meddle in many elections throughout the year, the contest between Trump and Harris is perhaps the most pivotal in shaping the future trajectory of U.S. foreign policy toward Ukraine and the Middle East. Russian and Iranian actors are highly motivated to interfere and foment discord across the electorate, and according to intelligence reports, they are already actively engaged. In addition to sowing chaos broadly, Russia has sought to undermine Harris’ candidacy and exacerbate partisan divisions, relying on influencers and private firms to avoid attribution. Iran has successfully hacked the Trump campaign and leveraged a network of online accounts to foment discord, with a particular focus on the Israel-Gaza conflict. These efforts to undermine the candidacies of both Harris and Trump highlight the cross-partisan reach of foreign influence campaigns.

Unsurprisingly, these efforts have begun to leverage generative AI tools for tasks such as translation and the creation of fake user engagement. Over the past year, AI developers have identified and worked to disrupt several uses of their tools for influence operations. Importantly, while foreign actors have used generative AI tools in their efforts, those efforts appear to have had limited reach thus far.

Where could generative AI have a real impact?

The contested nature of the presidential race means such efforts will undoubtedly continue, but they will likely remain discoverable, and their reach and ability to shape election outcomes will be minimal. Instead, the most consequential uses of generative AI content could occur in highly targeted scenarios just prior to the election or in a contentious post-election environment, where experience has demonstrated that purported “evidence” of malfeasance need not be true to mobilize a small subset of believers to act.

Because U.S. elections are managed at the state and county levels, local officials in swing precincts or counties are catapulted into the national spotlight every four years. Since these officials are not well known to the public, targeted, personalized AI-generated content can cause significant harm. Before the election, such fabricated content could take the form of a last-minute phone call from someone claiming to be an election worker, alerting voters to a supposed issue at their polling place.


After the election, it could become harassment of election officials or fabricated “evidence” of foul play. Because this type of effort is localized and personalized, and because its targets are not regularly in the public eye, it could take longer to discover, prove difficult to debunk or prevent with existing tools and guardrails, and damage reputations. Nor does this tailored approach require domestic actors: in the lead-up to the 2020 election, Iranian actors posed as members of the Proud Boys and sent threatening emails to Democratic voters in select states demanding they vote for Donald Trump. Election officials have worked tirelessly to brace for this possibility, and they are right to remain on guard.

What has received less attention thus far is the scenario that begins the day after Election Day. During the period between Election Day and the inauguration, election results could remain unknown or, as we saw in 2020, be disregarded. Lawsuits could feature prominently. Against this uncertain political backdrop, prior experience serves as a guide for what could transpire.

In 2020, decontextualized and doctored videos and images flooded the internet after the election, supplying “proof” of a nefarious plot to steal the election for those already primed to believe it. Despite numerous failed legal challenges and pushback against this purported evidence, threats of violence dogged the election workers targeted as part of the post-election push to discredit the results. The end result was the violence at the Capitol on January 6, 2021. Even months after the presidential transition, new “evidence” of supposed election rigging continued to surface, and the rapid debunking of decontextualized claims and misleading interpretations of data did little to stem the flow of falsehoods aimed at those already inclined to accept them as true.

Efforts to cast doubt on the integrity of the 2020 election with decontextualized or false content ran rampant even without AI-generated images, video, or audio in the fray. The addition of a new class of wholly fabricated “evidence” only heightens those concerns. In this context, debunking—which occurred regularly in 2020 and has occurred repeatedly throughout the presidential campaign so far—arrives too little, too late, thanks to a range of well-documented cognitive processes, including partisan motivated reasoning, confirmation bias, and the “winner-loser gap,” in which the losers of an electoral contest are less likely to be satisfied with democracy. In this scenario, the goal of misleading or outright fabricated information is not to change voters’ minds but to mobilize a subset of the most ardent supporters.

With a little more than two months of campaigning left, we are likely to see a continual flow of AI-generated content online. Most of it will be downright comical, but some will be believable enough to cause real concern. While it is important to address such cases, the more destabilizing potential uses of AI content—those that could mobilize (or demobilize) a small but targeted subset of actors rather than persuade the electorate one way or another—loom largest close to and in the aftermath of Election Day.

Given this potential, AI developers in particular must prepare for the exploitation of their tools and recognize that the guardrails now in place (prohibiting, for example, image generation of known political figures) may be insufficient to meet the challenge. Election officials must continue to educate voters on where and how to find authoritative information about voting and, where possible, provide a clear and transparent window into every facet of the vote tabulation process. Not losing sight of the role generative AI content could play just before and during the post-election period—and preparing for its potential use—will be critical to further stymieing the portended disinformation “tsunami” in a historic election year.

Valerie Wirtschafter is a fellow in Foreign Policy and the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution. Her research falls into two thematic areas: democratic resilience and democratic erosion; and artificial intelligence, technology, and the information space. Her research has been featured in the New York Times, Washington Post, Wall Street Journal, and elsewhere. She received her Ph.D. in political science from the University of California, Los Angeles.
