
The battle to deep-six the deepfake threat to elections levels up

Officials are racing to outflank misinformation agents and their metastasizing use of artificial intelligence.

Deepfakes could become a regular part of election advertising unless legislation passes to regulate fake video and audio. / Photo credit: ArtemisDiana

A reporter’s voice intones that Joe Biden has won the 2024 presidential election. The video hard-cuts to military operations, sirens and a Chinese invasion of Taiwan before depicting boarded-up storefronts, military patrols in the streets and waves of immigrants flooding the U.S.’s southern border. The connection between a second Biden term and global chaos couldn’t be more clearly delineated.

This April ad from the Republican National Committee is among the most prominent recent examples of artificial intelligence being used to influence the political process. And with generative AI technology far outpacing the legislation meant to regulate it, there are growing concerns that voters could be misled about candidates – and about the electoral process overall.

“We’re talking about deepfakes that are designed to fool people,” Robert Weissman, president of the nonprofit consumer advocacy organization Public Citizen, told City & State. “Permitting deepfakes will undermine political discourse.”

Weissman isn’t the only player focused on the dangers AI poses to the electoral process: Even as the Federal Election Commission and lawmakers debate a path forward, state and local officials are trying to combat misinformation targeting both candidates and election operations. 

Generative AI creates its own center stage

Misinformation efforts targeting political campaigns and election operations aren’t new. However, with the evolution and proliferation of generative AI – artificial intelligence capable of generating text, images or other media using generative models – parsing the waves of misinformation and debunking it all can seem like a herculean task for voters and election officials alike.

Sam Chen, a Republican political strategist, told City & State that advancements in deepfake technology – in which bad actors use AI to turn existing recordings into realistic but false audio and video – mean that voters will continue to have a tough time discerning fact from fiction as they scroll past contrasting content and stories.

“We’ve always used every tool in our arsenal” to create a narrative around a candidate, Chen, who has also given a series of lectures on political and media narratives at colleges and universities around the country, said. “You can kind of see this in finding the worst photo you can find of a person. We can make it look grainy or black-and-white and we can do Photoshop. AI is the newest entry into that.”

President Joe Biden holds a Cabinet meeting at the White House. / Photo credit: Kevin Dietsch/Getty Images

AI and deepfake technology are already having an impact, muddying the media waters as unverified information channels spread AI-generated photos and videos with no confirmation of whether the content is real or fake.

Both nonprofit and public organizations are attempting to combat misinformation, particularly around elections. One such entity is the News Literacy Project, a nonpartisan education nonprofit seeking to build a national movement to advance the practice of news literacy.

Peter Adams, senior vice president of research and design at the News Literacy Project, pointed to the speed and scale at which false information can spread, adding that the loudest voices in the room are often the ones that get heard.

“Some of the social and digital tools that we use are really optimized for engagement, so the most outrageous opinionated stuff is vying for our attention along with misinformation and disinformation when we’re on social media,” Adams told City & State. “We need to be really deliberate and not just sort of hand over our media diet to the algorithms … there are a lot of murky examples and there’s more hyper-partisan stuff masquerading as news than ever before.”

Given the way many social media platforms recommend content to users based on their interests, platforms like X, formerly known as Twitter, can send people down rabbit holes that confirm their existing biases – creating even more cause for concern among advocates and researchers.

“(We) might see the use of large language models to spread false claims about the election process and potentially game recommendation algorithms on social media platforms to kind of create fake news websites – including local news websites or spoof election office websites,” Mekela Panditharatne, who serves as counsel for the Brennan Center’s Democracy Program, told City & State, adding that there are “changes in (AI’s) speed, scale and sophistication that, when taken in the aggregate, could produce significant changes in the landscape.”

Falsehoods that are repeated – such as those related to former President Donald Trump’s election fraud claims in 2020 – tend to stick regardless of their accuracy. This phenomenon, Adams said, is known as the illusory truth effect. 

An AI-generated image of Donald Trump getting arrested, posted by Bellingcat founder Eliot Higgins. / Photo credit: Screenshot / Eliot Higgins / X

“When you see a photo of a political candidate in a compromising position, or you see a photo of Trump in an orange jumpsuit, it sticks in a way that I think text doesn’t,” Adams said. “If you see a false claim or a false image repeated over and over again, some part of it will stick … The rise of synthetic visuals and synthetic media is deeply concerning and really upends our notion of what counts as evidence.”

Debunking repeated falsehoods about a particular candidate is the responsibility of the campaign. But as incumbents and challengers take jabs at each other and craft their own narratives, election officials are becoming increasingly concerned about the falsehoods being spread about election operations – everything from polling place hours and locations and the date of Election Day to the legitimacy of mail-in ballots and drop boxes.

Jake Dilemani, a Democratic consultant with Mercury’s New York office, told City & State that campaigns will always be “behind the eight ball” when it comes to fact-checking social media in real time. 

“It’s no good if an ad that is deliberately deceptive goes out and no one knows that’s the case until two weeks later,” Dilemani said. “Two weeks in the campaign cycle is a lifetime.” 

Secretary of the Commonwealth Al Schmidt, who endured the fallout from election falsehoods while serving as Philadelphia’s Republican city commissioner in 2020, stressed that sharing information must be a primary function of election officials, not a support function.

“Most people aren’t necessarily following all this closely. Most, at least in Pennsylvania, are voting in person on new voting systems, or they’re voting by mail, which is only a couple of years old in Pennsylvania. With all these changes, it’s no wonder that questions come along,” Schmidt told City & State. “But it shouldn’t be a surprise that bad-faith actors are seeking to exploit people having those questions to mislead them and undermine confidence in results when they lose.”


Policing AI

While policymakers may not be looking to ban AI or deepfake technology altogether, they are striving to rein it in at a time when the public is susceptible to an increasing amount of misinformation. 

“With our biases, plus our short memories as voters, things are just going to get worse. This is an open season for people who use AI – they don’t even need deepfakes. You can use Photoshop and fake news stories, it’s just that the more AI you use, the more convincing it becomes,” Chen said. “The great challenge is going to be: To what degree do we have the legal authority to regulate it?”

Regulatory talks at the federal level are already underway, but nothing is certain as the Federal Election Commission weighs both its authority and a realistic path to policing deepfake technology. 

Public Citizen has called on the two major political parties and their presidential candidates to pledge not to use generative AI or deepfake technology to mislead or defraud voters during the 2024 election cycle. The group also petitioned the FEC earlier this year to issue a rule clarifying the law against “fraudulent misrepresentation” and how it applies to AI.

Weissman said the actors spreading misinformation online seek not only to confuse voters about particular topics and candidates but also to instill an overall sense of distrust in the election process.

“The prospect of widespread deepfakes threatens to – very consequentially – undermine political discourse and speech in two ways: by tricking people into thinking things happened that haven’t happened, but also by making it possible for candidates or other political figures to deny things that actually did happen,” Weissman said. “The impact of those two factors combined is to sow political mistrust, diminish actual political debate and leave people kind of helpless against competing claims – where all you can do is revert to your political tribe.”

The FEC’s unanimous procedural vote in August advanced Public Citizen’s petition, and a 60-day public comment window opened later that month. Public Citizen proposed giving candidates the option to prominently disclose the use of AI rather than requiring them to avoid the technology in campaign ads altogether.

Weissman said he expects the FEC to decide by the end of October whether to proceed with a rulemaking process. From there, the commission would propose a rule and eventually vote on it.

Chen, who supports a ban on the use of deepfake technology in campaign ads, said the FEC’s challenge is to regulate the technology wisely without infringing upon freedom of speech.

“The FEC gives campaigns a lot of leeway … but they’re not allowed to outright lie about something. You can make the argument that something like a deepfake would be an outright lie,” he said. “It’d be tricky legally to ban it outright. But I certainly think (an interpretation) along those lines would be within the sphere of (the FEC’s) current regulations.” 

The FEC’s authority is limited, however. Even if the commission were to clarify its rules and make a firm decision on the use of AI in campaign ads, that would do nothing to stop outside groups such as political action committees from imitating a candidate – or to require them to disclose the use of AI in their own ads.

For the blatant misinformation falling outside the FEC’s purview, state and local officials are attempting both to connect individuals with trusted resources and to debunk the misinformation already being spread about elections and voting methods.


The answer to AI

Outside the legal sphere, local and state officials are using existing tools to combat bad actors and their growing digital toolbox. Over the summer, Gov. Josh Shapiro signed an executive order creating an AI governing board. The state’s first generative AI working group will help state agencies find ways to use AI to improve government services while also establishing guardrails for its use within the public sector.

Panditharatne said that as bad actors improve their use of AI, so must government entities. “Developing smart and scalable moderation policies for AI-generated content and this new landscape will be critical,” she said.

Michael Sage, the chief information officer for the County Commissioners Association of Pennsylvania, agreed.

“If (the AI board) can produce materials that are reusable guidance for the entire commonwealth, that’s going to be invaluable because everyone’s facing the struggle,” Sage told City & State. “How do we use this? And how don’t we use it?”

Secretary of the Commonwealth Al Schmidt speaks on permitting and licensing processes at a Harrisburg event. / Photo credit: Commonwealth Media Services

Schmidt and Panditharatne also promoted the concept of “pre-bunking” – identifying the strategies and trends that misinformation machines follow and getting accurate information out to those channels ahead of time.

“It’s helpful because, to some extent, election officials should know some subset of what false narratives are likely to gain traction in the next election and forthcoming elections. They can put out materials that clarify important details about election security and about the election process,” Panditharatne said. 

Schmidt shared similar thoughts, noting that secretaries of state have clear lines of communication with one another and with federal partners. Looking toward the commonwealth’s elections in 2023 and beyond, he said, the onus falls on election officials at every level to use their established networks to inform the public of what new information – true or false – is popping up online.

“It’s a matter of sharing factual information, doing so repeatedly and getting other voices to amplify it as best you can,” Schmidt said. “You can’t necessarily shut down people from saying all sorts of things on social media. But it’s important for us to tell the truth. I think the truth is the only antidote to the lies out there.”
