This year’s presidential election will be the first since generative AI — a form of AI that can create new content, including images, audio, and video — became widely available, raising concerns that millions of voters could be fooled by a flood of political deepfakes.
While Congress has done little to address the problem, states are taking decisive action — though questions remain about how effective any new measures will be at combating AI-generated disinformation.
“I think we’ve reached a point where we really need to be mindful of bad actors using AI to spread disinformation about elections,” Pennsylvania Secretary of State Al Schmidt told the Capital-Star.
He added that while AI may have potential benefits for voter education in the future, “I saw in 2020 how easily lies spread simply with a tweet, an email, or a Facebook post. AI has the potential to be much more persuasive when it comes to misleading people. And that’s my real concern.”
Last year, an AI-generated audio recording of a conversation between a liberal Slovak politician and a journalist, in which they discussed how to rig the country’s upcoming elections, served as a warning to democracies around the world.
In the United States, the urgency of the AI threat was highlighted in February when, days before the New Hampshire primary, thousands of voters in that state received robocalls featuring an AI-generated voice impersonating President Joe Biden, urging them not to vote. A Democratic operative working for a rival candidate later admitted to commissioning the calls.
In response, the Federal Communications Commission issued a ruling restricting robocalls that contain AI-generated voices.
Some conservative groups even appear to be using AI tools to facilitate mass challenges to voter registrations — raising concerns that the technology could amplify existing voter suppression efforts.
“Instead of voters seeking out trusted sources of election information, including their state or county board of elections, AI-generated content can capture voters’ attention,” said Megan Bellamy, vice president of law and policy at the Voting Rights Lab, an advocacy group that tracks state election legislation. “And that can lead to chaos and confusion before and after Election Day.”
Concerns about disinformation
The threat from artificial intelligence comes at a time when democracy advocates are already deeply concerned about the potential for “ordinary” online disinformation to confuse voters, and when former President Donald Trump’s allies appear to be succeeding in their fight against efforts to curb disinformation.
But states are responding to the AI threat. Since the beginning of last year, 101 bills have been introduced to address AI and election disinformation, according to a March 26 analysis by the Voting Rights Lab.
Pennsylvania state Rep. Doyle Heffley (R-Carbon) circulated a memo on March 12 asking colleagues to co-sponsor a bill that would prohibit the use of artificially generated voices for political campaign purposes and establish penalties for those who engage in such practices.
He told the Capital-Star that his bill does not seek to ban robocalls by campaigns, but rather to prohibit the use of artificial intelligence to trick voters into believing they are having personalized conversations with candidates.
“This is a very new and emerging technology,” Heffley added. “So I think we need to set boundaries on what is ethical and what is not.”
A bill from state Rep. Chris Pielli (D-Chester) that would require disclosure of AI-generated content passed the House Consumer Protection, Technology and Utilities Committee last week by a 21-4 vote.
“This really is a bipartisan issue, or should be seen as a bipartisan issue,” Pielli told the Capital-Star. “I mean, there’s nothing more sacred than keeping our elections fair and free and not being manipulated. And, you know, with just three seconds of your voice, today’s AI technology can make you deliver a political speech that you never delivered.”
Heffley said he fears it will be difficult to get AI legislation passed this session given the divided legislature. He added that he is ready to work with anyone on the issue.
Pielli was a little more sanguine. “This is a threat. This is a clear and present danger to our republic, to our democracy, to our elections,” he said. “And I think both sides will be able to see that and I hope that we will come together, as we have in the past, to confront this threat and protect our citizens.”
Pennsylvania can take cues from other states.
On March 27, Oregon became the latest state — after Wisconsin, New Mexico, Indiana, and Utah — to pass a law addressing AI-generated election disinformation. Florida and Idaho lawmakers passed their own measures, which are now on the desks of their governors.
Meanwhile, Arizona, Georgia, Iowa and Hawaii have each passed at least one bill — two, in Arizona’s case — in one chamber.
As that list of states shows, red, blue and purple states alike have devoted attention to the issue.
States called to action
Meanwhile, a new report on how to combat the threat artificial intelligence poses to elections, based on advice from four Democratic secretaries of state, was released March 25 by the NewDEAL Forum, a progressive advocacy group.
“(G)enerative AI has the potential to dramatically increase the spread of election misinformation and disinformation and create confusion among voters,” the report warns. “For example, ‘deepfakes’ (AI-generated images, voices, or videos) could be used to portray candidates saying or doing things that never happened.”
The NewDEAL Forum report calls on states to take a number of steps to respond to this threat, including requiring certain types of AI-generated election materials to be clearly labeled; conducting role-playing exercises to help anticipate problems AI could cause; creating rapid-response systems to communicate with voters and media outlets to counter AI-generated disinformation; and educating the public in advance.
Secretaries of State Steve Simon of Minnesota, Jocelyn Benson of Michigan, Maggie Toulouse Oliver of New Mexico and Adrian Fontes of Arizona contributed to the report. All four are actively working to prepare their states for the threat.
Gaps remain
Despite intense scrutiny from lawmakers, officials and outside experts, several of the measures examined in the Voting Rights Lab’s analysis appear to have weaknesses or gaps that may raise questions about their ability to effectively protect voters from AI.
Most of the bills require creators to add a disclaimer to any AI-generated content, as recommended in the NewDEAL Forum report.
But a new Wisconsin law, for example, requires a disclaimer only for campaign-generated content, meaning that false content created by outside groups but intended to influence the election — a not-so-unlikely scenario — would not be covered by the disclaimer requirement.
Additionally, the measure is restricted to content created by generative AI, although experts say other types of synthetic content that do not use AI, such as Photoshop edits and CGI — sometimes referred to as “cheap fakes” — can be just as effective at deceiving viewers or listeners and can be more easily produced.
For this reason, the NewDEAL Forum report recommends that state laws cover all synthetic content, not just that which uses artificial intelligence.
The laws in Wisconsin, Utah and Indiana also do not provide for any criminal penalties — violations are punishable by a $1,000 fine — raising questions about whether they will serve as a deterrent.
The Arizona and Florida bills include criminal penalties. But the two Arizona bills deal exclusively with digital impersonation of a candidate, meaning many other forms of AI-generated fraud — such as impersonating a news anchor reporting on a story — would remain legal.
One of the Arizona bills, like New Mexico’s measure, would apply only in the 90 days before an election, even though AI-generated content released earlier could still influence the outcome of the vote.
Experts say the shortcomings exist largely because the threat is so new and states do not yet have a clear picture of exactly what form it will take.
“Legislators are trying to figure out the best approach and are working with examples they’ve seen,” Bellamy said, pointing to the Slovak audio recording and the Biden robocall.
“They’re just not sure where it’s coming from, but they feel the need to do something.”
“I think we’ll see an evolution of solutions,” Bellamy added. “The danger is that AI-generated content and what it can do will probably evolve at the same time. So hopefully we can keep up.”
Schmidt noted that the Pennsylvania Department of State has a page on its website focused on answering voters’ questions about the subject, but said officials have a duty to be proactive.
“I don’t feel like there are millions of people in Pennsylvania who wake up every day to check the State Department website,” Schmidt said. “It’s important that we don’t stay silent. It’s important that we rely on others who are acting in good faith and want to do their part to strengthen our democracy by encouraging participation and education.”