Experts say Americans concerned about the role of artificial intelligence in elections may not know its full scope

Polls show Americans worry about artificial intelligence’s impact on elections, but the public likely doesn’t understand the full extent of its influence on what they experience every day, according to a scientist who studies the technology.

There are obvious examples of AI-generated disinformation, such as a counterfeit video of President Joe Biden, fake videos about voting irregularities, and memes aimed at evoking emotions or spreading propaganda. AI is also regularly used to generate legitimate campaign messages, such as phone calls and text messages.

But behind these public examples are the “unseen” tasks that artificial intelligence performs during elections, said Cody Buntain, an assistant professor in the College of Information at the University of Maryland. The most significant of these, he said, is determining what appears in people’s social media feeds.

“The systems that determine what content is presented to you are artificial intelligence in action,” Buntain said. “From your TikTok For You page to your X feed to your Facebook feed, this is all powered by artificial intelligence.”

Buntain is currently teaching a course on how AI is changing politics, and he said one of the biggest areas where AI has made an impact is in things we don’t generally notice, like the “information diet.”

In a Pew Research Center survey of nearly 10,000 Americans across the political spectrum, released in September, concern about the role of artificial intelligence in the presidential election was shared almost equally by Democrats and Republicans. The survey found that 41% of Republicans and 39% of Democrats believe artificial intelligence is being used “primarily for malicious purposes” during campaigns. Similarly, 56% of Republicans and 58% of Democrats are “very concerned” about the impact of artificial intelligence on elections.

A separate Pew study, also released in September, found that many Americans cite social media as their primary source of news.

While the general sentiment about AI’s involvement in elections is negative, most Americans likely don’t understand the full extent to which campaigns and outside forces use these technologies, Buntain said. Nor do they understand how their social media feeds are designed to play into their existing views and biases.

Recommendation algorithms are designed to promote anger-inducing and emotional content across feeds, potentially contributing to the creation of information silos and echo chambers.
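As a rough illustration of what such ranking can look like, the hypothetical sketch below scores posts by their predicted engagement and weights strong emotional reactions more heavily. The weights, data structures, and example posts are invented for illustration and do not reflect any platform’s actual system.

```python
# Hypothetical sketch of engagement-weighted feed ranking.
# Weights and example posts are invented; real platforms use far
# more complex machine-learned models.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_likes: float   # estimated probability of a like
    predicted_shares: float  # estimated probability of a share
    predicted_anger: float   # estimated probability of an angry reaction

def engagement_score(post: Post) -> float:
    # Emotional reactions (shares, anger) are weighted more heavily
    # than passive likes, so charged content tends to rank higher.
    return (1.0 * post.predicted_likes
            + 3.0 * post.predicted_shares
            + 4.0 * post.predicted_anger)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort candidate posts so the highest-scoring ones appear first.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Local park cleanup this weekend", 0.30, 0.05, 0.01),
    Post("Outrageous claim about the election", 0.20, 0.40, 0.60),
])
for post in feed:
    print(round(engagement_score(post), 2), post.text)
```

Under these made-up weights, the emotionally charged post outranks the neutral one even though it is predicted to earn fewer likes, which is the dynamic Buntain describes.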

Echo chambers aren’t inherently bad – they can provide a sense of safety and community, Buntain said. And while algorithmic ranking shapes social media feeds, people also tend to self-sort into platforms they identify with. More conservatives have flocked to X since Elon Musk purchased the platform, for example, while more liberal users are spending time on TikTok.

“Generally speaking, echo chambers in the offline world are much more resonant than online echo chambers,” he said.

Campaign advertising is another system that has relied on “invisible” artificial intelligence for more than a decade, Buntain said. While AI may seem to have become popular only in the past few years, especially since the release of ChatGPT in 2022, this kind of information retrieval, categorization, and targeted advertising has long been a political campaign tool.

The 2012 Obama for America campaign used data, technology, and analytics to better reach American television viewers. That type of information retrieval, categorization, and targeted advertising now forms the basis of many artificial intelligence systems, and the strategies the Obama campaign used were refined and expanded during the 2016 and 2020 elections.

Today’s artificial intelligence algorithms can infer information about you that goes beyond general demographics like age and gender, taking into account your unique interests and affiliations. Campaigns then use this information to target advertising across almost all of your online spaces.
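To make that concrete, here is a hypothetical sketch of interest-based audience matching. The user profile, audience definitions, and matching rule are invented for illustration and are far simpler than the models real campaigns and ad platforms use.

```python
# Hypothetical sketch of interest-based ad targeting.
# Profiles and audience rules are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    age: int
    gender: str
    interests: set[str] = field(default_factory=set)  # inferred from online activity

@dataclass
class AdAudience:
    name: str
    targeted_interests: set[str]

    def matches(self, user: UserProfile) -> bool:
        # A user joins the audience if their inferred interests
        # overlap with what the campaign is targeting.
        return bool(self.targeted_interests & user.interests)

voter = UserProfile(age=34, gender="F",
                    interests={"hiking", "local schools", "gun rights"})
audiences = [
    AdAudience("education-focused parents", {"local schools", "PTA"}),
    AdAudience("second-amendment supporters", {"gun rights", "hunting"}),
]
for audience in audiences:
    if audience.matches(voter):
        print(f"Show '{audience.name}' ad to this user")
```

In this toy example, a single voter ends up in multiple ad audiences based on inferred interests rather than age or gender alone, which is how targeting follows people across their online spaces.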

Beyond these “invisible” AI tasks, Buntain focused on the potential harms that participants in the Pew study were likely concerned about. People often worry about inequality and disinformation perpetuated by artificial intelligence. They are also concerned about whether they can trust the information provided by AI systems such as chatbots, and many probably wonder whether they are connecting with a real person or a bot throughout the campaign cycle.

People are rightly concerned that AI strategies and systems will play a role in the election, but Buntain worries about the ways AI could be used in the coming days, especially if the race is very close.

“AI tools will allow people to very quickly create content that will make things worse,” he said. “Five years ago, you could still create disinformation content, but it would take longer and be much more expensive.”

If you’re not a technologist, much of artificial intelligence will likely baffle you and heighten the concerns about society you already had, Buntain said.

“Is this all just some chatbot behind the scenes trying to get us to donate or make us angry?” Buntain said. “I think the concern about, you know, whether it’s an authentic actor is a concern that AI really amplifies, but it’s a concern that’s certainly been around since 2016.”

Buntain hopes that over time, society’s perception of artificial intelligence will change. He believes that concerns about it, especially those related to its role in elections, stem from broader social problems such as the economy, a sense of security, and trust in information.

“I think being in an increasingly online but at the same time isolated world makes us a little bit more ripe for… a negative attitude that these new technologies probably won’t help us as much as we thought,” he said.
