The dangers of AI in the 2024 elections
Aug 18, 2023

"AI literacy," or the ability to see through artificial intelligence-generated misinformation, will be important for U.S. voters, says Susan Gonzales, CEO of AIandYou.

Joe Biden has said a lot during his tenure as president, but some of the words now circulating in his voice never came from him. Fake digital media like this is already making the rounds as we creep toward the 2024 national elections.

These efforts to manipulate voters with the help of artificial intelligence and other tech tools are being crafted by activists, propagandists and political campaigns.

Marketplace’s Lily Jamali spoke with Susan Gonzales, CEO of the nonprofit group AIandYou, about what the nation’s first “AI election” could look like.

The following is an edited transcript of their conversation.

Susan Gonzales: It’s going to be a campaign video created with AI to make voters believe something about a candidate that is not true. It’s going to be a phone call that sounds like a candidate, but it’s actually AI. And it’s going to be a personalized fundraising letter created by AI. But the most prevalent tool for misinformation in this election is the deepfake. Deepfakes are hyperrealistic but entirely fake pieces of content, built on misinformation, that can spread widely. In the past, creating a deepfake would require someone in technology — let’s say someone who writes code — but not anymore. Deepfakes today can be created by someone from a laptop in their living room.

Lily Jamali: Sure. At the same time, though, we have been talking about deepfakes for years now. There is so much material out there aimed at helping people get better at spotting them, the classic example being, you know, count the fingers on the hand in the photo. Aren’t we all getting better at this, at this point?

Gonzales: The biggest problem with deepfakes, and the most troubling aspect of it, is that there are no guardrails to protect voters. Today, there are no rules, regulations or consequences around creating false information during a campaign, and nothing protecting voters against false news, disinformation and false narratives. That is the biggest risk. And undecided and new voters are expected to be the targets of political misinformation. That’s critical, right? Because this election is expected to be decided by a small percentage of voters.

Jamali: And why is it that those groups are the targets of these types of images?

Gonzales: Because typically, voters who have declared a particular party are mostly not going to sway either way. But if you look at independents, new voters and the undecided, those are prime targets for campaigns to personalize information and to reach with false information. Today, AI can process vast amounts of voter data, right? Preferences, pain points, even emotions from social media posts. This information helps campaigns tailor their messages to specific voters, especially those who are independent, undecided or new to voting. So instead of the one-size-fits-all campaigns of the past, AI enables hyperpersonalized messaging, reaching voters with the issues that matter most to them and swaying them with true or false information.

Jamali: And so is part of the concern here that you might have a voter who already has a bias, sees one of these images, and that bias is only strengthened?

Gonzales: One hundred percent. And more concerning are the false talking points that campaigns can target voters with. And here’s the key: listeners really need to understand that our online behavior matters. There’s a reason why, when we’re watching a particular streaming service — let’s say I like thriller movies — it recommends thriller movies. It’s because the AI is learning from my behavior. Similarly, with a campaign, if a particular voter is even a little interested in looking around at all the different campaigns and their messaging, that could signal that they are a target to be swayed.

Jamali: And are swing voters considered more of a target here?

Gonzales: From what I understand [of] this election, because it is expected to be decided by such a small percentage, my guess is yes, that the swing votes are up for grabs, for lack of a better term. And a particular party will know that. And the campaigns have access to micro data on voters, and they will get that data and create specific messaging in real time, by the way.

Jamali: You know, we’re talking a lot about what campaigns do and the information that they have. What is the responsibility of some of the companies that run some of the platforms where this activity will be taking place where some of these images will be disseminated? How do you think they should approach the election, given all the fears that you’ve laid out?

Gonzales: Well, I can only reiterate again, the most challenging and troubling aspect of this election is there are no rules or regulations, or most importantly, consequences related to creating false information. So it will be incumbent upon voters to independently research key issues to determine what is true or false. But as we know, history would dictate that private companies, tech or otherwise, respond well to rules and regulations. And absent those, it’s a free for all.

Jamali: But I wonder what kind of action the government can take, if any, in the limited time regulators have available to address these potential dangers. Is there anything out there that they can be doing?

Gonzales: Well, we should not expect anything to be able to happen in such a short time, which is why this election is going to come down to AI literacy and the level to which people understand the basics of what’s happening. And again, I am not attempting to communicate, you know, doom and gloom. This is just about awareness. This is about protecting our democracy, and it’s about people protecting their own vote based on correct information. And one thing to be clear about, too, is that deepfakes and other misinformation are not just online. We really will not be able to believe everything we see, read or hear, whether it’s online, on TV or in the press. And keep in mind that even on broadcast TV, the broadcast stations are legally not allowed to touch political commercials. So there could be — well, we’ve seen a few already, but there could be more — deepfake commercials on TV. And people will assume those are real because they’re on TV, and that’s not the case.

More on this

Recent waves of layoffs among big tech companies like Meta, X and TikTok hit teams focused on, among other concerns, misinformation as well as AI ethics.

And while there aren’t really any set rules around AI use during campaigning — yet — some government efforts are in the works.

We mentioned in an episode this week that the Federal Election Commission would vote on collecting comment about using deepfakes in campaigns for deceptive purposes.

That has since been approved, and the public has until October to weigh in. There’s also a bipartisan bill from Republican Sen. Josh Hawley of Missouri and Democratic Sen. Richard Blumenthal of Connecticut related to AI-generated content.

If passed, it would essentially mean that Section 230 of the Communications Decency Act, which shields online platforms from liability for third-party content, would not protect AI work.

The team

Daisy Palacios, Senior Producer
Daniel Shin, Producer
Jesús Alvarado, Associate Producer