Deepfake detectors promise to tell truth from AI-generated fiction. Do they work?
Jun 6, 2024

Oren Etzioni of TrueMedia.org explains why deepfake detection is a challenge and reminds us that there's no substitute for common sense.

Telling truth from fiction online has become a lot harder since the AI boom kicked off a year and a half ago.

Many have been waiting for tools that can analyze a piece of media and tell us whether it’s AI-generated, a need that feels especially urgent this election year.

An estimated 40 deepfake detection startups are working on this, but none can deliver 100% reliable detection.  

One organization taking on the challenge is TrueMedia.org. Marketplace’s Lily Jamali asked its founder and longtime AI researcher Oren Etzioni what sets his organization’s system apart from the rest.

The following is an edited transcript of their conversation.

Oren Etzioni: First of all, we are free and available to journalists, fact-checkers and the general public. We’re not trying to make a dime here. This is a nonprofit. Secondly, we actually use a number of the commercial vendors. And so, we have a tool that, when you submit some piece of media, is typically running upwards of 10 different AI models to assess it, both ones from commercial vendors and some state-of-the-art ones that we have adopted and extended from academia. So, unlike a lot of tools out there, you’re really getting a kind of roomful of experts to decide whether this is likely to be fake or not, because it’s actually a very hard technical problem.
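To make the “roomful of experts” idea concrete, here is a minimal sketch of how an ensemble of detectors might be combined. The model names, scores, thresholds and labels are all hypothetical; TrueMedia.org has not published its actual method.

```python
from statistics import mean

# Hypothetical per-model outputs: each detector returns a probability
# that the submitted media is AI-generated (all names and scores invented).
model_scores = {
    "audio_model_a": 0.97,
    "audio_model_b": 0.99,
    "face_blend_detector": 0.95,
    "artifact_detector": 0.92,
    "commercial_vendor_x": 0.88,
}

def aggregate(scores: dict[str, float]) -> str:
    """Combine independent detector verdicts into one label.

    A real system would weight each model by its measured accuracy;
    a plain average is used here purely for illustration.
    """
    avg = mean(scores.values())
    if avg >= 0.85:
        return f"highly suspicious ({avg:.0%} average confidence)"
    if avg >= 0.50:
        return f"substantial evidence of manipulation ({avg:.0%})"
    return f"little evidence of manipulation ({avg:.0%})"

print(aggregate(model_scores))  # -> highly suspicious (94% average confidence)
```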

Lily Jamali: And what is your accuracy rate like?

Etzioni: So, the accuracy really depends on the data you run it on. We run it on deepfakes found, quote, in the wild, meaning uploaded by users and by journalists based on what they see on social media. And we are well above 90% in detecting those. I have to say, we also make mistakes.

Jamali: You do make mistakes? I don’t hear people say that often. It’s refreshing.

Etzioni: Well, as an academic, our integrity is sacrosanct. So, we definitely make mistakes. And we view our tool as a component. There’s no substitute for exercising common sense and for having media literacy, for checking your sources, all that good stuff. But in these tricky cases, it’s also helpful to have this technical tool that, by the way, runs in close to real time. It takes a minute or two. This isn’t one of those things where you get the results a day later. And it gives you an analysis that’s informative.

Jamali: So, we’re going to give your detection tool a try. We found a deepfake on YouTube that was published last fall. It is of an AI-generated Ron DeSantis, the governor of Florida, who at the time was running in the Republican primary. Here’s what he says:

“I’m Ron DeSantis, governor of the great state of Florida. After the last week’s events, including my poor performance at the debate, as well as President Trump rejoining X, I’ve realized I need to drop out of this race immediately.”

I think you get the idea, right? So, we have brought this into your tool, and it says TrueMedia detected substantial evidence of manipulation. So, on your website, the video is now encircled in bright red with the term “highly suspicious” above the thumbnail frame. Looking at the technical side of things, what enables your site to determine that this is a highly suspicious video?

Etzioni: I am looking at the same screen as you are, and I can see that we’ve run 10 different models to assess this. Five were looking at his voice and five were looking at the faces, because this is a video, and his face is also manipulated to look as if he’s saying these things. With the voices, there are statistical indications that this is automatically generated. I have to be a little bit circumspect because we don’t want to reveal our specific methods to adversaries, but let’s just say that the voice signature is clear, and we have 100% confidence; you can see in the user interface that this is fake audio. In addition, we look at the faces, and we specifically ask: has the face been manipulated? Are there discontinuities that suggest editing? Is there something called face blending going on? So, we look at all these things, and we were able to see that there are visual artifacts. They might not be visible to the naked eye, but the AI system is able to detect them, and they tell it with 100% confidence that this is fake.
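As a rough illustration of the per-track report Etzioni describes, with five voice models and five face models each returning its own confidence, here is a hedged sketch. The data structure and numbers are assumptions for illustration, not TrueMedia’s actual output format.

```python
# Hypothetical per-track verdicts, echoing the interface described above:
# five voice detectors and five face detectors, each reporting how
# confident it is that the media is fake. All values are invented.
analysis = {
    "voice": [1.00, 1.00, 0.99, 0.98, 1.00],  # audio-signature detectors
    "face":  [1.00, 0.99, 1.00, 0.97, 0.99],  # editing/face-blending detectors
}

for track, confidences in analysis.items():
    # Surface the strongest signal per track, as a UI summary might.
    peak = max(confidences)
    print(f"{track}: {len(confidences)} models, peak confidence {peak:.0%} fake")
```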

Jamali: Why did you decide you wanted to get involved in this work?

Etzioni: I am terrified about the upcoming election in our country. It’s going to be one of the most consequential in our history. It looks to be very narrowly decided. And I don’t want people to be duped into not voting, into distrusting the process, or into voting for the wrong candidate.

Jamali: Was there a particular incident or news story that gave you that final push into this space?

Etzioni: Last summer, I had the honor of meeting with President Joe Biden and his team. And we went around the room, a small group of AI experts, talking about our moonshot projects, our dreams for how AI can help humanity. In the course of that conversation, we talked about challenges. And I realized that the challenge of deepfakes just seemed to be the most important thing to work on, particularly this year.

Jamali: So, there is no such thing as a perfect deepfake detection tool; even the tool that you all make at TrueMedia doesn’t work with 100% accuracy. Why is it so hard to get exactly right?

Etzioni: Generative AI, which is the set of techniques making this possible, has moved really fast, to the point where it can generate things that fool us. And we can be fooled in two ways. We can be fooled into believing that something that’s manipulated by AI is actually real. But also, we can be overly suspicious. There can be noise, there can be slight editing, and that might trigger our alerts, if you will. So, because it’s a statistical process, and because the technology is so frustratingly good, we make mistakes.
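Those two failure modes are the classic false negative (a fake scored as real) and false positive (a real clip flagged as fake), and a detector’s decision threshold trades one off against the other. A small illustrative sketch, with invented scores and ground-truth labels:

```python
# Illustrative only: how a detection threshold trades off the two failure
# modes described above. Scores and ground-truth labels are invented.
samples = [
    (0.96, True),   # (detector score, actually AI-generated?)
    (0.40, True),   # a fake that scores low, e.g. heavily compressed
    (0.10, False),
    (0.70, False),  # a noisy real clip that scores suspiciously high
    (0.88, True),
]

def error_rates(threshold: float) -> tuple[float, float]:
    """Return (missed-fake rate, false-alarm rate) at a given threshold."""
    fakes = [score for score, is_fake in samples if is_fake]
    reals = [score for score, is_fake in samples if not is_fake]
    missed = sum(score < threshold for score in fakes) / len(fakes)
    false_alarm = sum(score >= threshold for score in reals) / len(reals)
    return missed, false_alarm

for t in (0.3, 0.5, 0.8):
    missed, false_alarm = error_rates(t)
    print(f"threshold {t}: miss {missed:.0%} of fakes, "
          f"flag {false_alarm:.0%} of real clips")
```

Raising the threshold misses more fakes; lowering it flags more real clips, which is why noise and compression make the problem harder.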

Jamali: What are some of the technological barriers to having a tool that works 100% of the time?

Etzioni: It’s virtually impossible, because generative AI has gotten so good and because it’s a moving target. So, it’s not only gotten so good, but it keeps getting better, right? We keep hearing about new things being released. The other thing that I want to remind the listeners of is that often in something you see on social media, there’s background noise, there’s low resolution, and all these artifacts can be used by disinformation players to make it harder to tell truth from fiction.

Jamali: So, if a tool cannot tell me with 100% accuracy whether something is or is not a deepfake, if it can’t confirm whether the image I’m suspicious of is real, is there still some value to it?

Etzioni: I think there’s a lot of value for two reasons. First, we do know how to use uncertain information all the time. We look at the weather forecast, right? And secondly, while it’s not 100% right, when it is highly confident, like in this case, all these different models are saying 100% confident, then you can be virtually certain that this is fake. So, it’s not like it’s often shrugging and throwing up its hands and saying, “Oh, I’m not sure.” Sometimes it’s really quite sure, and that’s highly informative. All that said, I do want to emphasize that there is no substitute for human common sense. There is no substitute for checking your sources. And you need to do those things when you encounter highly political information and when you encounter jarring stuff on social media. These things are designed for us to have an emotional response, to be outraged, to be angry. And that’s exactly the time where you want to take a step back and say, “Am I sure this is real?”

More on this

As we discussed, no deepfake detection tool is perfect and likely never will be, because they are driven by probabilities. These services have fallen for images of kissing robots and giant Neanderthals, according to a New York Times profile of Oren Etzioni.

General trust in facts and evidence can erode when images like those, which are clearly fake, are declared by a detection tool to be the real deal.

Etzioni was among a group of AI researchers who signed an open letter earlier this year. They’re pushing for new laws to hold AI developers accountable if their technology can be easily used to make deepfakes deemed harmful.

Meanwhile, OpenAI, the maker of ChatGPT, announced last month that it has developed its own deepfake detection tool. It’s being shared with disinformation researchers and is designed to identify fake images created by the company’s own DALL-E 3 image generator.

And if you’re wondering whether it can recognize its own fake creations, it comes close: OpenAI reported a 98.8% accuracy rate in May. Still not 100%, though.


The team

Daisy Palacios, Senior Producer
Daniel Shin, Producer
Jesús Alvarado, Associate Producer
Rosie Hughes, Assistant Producer