The next front in the disinformation war is when you can’t trust what you see
Nov 4, 2020

Just the threat of a video or image not being real is a powerful weapon against trust.

This election has included almost every type of disinformation, including deepfake videos that are manipulated to look real, and even the creation of a fake persona with a very real-looking face that was generated by a computer. In fact, software that uses artificial intelligence to create photos of people who don’t exist is increasingly cheap and widespread.

I spoke with Jevin West, a professor at the University of Washington and director of the Center for an Informed Public. He says fake faces work for disinformation because people relate to human faces. But that’s not even the biggest concern. The following is an edited transcript of our conversation.

Jevin West. (Photo courtesy of West)

Jevin West: The threat of a video not being real, or having the ability to create videos where it's hard to discern between the real video of someone saying something and fabricated words being put in someone's mouth — the fact that we can create that now provides opportunities for those who want to use it as a defense. So if I'm a politician running for elected office, and shortly before an election there's a video of me doing something I actually did, but I don't want the public to think it's real, I just say, "Ah, that's a deepfake." And that defense can be quite effective, and it has been used in other elections around the world. So it's something that is probably as much of a threat as the deepfakes themselves.

Molly Wood: I mean, this seems like an A-bomb inserted into the ecosystem of information.

West: It is like an A-bomb in a system that's already a mess. The problem is there are a lot of cheaper tools and lies and misleading strategies being employed right now, so the A-bomb really hasn't even come to rear its ugly face yet, at least in this U.S. election. It has in other elections over the last couple of years around the world, but this A-bomb, I think, will arise at some point. And that's why it's really important for the public to at least be aware of this technology.

Wood: There are, simultaneously, it seems like, deepfakes and then also synthetic faces. Do you differentiate between those two?

West: Yeah, so the underlying technology is similar — it's not exactly the same. There are all sorts of different incarnations of the technology. And in fact, the technology that scares me more right now, and that's not being talked about enough, is something called GPT-3. This technology gives you the ability to write, or auto-write, human-like text. And that to me could be just as dangerous as some of these deepfake videos. The technology is going to get better and better, and pretty soon those experts are going to have fewer things to look for.

Wood: As this technology does get more sophisticated, does it become a technology arms race? Will we have to build sort of competing algorithms to be able to quickly use machine learning to scan one of these images and determine that it’s not real?

West: Yeah, and there are a lot of efforts right now. In fact, just a couple of weeks ago, there was another research paper out of Stanford University. The government has many programs out there trying to encourage researchers to develop auto-detection techniques. Microsoft's Defending Democracy team is doing a bunch of work in this space. Microsoft and Facebook actually came together on something. They don't collaborate on very much. But they came together around a competition — actually, other technology companies too, it wasn't just Facebook and Microsoft — where they were sharing deepfake videos and, again, encouraging researchers to develop auto-detection. The big summary from all those efforts is that it's really, really hard. And my own opinion is that the technology is going to get good enough that detection is, I wouldn't say impossible, but it's going to be very difficult. So the best thing we can do is just let the public know that this exists, so if they see something suspicious, they can at least question it, like we could with Photoshop and still do today. We don't outlaw Photoshop, we just say, "That photograph looks photoshopped." And if we can get to that point where everyone in the public is aware of that, then I think we can get through this new leap in information technology.

Wood: Where is the line, do you think, between developing this kind of healthy skepticism for things that you see online and no longer having a shared fixed notion of truth? When does it just evolve into really destructive distrust?

West: The worst thing that can happen is people start to not trust anything. Trust in institutions becomes so eroded that we retreat into our own little, tiny views of the world. And democracy depends on collective decision-making. It depends on trust in institutions and experts. One of the things we all have to be careful of is to do whatever we can to build that trust again, because that's [the] worst-case scenario: when we just don't believe anything and we live in these altered universes.

Related links: More insight from Molly Wood

For a fun game, West has co-created a website where you can test your skills at trying to spot fake faces.

This is where I show what a nerd I can be sometimes, because as he was talking about how first there were deepfakes, then fake faces, and then an even scarier monster in the form of computer-generated writing, all I could think about was the "Jurassic Park" movies: how first there was the T. rex, then Indominus rex, then the giant sea monster dinosaur that comes out of nowhere, and then you're like, "Yeah, OK, that wins." Anyway, he said the analogy should become a cartoon about misinformation, and I hope he goes and computer-generates it, or whatever, because that would be awesome.

Anyway, for those of you who are more serious than I am at this moment, the Financial Times has a really nice piece on the tech behind computer-generated faces. They're created using something called generative adversarial networks, or GANs. One network creates an image, and the other uses machine learning to compare that image against huge datasets of real images, looking for flaws, until the first network is forced to keep improving its fakes, and pretty soon you end up with something you really almost can't spot as a fake.

The fact that this is how these images are created, I should say, is also part of the problem with using software to try to spot them online, because the efforts to find them and detect their minute flaws also, in turn, make the technology better over time. Or, put more simply: sea monsters.
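For the truly curious, here's a minimal sketch of that generator-versus-discriminator loop in PyTorch. It runs on tiny toy data, and the layer sizes and training settings are illustrative assumptions, not the code behind any real fake-face generator.

```python
# A minimal GAN training loop: a generator tries to produce convincing fakes,
# a discriminator tries to tell real from fake, and each forces the other to improve.
# Toy dimensions and hyperparameters below are assumptions for illustration only.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM, BATCH = 64, 16, 32  # flattened toy "image" size, noise size, batch size

# The generator turns random noise into a fake "image".
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, IMG_DIM), nn.Tanh(),
)

# The discriminator scores an image as real or fake (outputs a logit).
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):
    real = torch.randn(BATCH, IMG_DIM)       # stand-in for a batch of real photos
    noise = torch.randn(BATCH, NOISE_DIM)
    fake = generator(noise)

    # 1) Train the discriminator to label real images 1 and fakes 0.
    d_loss = loss_fn(discriminator(real), torch.ones(BATCH, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator into saying "real" —
    #    the "forced to keep improving" dynamic described above.
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The same adversarial dynamic is why detection is so hard: any flaw a detector learns to catch becomes, in effect, a training signal for the next, better generator.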


The team

Molly Wood Host
Michael Lipkin Senior Producer
Stephanie Hughes Producer
Daniel Shin Producer
Jesús Alvarado Associate Producer