Fake pictures of faces are getting much harder to detect
May 22, 2019

Inside the world of "counterfeit" faces, or deep fakes.

There’s a long history of trickery with images in media – even famous photos of Civil War casualties were staged for more impact. Technology has increased our ability to alter pictures and our skepticism of them. “Photoshopped” is a verb. Now, artificial intelligence brings us so-called “deep fakes” — very believable pictures that aren’t just altered, they’re completely made up.

Combating disinformation online is a mission for Jevin West, a professor at the University of Washington Information School. He co-created the site WhichFaceIsReal.com, which shows you two pictures at a time and asks you to pick which one is real. Marketplace’s Jed Kim asked West why it’s important to be able to spot fake people. The following is an edited transcript of their conversation.

Jevin West: The danger is that you can use these images to do things like catfishing, or to write a story about someone who fell victim to a crisis event when, in fact, they were never a real person. When you create millions and millions of fake humans that are hard to differentiate from real humans, we’re in a world where we not only counterfeit money, we counterfeit people.

Jed Kim: What are the ways that we can tell whether a photo of somebody has been computer generated?

West: If you’re questioning whether this is an actual real human or not, look at things like the background. Look at the border between the hair and the background. Strange things happen there. Look for asymmetries in things like earrings or the bevels of sunglasses. The technology does weird things with teeth, for whatever reason. There are all sorts of things it doesn’t quite do well, but give it three years and it’ll figure those things out.

Kim: This is so terrifying to me, because when we can’t trust our own senses, we fall back on intuition, and so much of that is already biased, primed to believe certain things that may not be true. How do you address this?

West: I know. This is why it’s something we should be talking about, because I don’t know that there’s a magic wand other than just getting better at detecting this technology. Also, a lot of the efforts to inject disinformation into our information environments online have the goal of making you not trust anything, including the institutions on which you rely. They don’t want you to trust what you’re hearing in the news. This is why we need to jump on this problem as a society as soon as we can, because if not, we retrench to our tiny little communities, and even there we might start to distrust each other. Democracy depends on trust and discourse, and if that all goes away, it is scary. It scares me. Our senses themselves are being tricked here, and I wake up at night thinking about it sometimes, many times.

Related links: more insight from Jed Kim

In a sign of how advanced fake human generation has become, Japanese company DataGrid recently announced it’s creating full-body, high-resolution images of completely made-up people. Dazed Digital reports that the company is working on improving how its figures move. The company expects these figures will be used in advertising. So long, casting calls.

Google made waves last year by demonstrating a virtual assistant that called a hair salon and booked an appointment with a very human-sounding voice. That service is part of Google Duplex, which CEO Sundar Pichai recently announced would be moving to the web to handle lots of online booking. The Verge says booking over the phone will still be available to some degree, though it reports that results have been mixed for restaurant bookings. A lot of the time, workers won’t actually pick up the phone, because the caller ID says “Google.”

A study published in Palgrave Communications, which is affiliated with the journal “Nature,” gives a possible explanation for why fake news is so effective. One reason is that it’s not bound by reality. That means it can freely incorporate elements that appeal to us, like negative news, threatening stories, things with sex. You know, the stuff you click.

A scientist at Emory University wrote an appeal in “Scientific American” calling for her peers to take more active steps to fight propaganda — in this case, the anti-vaccine movement. She wants them to spend less time in the lab and more in communities, and also to speak more emotionally, since that seems to work for the other side.

There are heavier-handed approaches. Singapore’s government has taken steps to deal with fake news. New rules would give government officials the power to decide whether a news or social media story is false and to demand its removal. Fake accounts and bots would also be illegal. The BBC reports that penalties could be as high as $73,000 and 10 years’ imprisonment. That, of course, prompts concerns about the chilling effect such laws could have on speech under authoritarian regimes.
