Some of our best minds in tech have set out to tackle the problem of fake news with the help of artificial intelligence.
A group of researchers, academics, engineers and hackers have come together with the hope that human fact-checkers will pick up the slack when AI technology reaches its limits. This grassroots movement has spawned a website, the Twitter hashtag #FakeNewsChallenge, and a presence in online communities like Slack and GitHub.
Cade Metz, a senior staff writer at Wired who wrote about this crusade, joined Marketplace host Lizzie O’Leary to talk about AI's role in the fight against fake news. The following transcript was edited for clarity and brevity.
Lizzie O'Leary: These researchers basically tried to build an AI that can spot fake news. Can you tell me what they did?
Cade Metz: Well, they're in the process of trying. As this controversy over fake news has reached such heights, so many people are looking for ways of dealing with it. And one of the ways that has been floated is through what's called artificial intelligence. But the reality is that this is a very, very difficult problem to solve, for many reasons. And this task is in a sense beyond what machines are capable of. So, the technology they're building is a way of aiding this effort, as opposed to really solving it.
O'Leary: Well, that brings us right up against the limits of what AI can do. You focus on this moment where it's like, "Oh, they can sort of spot patterns but they don't have judgment."
Metz: Exactly. The reason this is even a question is that AI has, in recent years, undergone a huge revival. A technology called neural networks is now used to identify faces and objects in photos. And the hope is that it can be used to recognize fake news. The problem is that, in the end, fake news is a bit more complex than that.
O'Leary: When you spoke to the researchers and when you look at the landscape out there, do you think there is a role for an AI like this within, say, a Facebook, or a Google, or the companies that work as distribution mechanisms between people? Because that seems to be the place where a lot of this moves out into the larger world.
Metz: There's absolutely a role for it. But, again, it's a complicated thing. Facebook in particular employs many, many contractors who are tasked with identifying material on the network that is not suitable and removing it. Sometimes algorithms and sometimes users of Facebook will flag what they see as potentially inappropriate material, and then these human contractors will make the final call. And this is what we're moving toward with the fake news problem. Perhaps we can build algorithms that can better identify fake news, but in the end it's going to be humans that need to make that call.
O'Leary: Did it surprise you at all that in investigating this kind of high tech solution, you came back to something that is so intrinsically just a part of being human?
Metz: I don't think it's surprising. I've spent a lot of years covering companies like Facebook, so I've seen the way it works and I've seen the limitations of the technology. I think what this shows is that the general public needs to understand that better. They need to understand the way that Facebook works, the way that Google works, a bit more than they do. And I think that's part of the problem here: people are taking these things as organs of automation and of truth, and they're not. They're flawed, they're flawed just like humans are flawed.