Disinformation

How social media bots can amplify fake news

Molly Wood, Stephanie Hughes, and Shaheen Ainpour Feb 27, 2018
The disinformation war is raging online, and bots are its foot soldiers. Bots are fake social media accounts that work together to amplify messages online. That's how fake news started trending on social media during the 2016 presidential election. In response, U.S. special counsel Robert Mueller recently filed charges against a Russian organization, the Internet Research Agency, that had hundreds of employees operating fake accounts during the election. Social media companies have also been trying to fight the army of fake accounts on their platforms, but without much luck. Marketplace Tech host Molly Wood spoke about bots, and who's behind them, with Jonathon Morgan, CEO of New Knowledge, a company using artificial intelligence to protect companies from disinformation attacks. Below is an edited transcript of their conversation.

Jonathon Morgan: Ultimately, the decision-making always comes down to a human, because a human being writes the software that operates an account even when it's automated. So somewhere there is a human being saying, "Whenever some account that I already know of posts any message that contains the keyword 'guns,' retweet that tweet."
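
Morgan's example boils down to a single trigger rule. Here is a minimal sketch of that kind of rule in Python; the client object, the tweet fields, and the account handle are hypothetical stand-ins for illustration, not a real Twitter API.

```python
# Minimal sketch of the rule Morgan describes: an automated account
# that retweets anything a watched account posts containing a keyword.
# `client`, `tweet`, and their fields are hypothetical placeholders,
# not a real Twitter API wrapper.

WATCHED_ACCOUNT = "some_account_i_know_of"  # hypothetical handle
KEYWORD = "guns"

def on_new_tweet(tweet, client):
    """Runs whenever a new tweet arrives from accounts the bot watches."""
    if tweet.author == WATCHED_ACCOUNT and KEYWORD in tweet.text.lower():
        client.retweet(tweet.id)  # amplify the message, no human in the loop
```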

Molly Wood: And then, what are the mechanics of actually amplifying a conversation? I can imagine people listening to this thinking, “I don’t follow any bots.” So how do they help make a conversation bigger than it might otherwise be?

Morgan: I think the tricky thing is that it's hard to know whether you're following accounts that are not who they say they are. I mean, there's an old joke: "On the internet, nobody knows you're a dog." But the other way that some of these networks amplify certain content is that they all work together. I might say to all my friends, "Hey everybody, we're going to use this hashtag, we're going to post these keywords, and we're going to do it at a certain time of day." To the platform, that looks like a conversation with a lot of excitement around it, maybe a major zeitgeist news-cycle conversation that it should stay on top of and amplify to the rest of its users.
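
A toy simulation makes those mechanics concrete. The account names, post counts, and trending threshold below are all invented; real platforms use far richer signals, but the coordinated burst fools a naive volume heuristic the same way.

```python
# Toy simulation of the coordination Morgan describes: many accounts
# posting the same hashtag in the same window so it looks like an
# organic spike to a simple trending heuristic. All names and the
# threshold are made up for illustration.
from collections import Counter

posts = [
    {"account": f"bot_{i}", "hashtag": "#SomeHashtag", "hour": 9}
    for i in range(200)                     # coordinated cluster, one hour
] + [
    {"account": "real_user", "hashtag": "#OtherTopic", "hour": h}
    for h in range(24)                      # scattered organic posts
]

# A naive trending rule: any hashtag whose volume in a single hour
# crosses a threshold gets surfaced to other users.
volume = Counter((p["hashtag"], p["hour"]) for p in posts)
trending = [tag for (tag, hour), n in volume.items() if n >= 100]
print(trending)  # ['#SomeHashtag'] -- the coordinated burst "trends"
```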

Wood: And then the result of that, of course, is media coverage and the network effect of more people tweeting it because it's trending, and so on and so forth. And then all of a sudden everybody thinks it's a really big deal.

Morgan: That's exactly right. And what we often see is that a small group of accounts will coordinate their activity. They'll even sometimes target people on the periphery of whatever conversation they want to have, maybe people one or two degrees removed from mainstream political pundits. Those pundits pick it up and retweet it to their 2 million followers. Among those 2 million followers are people a little bit closer to the center of the conversation, millions more people see it, and then they start discussing it on social media. It's kind of like laundering information.
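
The "laundering" path Morgan sketches amounts to a back-of-the-envelope reach calculation. Every account name and follower count below is invented for illustration.

```python
# Rough sketch of the amplification path Morgan describes: seed a
# message at the periphery, then let each retweet hop expose it to a
# larger, more central audience. All accounts and numbers are invented.

follower_counts = {
    "periphery_account": 5_000,      # one or two degrees from a pundit
    "mainstream_pundit": 2_000_000,  # picks it up and retweets
    "central_figure": 10_000_000,    # closer to the center of the conversation
}
retweet_path = ["periphery_account", "mainstream_pundit", "central_figure"]

reach = 0
for account in retweet_path:
    reach += follower_counts[account]  # each hop adds a new audience
print(f"Potential impressions after three hops: {reach:,}")  # 12,005,000
```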

Wood: I am old enough to remember Google bombing, where a similar concerted effort would attempt to push certain websites to the top of Google search rankings. That went away. Is this an existential threat to these businesses?

Morgan: What's different about this problem is that the ones we saw in the past, like the Google bombing you describe, were really bad for Google's product. If you could trick its algorithm into feeding people content they didn't want, they'd stop using Google. On the other hand, people seem to really like this sort of hyper-polarized, incendiary content that they consume on social media.

Wood: But when you say people like it, you know, people also liked smoking and they like drinking and they like reality TV and sugar. Is it part of a continuum where we think we like something and then we realize that it’s bad for us? 

Morgan: I think that’s a really good point. This is not sustainable. I do think we will see a change in people’s behavior where they just start to get fed up with it. Maybe over the course of a generation we’ll see a change in behavior. But some of these companies are large enough that they need to be thinking about things on that sort of timeline. 
