Online Radicalization

Extremists online: How a troll becomes a terrorist

Molly Wood Mar 21, 2019
Michael Smith/Newsmakers

Radicalism, terrorism and hatred of perceived “others” are, sadly, as old as humanity. But the internet, and social media platforms in particular, has proven to be the perfect delivery mechanism for this age-old poison.

Researchers and social media experts like Becca Lewis at the nonprofit institute Data & Society, academics like Zeynep Tufekci and many others have warned for years that a generation of people is being radicalized online. When a gunman killed at least 50 Muslim worshippers in Christchurch, New Zealand, on March 15, he appeared to be the ultimate product of that process. He used the language of internet trolls, he carefully tailored his live video broadcast and his messaging to be as viral as possible on social media, and members of sympathetic online communities celebrated his actions unashamedly.

So the question is: What can social media platforms do to interrupt this process? And what is their responsibility to root out the language and behaviors that can lead to extremism, not to mention the documentation of its violence, as with videos of the shooting that are still showing up across the web despite efforts to contain them?

The first step is to acknowledge that these lonely Cassandras are right: that young people, especially men, many but not all of them with far-right or extremist leanings, are finding encouragement and an endless well of what Lewis’ research calls “alternative influence” on YouTube and other sites. And the language they use to communicate isn’t overt enough to get them banned outright. It’s couched in jokes that give what she calls “plausible deniability,” so that when members of the media, researchers or content moderators accuse posters of racism, extremism or misogyny, they’re painted as overly sensitive fools who fell for the trolling.

Lewis says this use of humor is literally part of the neo-Nazi online playbook: a leaked style guide described recruiting people with humor and couching racist statements in deliberate dog whistles and vague language. And this not only makes it hard for platforms and their clumsy artificial intelligence tools to detect such coded hate speech, it also causes people to argue about what it actually even means.
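To see why that coded language defeats the simplest moderation tooling, here is a minimal sketch of a hypothetical blocklist-style filter; the banned terms and example posts are placeholders invented for illustration, not anything from the article or any platform’s real system.

```python
# Hypothetical sketch of blocklist-style filtering: it only matches explicit
# tokens, so dog whistles, irony and deliberately vague phrasing pass through.

BLOCKLIST = {"banned_slur_1", "banned_slur_2"}  # placeholder explicit terms

def violates_blocklist(post: str) -> bool:
    # Flag a post only if it contains an exact banned token.
    tokens = (word.strip(".,!?\"'").lower() for word in post.split())
    return any(token in BLOCKLIST for token in tokens)

explicit_post = "post containing banned_slur_1"
coded_post = "just asking questions about certain people, it's only a joke"

print(violates_blocklist(explicit_post))  # True  -> caught and removed
print(violates_blocklist(coded_post))     # False -> passes, despite the intent
```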

One example, she said, is the “OK” hand symbol, which either has or has not become a shorthand for “white power” depending on whether you are or are not successfully being trolled. It was turned into a hoax campaign by members of the anonymous discussion board 4chan for the purpose of “trolling the libs” into reacting to perceived expressions of white supremacy. But over time, most likely, it’s also been co-opted by actual white supremacists, and the alleged New Zealand shooter even flashed it in court. 

But these messages aren’t confined to 4chan and the darkest holes of Reddit. YouTube, Facebook and, increasingly, Instagram are being seeded constantly with alternative in-jokes, conspiracy theories and hoaxes, helped along by recommendation algorithms that lead people down rabbit holes that can, as Taylor Lorenz detailed this week in The Atlantic, quickly turn them into believers.

But then what? Just because someone has been successfully converted into an online troll or a flat-Earth conspiracy theorist doesn’t mean they’re going to commit a violent crime. That’s true. But experts say the process and pace of radicalization are only accelerating.

Fathali Moghaddam, a professor of psychology at Georgetown University, published a paper in 2005 called “The Staircase to Terrorism.” It was an exploration of how, out of millions of disgruntled people in the world, a very few climb this metaphorical staircase and escalate to committing violent acts in the real world. But he said social networks feed into age-old fears and resentments and offer both reasons and culprits.

And, he said, radical groups tend to spring up almost in opposition to each other, which causes each side to dig in, believe increasingly extreme things and deepen the kinds of worldviews that can drive that small fraction of the population to actual violence.

The issues around online radicalization are hard to tackle, touching on the nature of free speech, the level of deniability at play, regulatory will and the sheer volume of content that’s created online every second. But it’s not impossible. Moghaddam told me he absolutely thinks social media platforms have a responsibility to address extremist speech as a matter of their own survival and the survival of democracy. Becca Lewis said platforms did a good job of largely stamping out ISIS and related terrorist content online and need to treat far-right extremism with the same seriousness. 

And I spoke with Dipayan Ghosh, a researcher at the Harvard Kennedy School who previously worked on global privacy and public policy issues at Facebook. He told me, yes, it’s difficult to sift through massive amounts of data and monitor content online (and even Facebook admitted this week that AI alone isn’t necessarily up to the task). But he said that for some of the most cash-rich companies on the planet, Facebook and Google, the real obstacle isn’t that it’s too hard. It’s a matter of economic incentive.

YouTube and Facebook and Instagram are frequently accused of pushing people toward increasingly extreme content because the fact is, it’s engaging. You watch one video that’s lightly critical of feminism, as Lewis put it, and YouTube’s algorithm leads you down a rabbit hole of videos that grow increasingly misogynistic, never urging you to stop or change course. And if an ad runs on every one of those videos, YouTube gets paid.  
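As a rough illustration of that incentive, here is a minimal sketch, with entirely made-up videos and numbers, of what ranking purely by predicted engagement looks like; it is not YouTube’s actual algorithm, just the logic the criticism describes.

```python
# Hypothetical sketch of engagement-only ranking, not any platform's real
# recommender: nothing in the scoring penalizes extremity, so whichever item
# keeps people watching longest rises to the top, and every extra minute
# watched is another chance to show an ad.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # stand-in for an engagement prediction
    extremity: int                  # illustrative label: 1 = mild, higher = worse

candidates = [
    Video("Lightly critical of feminism", 4.0, extremity=1),
    Video("Angrier follow-up", 7.5, extremity=2),
    Video("Openly misogynistic rant", 11.0, extremity=3),
]

def rank_by_engagement(videos):
    # Sort purely by predicted engagement; extremity never enters the score.
    return sorted(videos, key=lambda v: v.predicted_watch_minutes, reverse=True)

for video in rank_by_engagement(candidates):
    print(video.title, video.predicted_watch_minutes)
```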

But Ghosh says platforms should treat extremist content more like Google treats junk mail and phishing scams in email. Like explicit ISIS recruitment videos, spam has almost disappeared from Gmail inboxes, because Google has a clear economic incentive to keep it out: the more junk there is, the less likely we are to use the product. So far, that’s not the case with extremist content, although Google has experienced multiple boycotts by advertisers who don’t want their products showing up next to the worst material on the web.
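As a sketch of that “treat it like spam” analogy, assuming a toy labeled dataset and scikit-learn (neither of which appears in the article), the same off-the-shelf text-classification machinery behind email spam filters can score posts for review; the examples and labels below are invented for illustration.

```python
# Toy sketch of the "treat it like spam" analogy using scikit-learn, with
# invented labels: the same text-classification machinery behind email spam
# filters can score posts, if a platform chooses to label and act on them.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_posts = [
    "recruitment post urging violence against group x",  # toy violating example
    "propaganda glorifying a recent terrorist attack",    # toy violating example
    "here is my review of the new laptop",                # toy benign example
    "great recipe for a weeknight pasta dinner",          # toy benign example
]
train_labels = [1, 1, 0, 0]  # 1 = violates policy, 0 = fine

classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(train_posts, train_labels)

new_posts = ["post glorifying violence against group x"]
print(classifier.predict(new_posts))  # likely [1]: flagged for human review
```

The machinery itself is old and relatively cheap; Ghosh’s point is that whether it runs at Gmail-filter scale is a question of incentive, not capability.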

Ghosh said at some point, regulation will have to become a part of this conversation, and platforms would be wise to move faster and put the same resources and energy into policing their content as they put into filtering spam out of Gmail. 
