Earlier this month, the White House released its first-ever strategy to fight domestic terrorism. The plan includes more funding for investigators and prosecutors, better information sharing between agencies and efforts to address the underlying causes of violent extremism, such as racism and bigotry.
Tech has a role to play, too. The Biden administration says it will invest in programs to increase digital literacy and work with tech companies to make it harder for terrorists to recruit online.
I spoke with Heidi Beirich, co-founder of the nonprofit Global Project Against Hate and Extremism, which works to expose and counter racism, bigotry and prejudice. I asked her what we know about preventing extremist ideas from spreading online. The following is an edited transcript of our conversation.
Heidi Beirich: We know for sure one thing works, and that is basically deplatforming hate group material from major tech platforms. There’s ample evidence that this drives down the number of recruits, shrinks the amount of propaganda. And that’s not just about white supremacists. It’s also true of ISIS, for example, and al-Qaida, who have been massively deplatformed over the years. We have some evidence on click-through rates of people looking at videos that warn them about the dangers of these movements. There are also what they call “redirect programs” where you might be searching for something about white supremacy, and material comes up about maybe mental health issues or other things that are kind of a path to get you away from, for example, something that glorifies Hitler.
Amy Scott: And what do you think the click-through rates do tell us? Do we know that people follow, say, the mental health link?
Beirich: We know that people click to it. We have very little evidence of what happens after that point, which is really the most important point, right? Do they engage with a mental health professional? Do they get help? If they watch a video about the horrors of white supremacy, does that actually change their opinions? This is the kind of data that we need to make sure that these programs are successful.
Scott: And what could the government do to help with that?
Beirich: Well, I think it’s very important — and some of this has already started — for the government to make funding available to civil society organizations to start experimenting in this space.
Scott: I imagine government involvement is risky, though, in terms of people’s perceptions. Say, a counterprogramming video that has a government stamp might be seen as propaganda.
Beirich: There is no question, and there have been failed propaganda projects by the government many times. I mean, certainly during the Cold War, but we’ve also seen it in the fight against ISIS. And actually, the FBI not too long ago put out — you know, I hate to say this, because I have a lot of friends in the FBI — but a terrible website called Don’t Be a Puppet, which was meant to stop young people from being radicalized, and it was completely not evidence-based and basically ridiculous. So you don’t want the heavy hand of government and pure propaganda; you want facts. And you also need people who are informed in psychology to make something like this successful.
Scott: Is there anything we can learn from how other countries approach this?
Beirich: We have a lot to learn from our allies. Germany, for example, has had exit programs for decades, and Sweden as well, to help get people out of extremist movements of all kinds, including things like neo-Nazism. They do a lot of work with youth to try to blunt these problems, and they’ve been more direct about defending democracy from white supremacy and similar threats. These are all things that the United States could learn from, and actually, in Biden’s recent strategy document about combating domestic terrorism, there’s a lot of talk about learning from allies, so I hope that’s exactly what happens.
Scott: The White House strategy talks about addressing the causes of extremism, among them systemic racism. But there is a fight right now over whether we can even teach about that in our schools. What can law enforcement really do about those underlying causes?
Beirich: Yeah, this is complicated. This debate over critical race theory, I think, is very unfortunate, because at the root of the problem of white supremacy is a failure to understand the history of racism and its impact in this country. I mean, obviously, we wouldn’t have this movement if we didn’t still have deep strains of racism in the country. Now, when it comes to law enforcement, one, we need to use them to stop the most terrifying parts of this threat — hate crimes, for example, and domestic terrorism — but we’ve also got to reform the relationship between law enforcement and communities. It’d be nice to have law enforcement take hate crime seriously in the many parts of this country where they don’t, which would strengthen those bonds and also, ultimately, hopefully lead to a reduction in the racism that’s propelling all of this.
Scott: Why do you think the U.S. is so far behind in recognizing the threat of domestic terrorism and of white supremacist extremists?
Beirich: Frankly, this has been a political failure of administrations of both parties. After 9/11, our entire apparatus of government — the entire FBI, intelligence services, domestic and international — shifted their focus to Islamic extremism. It was as though they’d forgotten Timothy McVeigh had blown up the Oklahoma City federal building just a few years before, in 1995. And all the way until 2014, very late in the [Barack] Obama administration, there was really no emphasis on this threat, even as it was growing and more people were being killed by white supremacists. I mean, there were serious failures of government to stand up, pay attention and realize this wasn’t an issue of one kind of terrorism being our only focus. It was an issue of “and.” And frankly, we should have known better. So now the problem is just much worse than it was even 10 years ago.
Related links: More insight from Amy Scott
We’ve got more on that disastrous Don’t Be a Puppet campaign Beirich was talking about. The interactive website was meant to teach teenagers how to recognize violent extremist messaging and avoid being drawn in by it. But, as Laurie Goodstein reported in The New York Times back in 2015, civil rights and religious leaders objected to its focus on Islamic extremism, worrying it would lead to bullying of Arab and Muslim students and hinder free speech. The site is no longer active.
What about a tech solution to countering hate speech online? The MIT Technology Review reported on a new study finding that artificial intelligence isn’t there yet. Scientists tested a number of “state of the art” systems for detecting hate speech, and none of them cut it. Among the scenarios that tripped up the AI moderators: profanity used in otherwise innocuous statements, slurs that have been reclaimed by the targeted group and references to hate speech that are actually meant to counter it.
And, if the Biden administration wants tech companies to do more to stem extremism, it’s going to have to talk to Alphabet’s YouTube. Earlier this year, USA Today reported on a study by the Anti-Defamation League that found that even after being pressured to take down extremist content, YouTube was still recommending white supremacist videos to viewers who had watched similar content. In response, a YouTube spokesman said that “views this type of content get from recommendations has dropped by over 70% in the U.S.” But YouTube apparently did not tell USA Today “70% down” … from what?