We talk a lot on this show about how social media platforms have been slow to react to disinformation over the years, especially around elections — and now around the coronavirus and the coronavirus vaccine. But perhaps the slowest to take a stand is YouTube. The video platform waited until Dec. 9 — more than a full month after the presidential election — before it started to remove videos falsely claiming election fraud or rigging.
Researchers have worried about its radicalizing algorithm for years, and the company has basically no interest in working with them. I spoke with Evelyn Douek, an affiliate at Harvard’s Berkman Klein Center for Internet & Society. She said YouTube is flying firmly under the radar. The following is an edited transcript of our conversation.
Evelyn Douek: It’s baffling. In the lead-up to the election [there were] so many stories, you could almost be forgiven for thinking that Facebook and Twitter were the only sources of online information in the country. But what we do know is that YouTube is one of the biggest, if not the biggest social media platform in the United States, at least. And we also know that there is a fair amount of disinformation and misinformation on the platform. And if we look at even the congressional hearings, Mark Zuckerberg and Jack Dorsey have appeared a number of times, and [YouTube CEO] Susan Wojcicki hasn’t been called yet. It sort of seems like YouTube’s strategy has often been to keep its head down and sort of let the other platforms take the heat. That seems to be working for it.
Molly Wood: What could YouTube be doing? I know one thing you’re interested in is this kind of false binary of either take down a piece of information or leave it up. But that’s not the only choice, particularly for a platform like YouTube.
Douek: Right, and I want to be specific about my complaint. One of the things that I would really like YouTube to do is be far more open about the measures it’s taking in demoting or not recommending certain content.
Wood: Let’s dig into this transparency a little bit, because I think people don’t exactly understand what you and researchers are asking for. What might you get from Facebook or Twitter compared to YouTube?
Douek: For example, Facebook and Twitter are far more transparent about the engagement metrics and the content that is on their platforms. Facebook has a tool called CrowdTangle, which allows researchers to map what’s happening on the platform in terms of engagement. There are definitely limitations to that, but it is at least something. And Twitter, by its very nature, being a more public platform, provides more data to researchers, whereas a lot of that just doesn’t exist for YouTube. So we have far less visibility. That’s a key thing.
Wood: And so then what happens? You call YouTube and you’re like, “Hey, we are trying to understand better how, for example, young people keep getting radicalized on your platform. Can you give us a sense of what’s happening in the algorithm?” And they just don’t answer the phone?
Douek: Yeah, I mean, pretty much exactly. There’s this big debate happening in the researcher community about the level of filter bubbles or the radicalization effect of YouTube’s algorithms. And it’s sort of still an open question, and it’s really hard to answer based on the sort of tools and the data currently available to researchers.
A Pew Research survey in September found that around 1 in 4 adults get their news from YouTube.
Douek wrote a piece for Wired last month asking why there hasn’t been more focus on YouTube. She noted, among other things, that when researchers hired by the Senate Intelligence Committee in 2018 asked Facebook, Twitter and YouTube for data around Russian interference in the 2016 election, YouTube provided the smallest amount of information by far. I mean, hey, maybe they’re absolutely crushing it over there and the amount of disinformation on YouTube would be exponentially worse if they weren’t hamstering away in their little wheel knocking things down. But it sure doesn’t seem that way, and we sure aren’t hearing about it if they are.
Twitter announced Wednesday that, starting next week, it will take down tweets that spread lies about vaccine safety, and possibly also tweets that say COVID-19 doesn’t exist, or make what the company called “widely debunked” claims. And it will roll out labels on tweets that try to spread vaccine conspiracy theories starting early next year. Facebook made the same pledge about two weeks ago.
Listen, I know sometimes the news can be a relentless bummer, and this next comment is, sadly, no exception. But increasingly, prosecutors, terrorism analysts and former national security officials are warning that the embrace of conspiracy theories and disinformation, which literally happened in Congress this week, is radicalizing conservatives in particular, dangerously increasing the potential for right-wing violence and terrorism, and becoming a true national security threat — because online speech has consequences in the real world, and it’s past time any company got to pretend that it didn’t.
Every day, the “Marketplace Tech” team demystifies the digital economy with stories that explore more than just Big Tech. We’re committed to covering topics that matter to you and the world around us, diving deep into how technology intersects with climate change, inequity, and disinformation.