Social media platforms are fighting disinformation, but with half the resources
Mar 25, 2020

It's not likely the protections are the same as at the office.

Instagram announced big changes this week to how it moderates content on its platform, in an effort to stop the spread of disinformation around COVID-19. It says it will aggressively take down some content and deprioritize accounts that aren’t from trusted sources. That’s in line with what parent company Facebook is doing, and with what social media platforms have teamed up to do as well. But the truth is, this moderation is really tricky, both because of the technology and because of the realities of the pandemic itself.

I spoke with Alex Stamos, director of the Stanford Internet Observatory and the former chief information security officer at Facebook. He told me the pandemic has sent content moderators home, and that privacy and security concerns make it hard for them to keep working there. The following is an edited transcript of our conversation.

Alex Stamos. (Photo courtesy of Stamos)

Alex Stamos: A content moderator, that’s all they do. They spend all day looking at content, and that includes content that might be private, that might be in private groups, that might be in direct messages between users and the like. As a result, a lot of work goes into protecting the privacy of those users to make sure that that private information does not leave the computer of the content moderator. You do a bunch of technical protections to make sure they’re not just walking around and looking at people’s information. All those things are extremely difficult or impossible to do at home, so if you send your tens of thousands of content moderators home, you end up having to make a decision of whether or not you want them to work. If you do, you’re going to have to loosen the privacy protections that have been used to protect users’ data in the past.

Molly Wood: Now, it sounds like these platforms are relying a lot more heavily on machine learning. In one case we saw Facebook accidentally taking down almost everything that mentioned coronavirus. What are the trade-offs, and is there any way around mistakes?

Stamos: You’re right, companies are going to have to rely a lot more on machine learning. As you change the ratio of people making judgments to machines applying those judgments, you inevitably end up with less accuracy. That means you might take down content that should not come down, and you might miss bad things. I think that’s exactly the kind of thing we’re going to have to get more used to.
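To make that trade-off concrete, here is a simplified, hypothetical sketch (invented numbers, not any platform’s actual system) of how moving a single classifier confidence threshold trades wrongful takedowns against missed harmful posts:

```python
# Hypothetical illustration of the moderation trade-off Stamos describes:
# a model scores each post, and one threshold decides what comes down.
# The scores and labels below are invented for illustration.

posts = [
    # (model's score that the post is harmful, is it actually harmful?)
    (0.95, True), (0.80, True), (0.65, False),
    (0.55, True), (0.40, False), (0.10, False),
]

def takedown_outcomes(threshold):
    """Count wrongful takedowns and missed harms at a given threshold."""
    wrongful = sum(1 for score, harmful in posts
                   if score >= threshold and not harmful)
    missed = sum(1 for score, harmful in posts
                 if score < threshold and harmful)
    return wrongful, missed

for threshold in (0.3, 0.5, 0.7, 0.9):
    wrongful, missed = takedown_outcomes(threshold)
    print(f"threshold={threshold}: {wrongful} wrongful takedowns, {missed} missed harms")
```

Lowering the threshold removes more harmful posts but also more legitimate ones; raising it does the reverse. With fewer human reviewers to catch the borderline cases, platforms are stuck choosing which kind of error to accept.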

Wood: It’s interesting. It seems like potentially some of the limitations, but maybe also some of the efficacy of these tools, might come into clearer focus now.

Stamos: Right, especially since the Christchurch [New Zealand] shooting, [where] there was a huge push for more content moderation. Truth is, machine learning is not that smart. We’re going to see a lot more content moderation that does not take in the subtleties of human communication — of comedy, of sarcasm, of repeating some kind of phrase because you’re criticizing it. All of those things are classically very difficult for these companies to deal with. They’ve had to, in those situations, punt that out of the machines to humans. But now that those humans are working from home, there’s a real trade-off between people’s privacy and doing a better job with content moderation.

Wood: What do you think this will look like? Will they have to err on the side of more intensive censorship, for lack of a better word?

Stamos: I think every platform is going to have to make its own calculation. To be frank, I think politicians and the media are going to have to be more realistic about what kind of speech control we want at scale. Truth is, there’s been a moral panic around tech platforms, especially social media, for the last couple of years: when human problems are reflected online, the immediate response from politicians and the media is, “We want the tech companies to fix it.” It turns out that asking these companies to do that level of control of speech in a complicated society like ours has downsides. That was always true, but the fact that these companies are now having to deal with these issues short-handed, with far less human intervention, is, I think, going to bring that into much sharper relief.

Wood: Absolutely. If you see platforms just basically say, “OK, wholesale, we will have to block all information,” nobody will be happy. But clearly, if misinformation gets through and lives are lost as a result, that ends up being a problem for platforms also. Is this a lose-lose?

Stamos: I think there is no way that you can solve societal problems by making the speech that reflects those problems go away. The other core issue here is that in a number of these areas, but especially coronavirus, a lot of the misinformation is coming from public officials. In a situation where the president of the United States is saying things that aren’t true, the entire media and social media environment is going to struggle with how to deal with it.

This is actually the other side of the coin of the discussion of whether the media should still carry Trump’s news conferences live. Anything the president says from the podium in the White House is clearly newsworthy, but if it’s both newsworthy and untrue, or newsworthy and harmful, what is the responsibility of the media, and then of the social media intermediaries, to put more information around it and wrap it in context?

So far, what they’ve been trying to do is to counter that elite misinformation with direct links to sources like the CDC. As long as, hopefully, there are parts of the government that are still somewhat independent and sharing information that is trustworthy and reliable, the platforms still have the option [to], next to a Donald Trump Facebook post or tweet, also have a link directly to the CDC, and then allow people to make their own decision.

Wood: Where do ad tech and advertising mechanisms fit into this conversation, if at all?

Stamos: When you look at disinformation as a problem overall, you have to deal with it in a different way for different kinds of products. The online product that is most risky is advertising, and that’s for two reasons. One, advertising allows people to trade money for speech amplification. The second is, advertising is one of the only ways that you can put information in front of somebody who did not ask to see it. The number one determinant of what is on somebody’s Facebook newsfeed is who their friends are and who they follow. On Twitter, it’s who you’ve decided to follow — you’ve made an affirmative decision that these are the people [you] want to hear from. Advertising bypasses all that. It allows people with whom you have no relationship to put information on your screen. That can be incredibly powerful for commercial purposes, but it can also be very risky from a disinformation perspective.

In the case of coronavirus, it’s a little bit easier than the general political problem, in that there’s a handful of terms you can ban. What a number of companies have done is, by default, ban any use of certain words in their advertising. Although what you see now is that advertisers are coming up with euphemisms: people will talk about “the illness” or “the virus” and not say coronavirus or COVID-19.

There is an interesting question on the commercial platforms, like the eBays and the Amazons, of what the appropriate response is here. It is not illegal to sell masks, and it is not illegal to make a profit off of it, except in certain circumstances, but it might be morally reprehensible in some ways. Although, medical professionals are also getting [personal protective equipment] and masks via some of these platforms. I think it’s actually a very difficult balancing act: do you allow normal commerce to happen, even in a situation where the things being sold should be rationed and given to very specific people? Overall, there doesn’t seem to be very much legal guidance. I think this is the place where you actually want states and the federal government to step in, because there are a lot of different ways you can sell stuff online, and without guidelines across multiple companies, anything any individual company does is going to be ineffective.
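As a rough illustration of the evasion problem Stamos mentions, here is a toy denylist check (a hypothetical sketch, not any platform’s real ad-review system):

```python
# Toy version of a default keyword ban in ad review. A literal string
# match blocks the banned terms but cannot catch euphemisms.

BANNED_TERMS = ("coronavirus", "covid-19", "covid")

def ad_allowed(ad_text: str) -> bool:
    """Reject any ad that literally contains a banned term."""
    text = ad_text.lower()
    return not any(term in text for term in BANNED_TERMS)

print(ad_allowed("Masks that protect against coronavirus!"))  # False: blocked
print(ad_allowed("Masks that protect against the illness!"))  # True: euphemism passes
```

A check like this is why ads about “the illness” or “the virus” sail through while the literal terms are blocked.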

Related: Artists like John Legend are using Instagram Live during the coronavirus outbreak to perform mini concerts. (Sarah Morris/Getty Images)

Related links: More insight from Molly Wood

There’s a good story on NBC about the Reddit message board r/coronavirus, which has now grown to about 2 million members. The 60 volunteer moderators include infectious disease researchers, computer scientists, virologists and even doctors and nurses. I’ve seen several stories popping up over the past few weeks about Wikipedia being a source of trusted information in this time of fast-flying rumors and home remedies and panic. I did a similar story earlier this year about Wikipedia, based on the idea that when the incentive is information and not advertising or engagement, the product can be better.

Also watching

Just in time for Amazon to start hiring like crazy to fulfill orders during the pandemic, warehouse workers there just won the fight for paid time off — not even sick leave, but actual vacation. A California-based labor group has been pushing for the coverage since December, after discovering that the Amazon employee handbook described benefits the workers weren’t getting. Warehouse workers are eligible for the PTO starting this week. Their new battle: they say Amazon isn’t doing enough to protect warehouse workers from catching the coronavirus, and they’re asking for face masks, hand sanitizer, hazard pay, child care and time to wash their hands.

While that is happening, the Wall Street Journal reported Tuesday that top U.S. executives, including Amazon CEO Jeff Bezos, sold off a lot of stock since the beginning of February as the outbreak began to intensify but before the markets started to tank. Bezos sold almost $3.5 billion worth of Amazon stock. He sold almost as much stock in the first week of February as he sold in the previous 12 months. 

Finally, a story that made me feel better, from the Washington Post. My phone informed me on Sunday that my daily screen time average had gone from about four hours to over seven in the course of the previous week, and apparently this is a thing. People all over Twitter are joking about their horrifying screen time reports during quarantine. While I agree that it’s time to put the phone down for both my mental health and my right wrist, I also have to find a way to turn off that little nag — because I am not shaming myself for screen time during the apocalypse.


The team

Molly Wood Host
Stephanie Hughes Producer
Daniel Shin Producer
Jesús Alvarado Associate Producer