Social media takes baby steps in dealing with hate speech. Time to grow up?
Jul 24, 2020

While Facebook backs a relatively hands-off approach to speech, Twitter has taken down thousands of accounts related to the conspiracy group QAnon.

Online hate speech has gone way up since the police killing of George Floyd in May. According to an analysis out this week from the company DoubleVerify, hate speech in the form of inflammatory posts has increased by nearly 40% around the country. And while Facebook continues to advocate a relatively hands-off approach to speech, Twitter this week took down thousands of accounts related to the conspiracy group QAnon, saying it will take action on accounts that could “lead to offline harm.”

Offline harm is something researchers have warned about for years. I spoke with Dipayan Ghosh, co-director of the Digital Platforms & Democracy Project at Harvard. He said this is all still moving way too slowly. The following is an edited transcript of our conversation.

Dipayan Ghosh (Courtesy Ghosh)

Dipayan Ghosh: I don’t think that [social media companies] have the incentive to do so until public sentiment rises up and they’re almost forced to do something. That’s just not a good situation. I think it’s very harmful for society and for our public and democratic interest to have to wait until people get so angry at these platforms that they’re forced to act. I think we just need a better regulatory system overseeing them.

Molly Wood: There are people getting mad at platforms, but it also sounds like there’s been a marked increase in hate speech since the protests over George Floyd’s killing started. What are the consequences of that, and isn’t that enough?

Ghosh: It should be. What we know is that there’s so much polarization, especially in this country, and George Floyd really illustrates that in high resolution. It’s really brought terrible people and terrible ideas to the forefront of our media ecosystem today. And while it’s engaging for many people to see such hateful content, what I would hope is that companies can start to reorganize and restructure the way they prioritize content and try to hold our attention, by thinking more about what people really want to see, not such hateful content.

Wood: Companies often say that this is a problem of scale. Facebook has specifically said this, that there’s just no way to stop this kind of speech at scale across the entire world. Do you buy that? Do you think that companies do or do not have the technical ability to do this?

Ghosh: Companies like Facebook absolutely have the technical capacity to be able to prevent the spread of hate. Let’s be real about this. Facebook pours money, lots and lots of money, into artificial intelligence and uses that artificial intelligence to profile us, determine our behaviors and our likes and interests and beliefs and routines. Now, could it use some of that cash toward developing artificial intelligence in ways that immediately detect hateful content? I believe it can, and I believe it should. I also believe that it’s not doing everything it can do to catch that content.

Related links: More insight from Molly Wood

Two links popped up on my Twitter feed almost simultaneously on Thursday. One said Facebook is creating new teams at Instagram and Facebook that will examine how bias in its products — the algorithms that suggest content or flag some kinds of speech for deletion — might be disproportionately affecting people of color. The second, a tweet by NBC tech reporter Olivia Solon, said maybe the release of that news in The Wall Street Journal was why Facebook declined to comment on her story about how the company had, since at least 2019, declined to deal with findings by internal researchers that its moderation system was discriminatory. The researchers told their bosses at Facebook that the automated moderation systems, on Instagram in particular, were 50% more likely to disable accounts belonging to Black users than white users.

Eight current and former employees told NBC they were told to stop conducting research on racial bias, and when Instagram made some changes to the moderation system, the company wouldn’t let them test it. Facebook didn’t deny that it stopped some researchers from continuing to work on racial bias, but said it was because their methodology was flawed.
