Marketplace Tech Blogs

What if social media treated extremist content like junk mail?

Molly Wood Mar 21, 2019
An email server shows alerts for spam, or unwanted emails, on a computer screen. Ian Waldie/Getty Images

As part of a series on extremism on the internet, Marketplace’s Molly Wood explores how people get radicalized online. Read about what responsibility, if any, social media companies have to curb this type of activity and how internet “echo chambers” can lead to faster radicalization.


Marketplace Tech is taking a look this week at how people get radicalized online. They may start out as trolls making tasteless jokes, get sucked deeper into increasingly radical communities and maybe take the step to commit violence in the real world, like the Christchurch shooter in New Zealand. What can the biggest social media platforms in the world do to interrupt this process? Host Molly Wood talked with Dipayan Ghosh, who used to work on global privacy and public policy issues at Facebook. Now he’s a researcher at the Harvard Kennedy School. He said yes, it’s hard for big platforms to both monitor huge amounts of content online and make difficult decisions about blocking or minimizing that content. But he said it’s also not that hard. The following is an edited transcript of their conversation.

Molly Wood: How hard is it for companies to stop extremist content?

Dipayan Ghosh: When they do decide that they want to throw their money behind some commercial effort, let’s say, for example, the development of [artificial intelligence] to better target ads, they’re throwing dollars behind that. I would argue that they should be doing this in exactly the same capacity, at exactly the same level, in the case of thinking about content moderation, especially as we are dealing with examples of tremendously damaging content.

Wood: What about the incentives question? Let’s take Google as an example. Google Search knows that not all information is equally important and has attempted to stop giving you junk based on its origin and trustworthiness. But YouTube, Facebook, Instagram and Twitter give you the thing that you really want, including potentially divisive propaganda, because it’s really engaging. At what point do you think companies will really confront that set of incentives?

Ghosh: It’s an excellent question. When we think about Gmail sorting out junk mail versus YouTube, both owned by Google, actually raising up junk news or misinformation or conspiracy theories, Google does not have an economic incentive in pushing fake news or misinformation or these other kinds of content at us over Gmail.

Wood: You’ve given us this great metaphor for this information, which is junk. Just in the past couple of weeks, we’ve seen platforms, led by Pinterest surprisingly, essentially classify anti-vaxxer information as junk, as information that is not good for people. Is that a baby step toward what we’re talking about?

Ghosh: These all represent baby steps, but there is nothing that’s really going to get to the industry until and unless we have a real economic incentive in place for them to actually take action.

Wood: Are you saying the engagement model just got out of hand? It just went too far and people didn’t necessarily anticipate how extreme it would get?

Ghosh: That’s precisely right. We don’t have to break down this business model entirely, but we can use competition reforms to push back on the engagement factor of these platforms and their implicit power in the market, and privacy reforms to push back on their uninhibited data collection to build behavioral profiles of us. I think that kind of a regime can start to get us coming back to a more secure American democracy.

Wood: Do the companies recognize this? When you talk to your former colleagues, is there an internal debate happening about this? Are the companies feeling increasingly like this is their responsibility?

Ghosh: Absolutely. I think there is no doubt that the industry understands the public’s discomfort and distrust. I think that the industry is really hearing it loud and clear. If they don’t change themselves, if they don’t voluntarily take action to really do what’s right here, then the government is going to come in. We’ve seen this, not in the United States where politics is so crazy these days, but in the United Kingdom, where politics is [also] crazy, they have put out a series of white papers that are strident against this industry. In Germany, the government has pushed for the breakup of Facebook. If these companies do not recognize the problems that they have put forward to the public and come to the U.S. Congress ready to meaningfully negotiate at the table, they’re going to get broken up. They are going to get so ridiculously regulated by foreign markets that they’re going to then come back begging to the U.S. Congress to actually develop a more reasonable regulatory regime that the U.S. can then advocate for in other countries outside the U.S.


Related links: more insight from Molly Wood

Ghosh, who also advised the Obama administration on tech policy, made the point that these companies need to take some action before regulators take it for them. We may actually be closing in on that time. European regulators are now up to almost $10 billion in fines against Google, including one just yesterday over complaints that it unfairly tilts the playing field toward its own products. The European Union has made plenty of other moves to regulate tech giants. New Zealand’s prime minister said her government will look at social media’s role in radicalizing terrorists and spreading their messages. Earlier this week, the chairman of the U.S. House committee on antitrust law wrote an op-ed in the New York Times calling specifically for the Federal Trade Commission to investigate Facebook on antitrust grounds. Analysts warned investors that the company’s ad model could be in trouble because of the regulatory scrutiny. Pressure from journalists and activists hasn’t let up, either.

Facebook this week also said that it would limit ad targeting options in ads for housing, jobs and credit services. That’s after it was accused in five separate cases of violating civil rights laws by letting advertisers exclude people from seeing ads based on their gender, age or ethnicity.

People are starting to have real conversations in public about whether we should allow targeted advertising at all. This came up in a panel I hosted recently in Seattle, and there’s an op-ed in Bloomberg about it this week. Change or be changed, apparently.
