Marketplace Tech Blogs

What if social media treated extremist content like junk mail?

Molly Wood Mar 21, 2019
An email server shows alerts for spam, or unwanted emails, on a computer screen. (Ian Waldie/Getty Images)

As part of a series on extremism on the internet, Marketplace’s Molly Wood explores how people get radicalized online. Read about what responsibility social media companies have (if any) to curb this type of activity and how internet “echo chambers” can lead to faster radicalization.


Marketplace Tech is taking a look this week at how people get radicalized online. They may start out as trolls making tasteless jokes, get sucked deeper into increasingly radical communities and maybe take the step of committing violence in the real world, like the Christchurch shooter in New Zealand. What can the biggest social media platforms in the world do to interrupt this process? Host Molly Wood talked with Dipayan Ghosh, who used to work on global privacy and public policy issues at Facebook and is now a researcher at the Harvard Kennedy School. He said yes, it’s hard for big platforms both to monitor huge amounts of content online and to make difficult decisions about blocking or minimizing that content. But, he said, it’s also not that hard. The following is an edited transcript of their conversation.

Molly Wood: How hard is it for companies to stop extremist content?

Dipayan Ghosh: When they do decide that they want to throw their money behind some commercial effort, let’s say, for example, the development of [artificial intelligence] to better target ads, they’re throwing dollars behind that. I would argue that they should be doing this in exactly the same capacity, at exactly the same level, in the case of thinking about content moderation, especially as we are dealing with examples of tremendously damaging content.

Wood: What about the incentives question? Let’s take Google as an example. Google Search knows that not all information is equally important and has attempted to stop giving you junk based on the origin, the trustworthiness. But YouTube, Facebook, Instagram and Twitter give you the thing that you really want, including potentially divisive propaganda, because it’s really engaging. At what point do you think companies will really confront that set of incentives?

Ghosh: It’s an excellent question. When we think about Gmail sorting out junk mail versus YouTube, both owned by Google, actually surfacing junk news or misinformation or conspiracy theories, Google does not have an economic incentive to push fake news or misinformation or these other kinds of content at us over Gmail.

Wood: You’ve given us this great metaphor for this information, which is junk. Just in the past couple of weeks, we’ve seen platforms, led by Pinterest surprisingly, essentially classify anti-vaxxer information as junk, as information that is not good for people. Is that a baby step toward what we’re talking about?

Ghosh: These all represent baby steps, but nothing is really going to move the industry until and unless we have a real economic incentive in place for it to actually take action.

Wood: Are you saying the engagement model just got out of hand? It just went too far and people didn’t necessarily anticipate how extreme it would get?

Ghosh: That’s precisely right. We don’t have to break down this business model entirely, but we do need competition reforms to push back on the engagement-driven design of these platforms and their implicit power in the market, and privacy reforms to push back on their uninhibited collection of data to build behavioral profiles of us. I think that kind of regime can start to bring us back to a more secure American democracy.

Wood: Do the companies recognize this? When you talk to your former colleagues, is there an internal debate happening about this? Are the companies feeling increasingly like this is their responsibility?

Ghosh: Absolutely. I think there is no doubt that the industry understands the public’s discomfort and distrust. I think the industry is really hearing it loud and clear. If they don’t change themselves, if they don’t voluntarily take action to really do what’s right here, then the government is going to come in. We’ve seen this, not in the United States where politics is so crazy these days, but in the United Kingdom, where politics is [also] crazy, they have put out a series of white papers that are strident against this industry. In Germany, the government has pushed for the breakup of Facebook. If these companies do not recognize the problems they have posed to the public and come to the U.S. Congress ready to meaningfully negotiate at the table, they’re going to get broken up. They are going to get so ridiculously regulated by foreign markets that they’re going to come back begging to the U.S. Congress to develop a more reasonable regulatory regime that the U.S. can then advocate for in other countries.


Related links: more insight from Molly Wood

Ghosh, who also advised the Obama administration on tech policy, made the point that these companies need to take some action before regulators take it for them. We may actually be closing in on that time. European regulators are now up to almost $10 billion in fines against Google, including one just yesterday over complaints that it unfairly tilts the playing field toward its own products. The European Union has made plenty of other moves to regulate tech giants. New Zealand’s prime minister said her government will look at social media’s role in radicalizing terrorists and spreading their messages. Earlier this week, the chairman of the U.S. House antitrust subcommittee wrote an op-ed in The New York Times calling specifically for the Federal Trade Commission to investigate Facebook on antitrust grounds. Analysts warned investors that the company’s ad model could be in trouble because of the regulatory scrutiny. Pressure from journalists and activists hasn’t let up, either.

Facebook this week also said that it would limit ad targeting options in ads for housing, jobs and credit services. That’s after it was accused in five separate cases of violating civil rights laws by letting advertisers exclude people from seeing ads based on their gender, age or ethnicity.

People are starting to have real conversations in public about whether we should allow targeted advertising at all. The question came up in a panel I hosted recently in Seattle, and there’s an op-ed in Bloomberg about it this week. Change or be changed, apparently.
