Free speech on the internet is: A) complicated B) complicated C) complicated D) all of the above
Oct 18, 2019

Big social platforms are trying to figure out how much free expression should be allowed online.

It’s been quite the week for speech online. Twitter introduced new guidelines on how to deal with world leaders and their tweets after Democratic presidential candidate Kamala Harris called on the platform to ban President Donald Trump. 

Candidate Elizabeth Warren bought a deliberately fake campaign ad on Facebook after Facebook said it won’t police political ads for lies. And Congress has been debating whether to get rid of Section 230, the federal law that shields online platforms from liability for what their users post. Then, on Thursday, Mark Zuckerberg gave a hotly debated speech at Georgetown University, saying Facebook would prioritize free expression above all. 

On this segment of “Quality Assurance,” I take a deep dive into platforms and the regulation of speech. I spoke with Daphne Keller, who is at Stanford Law School’s Center for Internet and Society. The following is an edited transcript of our conversation.

Daphne Keller: People, including most Americans, very much want platforms to take down a bunch of information that is legal. We want them to take down racial epithets, or bullying, or violent extremist content. And in most cases, that’s not illegal. It’s protected by the First Amendment. We wind up in a situation where we are asking platforms themselves to become the creators and enforcers of a new set of rules.

Molly Wood: In the past week, we’ve had obviously Mark Zuckerberg giving this big defense of free expression. We’ve had Twitter saying, “We want to put as few limits as possible on speech by elected officials that we consider newsworthy.” Where are these guys right, and what are they leaving out?

Keller: I think the biggest problem is there’s no such thing as a right answer on the questions of political speech or speech about sex, speech about race, speech about religion. We, even within the U.S., have never had a consensus about what the right rules should be. There’s certainly no such thing as a global consensus. I think we’re in a moment of some political magical thinking that if we yell at platforms enough, they will arrive at the right and responsible outcome, and we’ll all agree on what that is.

Wood: You have had a version of this job before. You were once associate general counsel for Google, and part of what you did there was help make decisions about whether to take down content. Now that you’re on the outside, from your perspective, is it possible for platforms at this scale to effectively regulate speech online? Should it be their job?

Keller: I think unavoidably they will seek it out as a job, because the market is telling them to do that; the market is saying, and the press is saying, and regulators are saying, “We don’t want free speech mosh pits. We want curated content.” But there’s a real limit on how nuanced their enforcement can be. It’s easy to make the right judgment call about something like the Vietnam War-era photo that Facebook took down if you are somebody who’s familiar with that photo, and you’re taking time to think about it. But what Facebook needs is a set of rules that can be written down very concretely and that they can train a global workforce of tens of thousands of people to enforce even if those people have very different cultural backgrounds because they’re sitting in the Philippines or India. The inevitable result is the rules are going to be pretty crude. They’re going to be blunt instruments, and not very nuanced, and we’re not going to get anything like the perfect enforcement that people hope for.

Wood: Others have pointed out, though, that there is a lot of ground between speech getting posted, or even purchased via advertising, and speech getting amplified by bots, or algorithms, or some of the elements that are unique to the technology of these platforms. Is that maybe the real question here?

Keller: I think that’s a useful thing to look at, because you can imagine a set of rules that say, for example, Facebook can or should choose to host particular content but demote it in the news feed, or that YouTube can host certain videos but keep them out of the recommendations, or lower in the recommendations. That’s a useful toggle to play with when trying to correct problems. But if we think of this as a question of law and of what the government can require, the First Amendment problems with telling platforms to demote lawful speech are pretty much as bad as the First Amendment problems with telling them they have to take it down. I don’t think we should get too excited about the idea that we can regulate amplification in a way that avoids all of these problems.

Wood: To that point, what is the role of regulation? Mark Zuckerberg asked for it repeatedly at Georgetown. And what do you think it could look like in the future?

Keller: I think we should bear in mind that he is speaking to a global audience of regulators. The regulators who are likely to act soonest and pass the strongest laws are not American ones; they are lawmakers in the [European Union] who are moving very quickly to regulate platforms. I think we should expect more regulations coming out of Europe, and that, much like the [General Data Protection Regulation] and other recent internet regulations, they will influence global behavior by platforms. That may have good and bad consequences. A lot of people think the GDPR, for example, created important privacy protections that they wish we had in U.S. law. But if regulators create requirements that only big, entrenched companies can afford to comply with, that will be a real problem for the internet going forward. If there are laws that effectively require you to hire 20,000 moderators, or to spend $100 million building a filter, which is what YouTube did with Content ID, those are not things that a newcomer can do. So regulation that is designed for Facebook, YouTube or Twitter is not necessarily regulation that is good for the global internet, or for competition and innovation going forward.

Wood: One of the things we have been talking about in the U.S., and there were hearings just this week, is Section 230. Can we play a little bit of a thought experiment game and talk about what things would start to look like if, for some reason, Section 230 protections went away?

Keller: We have a little bit of a sandbox example of this from a law called FOSTA, which passed in 2018 and amended Section 230 just for claims to do with prostitution or sex trafficking. It was a very well-intentioned law, but it was drafted very broadly, and smaller platforms felt like they just didn’t know what they needed to do to be safe. We saw a swath of policy changes. Craigslist, for example, shut down several of its forums, including the therapeutic services forum, where massage therapists and so forth posted listings. Tumblr greatly changed its policies on sexual content and erotica, and a lot of long-standing erotica sources on Tumblr went away. I think what we can expect is that for companies that can’t tolerate the legal risk of potentially being liable for their users’ speech, the wisest, cheapest and safest thing to do is to just shut down a whole lot of speech: shut down open forums, or be very conservative and take down anything that might create legal risk.

I think we would expect to see even less investment in smaller platforms or new companies that try to occupy this space and become forums for online speech. I should add there’s a huge concern that doesn’t get enough airtime: when platforms are scared and overdo it in taking down speech, or turn to machines and AI and robot magic to take down speech, the harm disproportionately falls on people from minority populations, such as people who are speaking Arabic. There are some alarming studies about language filters disproportionately tagging African American English as toxic. There’s an equality issue with platforms cracking down on speech that goes beyond the First Amendment.

Related links: More insight from Molly Wood

We referenced Mark Zuckerberg’s speech quite a few times. I think it’s worth checking that out, along with a couple of stories about the speech itself and an update on this week’s testimony about Section 230.

Daphne Keller wrote a good piece in The Atlantic last month about her point that in some ways, public pressure is pushing these companies to try to regulate speech on their platforms in ways they otherwise wouldn’t. She writes that “we should be realistic about who is likely to call the shots as private, for-profit platforms assume greater roles in restricting online speech.” She goes on to say it’s going to be governments, democratic ones in the best case but probably less democratic regimes or China, as well as advertisers, business partners and potentially legislators.

The point is, we’re all frustrated and baffled by the scale and power of these platforms, but let’s not hand them more power as we try to fix their problems.

