Here’s the real deal with Section 230
Dec 7, 2020


A lot of what people say they don't like about Section 230 really has nothing to do with the law.

For years, President Donald Trump has been calling for the repeal of Section 230, the provision of the 1996 Communications Decency Act that shields an online publisher or platform like YouTube or Facebook from being sued over things posted by other people. The president recently said he would veto the country’s annual defense spending bill if it didn’t include a repeal of Section 230.

But, see, a lot of what people say they don’t like about Section 230, like claiming that social media platforms censor conservatives, really has nothing to do with that law. I spoke with Jeff Kosseff, author of a book on Section 230, called “The Twenty-Six Words That Created the Internet.” The following is an edited transcript of our conversation.

Jeff Kosseff (Photo courtesy of Kosseff)

Jeff Kosseff: I think that people are upset at Big Tech. They’re upset with these companies for a lot of good reasons, and Section 230 is, really, a proxy for this anger, even when Section 230 is not necessarily what is responsible for what they’re angry about.

Molly Wood: I mean, it seems likely that when platforms moderate less, no one’s happy; when platforms moderate more, no one’s happy. And then, in some ways, none of that has anything to do with existing regulation.

Kosseff: That’s exactly right. And you really identify a source of the problem here, in that you’re never going to satisfy everyone. And then, you scale that up to a platform that has billions of users. There’s never going to be a time where people say, “Oh, we’re happy with how platforms are moderating.”

Wood: What do you think will happen? We have an incoming president. President-elect Biden has talked about revoking Section 230, which is not something a president can do on his own, as far as I understand. There are something like 20 bills before Congress that attempt to change the Communications Decency Act in some way. What do you think 2021 looks like for moderation and liability online?

Kosseff: It’s so hard to predict because we have these two very different criticisms of the platforms, and we have different proposals that get at these different criticisms. In general, and this does vary a bit, but conservatives will generally say that they want the platforms to be politically neutral in how they moderate. The Democrats, I would say, are more likely to say that there should be more moderation of what they view as harmful content. There haven’t been as many proposals from the left as there have been from the right in terms of actual legislation, but generally, the idea is that they want to at least limit Section 230 protection to platforms that act more responsibly in how they moderate. And when you step back and look at those two different goals, it’s difficult to see how you can reconcile them into a single solution that would satisfy both sides.

Wood: In this kind of moment right now, where you have this debate that is ostensibly about Section 230, and might not actually be, you have some politicians, most notably President Trump and conservatives, talking about how without Section 230 they would be able to speak more freely on social media, without companies taking down or labeling posts. Can you walk us through how that doesn’t add up?

Kosseff: I think that if Section 230 were to be repealed, you would actually see more moderation by the platforms because they would suddenly face substantially more potential liability for the content that their users post. So if something’s really right on the border of possibly being defamatory, or having some other legal issue associated with it, without Section 230 a platform is most likely going to err on the side of caution, of taking it down rather than keeping it up.

Related links: More insight from Molly Wood

For a longer explainer with Kosseff, head on back to January of 2020 when Kai Ryssdal and I did an episode of Make Me Smart all about Section 230.

Let’s dig a little more into what this whole free-speech-on-the-internet thing really means, versus what people seem to think or say it means when they want to say whatever they want online and never get in trouble for it. One, Kosseff said that if Section 230 went away entirely, it would actually cause platforms to moderate way more heavily because they’d be afraid of getting sued all the time. So that’s probably not what people actually want when they advocate for getting rid of it. Two, if you’ve been following the development of Parler, the so-called free-speech app that right-wingers are flocking to, which we talked about a couple of weeks ago, you’ll have seen that the app is being flooded with porn and escort services. That was not only inevitable but also shows why totally unmoderated platforms often end up being kind of bad for business. Free apps rely on advertising, and what advertisers hate, as evidenced by boycotts on YouTube and Facebook, is having their brands show up next to the bad stuff, like terrorist content, hate speech and porn.

The real reason social media has gotten cleaner and tidier and less free speech-y over the years is the business model, friends. And if Section 230 is really a long, messy story about unintended consequences, on Friday The New York Times reported on a new privacy regulation in Europe set to take effect Dec. 20. It would bar social media services from using software that scans for images of child pornography, or for evidence of grooming by sex traffickers, in emails and other messages sent online. The theory, of course, is that scanning private communications, even for child pornography, is an invasion of privacy. The Times reports that between January and September of this year, more than 2 million reports of online child sex abuse came from Europe, and almost all of them were found through automated scanning.

Facebook is actually the world’s biggest reporter of child sexual abuse imagery. The social network said it would stop that work entirely in the EU if the law goes into effect, and anti-trafficking groups worry that all social networks will stop such scanning out of fear of similar regulations worldwide.


The team

Molly Wood Host
Michael Lipkin Senior Producer
Stephanie Hughes Producer
Daniel Shin Producer
Jesús Alvarado Associate Producer