The debate over how social-media platforms deal with content hit a new peak this week after Twitter fact-checked several of President Donald Trump’s tweets. That prompted Trump to sign an executive order trying to limit the legal protections for social-media platforms. Currently, under Section 230 of the Communications Decency Act, internet platforms aren’t legally responsible for most content posted by users.
That is the subject for Quality Assurance, where I take a closer look at a big tech story. I spoke with Jeff Kosseff, author of “The Twenty-Six Words That Created the Internet,” a book on Section 230. I asked him what would happen if Section 230 went away. The following is an edited transcript of our conversation.
Jeff Kosseff: We don’t know exactly what will happen without Section 230, because the modern internet has only ever existed with Section 230 in place. One possibility, if Section 230 were repealed altogether, is that platforms would scale back or entirely eliminate user content on their sites because they just don’t want to take the risk. Another possibility is that they might still allow user content but do absolutely no moderation. We also don’t know how courts would interpret First Amendment protections without Section 230, because we haven’t really seen that applied to social media.
Molly Wood: Is some of what we’re seeing now a consequence of not modifying Section 230 earlier?
Kosseff: I think what we’re seeing now is, frankly, the result of the platforms playing such an increasingly important role in society. In 1996, there were 40 million people worldwide on the internet. If you were kicked off of Prodigy, it would be inconvenient, you wouldn’t like it very much, but it wouldn’t be the end of your career or the end of your world. If you’re kicked off of one or two social-media sites, that could be your livelihood now. As the platforms have grown in importance, and also in size, there’s been a lot more attention paid to the rules that they set and how they enforce those rules.
Wood: Is it fair to say, though, that platforms have taken advantage of this protection?
Kosseff: I think that for the first 20 years of Section 230, it was treated as if it were written in stone. The platforms developed different practices. It’s hard to paint with a broad brush and even say “the” platforms, because some are really thoughtful in exercising their moderation techniques and come up with innovative ways to serve their users while not overly blocking content. Others, for a while, frankly, were not transparent and often turned a blind eye to some real harms that were occurring on their sites, and they acted basically like, well, we have Section 230, and that will protect us. That changed most significantly in 2018, when Congress amended Section 230 substantively for the first time in its history to allow a certain amount of liability for sites that facilitate sex trafficking.
Wood: What do you think comes next? What might Congress do? Do you think there’s a likelihood that this issue will actually be taken up?
Kosseff: I think there is a very real chance that in the next few years Section 230 will either be repealed entirely or amended in such a way that it has basically the same effect as a repeal. Section 230 is a very attractive target because it is seen as a huge privilege for this very large industry that people are upset about.
Wood: Do you think there will be modifications between now and, as you said, a full repeal or something close to it?
Kosseff: I think there might be. There’s still been talk about particular harms that people want to address. But at a certain point, it starts to look like a piece of Swiss cheese, and it’s not as meaningful if you have a lot of different carve-outs. I think there are a number of people who are proposing different, larger reforms or entire repeals of Section 230.
Related links: More insight from Molly Wood
In case you were wondering how Facebook is thinking about all this, CEO Mark Zuckerberg made a little media tour this week to say that no internet platform should be the “arbiter of truth,” which was interesting because, as several commenters pointed out, Facebook has its own fact-checking operation. And as Ars Technica pointed out, just two weeks ago Zuckerberg was actually bragging about the site’s success at labeling 50 million pieces of misleading content related to COVID-19. The story notes that Facebook has, in the last two months, actually removed a Trump campaign ad that contained misinformation about the 2020 census. It labeled another Trump campaign video “partly false.” But hey, never let a crisis go to waste when you want to paint yourself as the good guy and the business model is on the line, right?
Also, you may recall the other day when Twitter made this fact-checking move, I basically said this wasn’t going to end well. OneZero’s Will Oremus has a piece looking at the thinking inside Twitter around these labels. A company spokesman told Oremus that they knew “all hell would break loose.” The company probably did not think, however, that the president himself would target a specific employee, someone who is not a top executive at the company, by name and handle on Twitter, leading to death threats against that person. In some ways, it highlights the nightmare situation that social-media platforms are in, but it also highlights how absolutely weird, illogical and ultimately probably futile it is to ask the CEOs of companies to solve this problem at all.
If I agree with Zuckerberg on one thing, it’s this: Why are we asking him or Jack Dorsey to somehow rein in the president if he spreads false, potentially damaging or even, according to one New York Times opinion piece written by a lawyer, potentially slanderous information? Why are we asking the CEOs of public companies to do more than Congress or the American electorate? You have to admit, that’s weird.