When it comes to stopping misinformation, it’s not the speech. It’s the algorithms.
Apr 30, 2021


Algorithms help misinformation, conspiracy theories and fake news spread quickly.

What you see on social media isn’t there by accident. It’s there because of an algorithm: a program that uses data to decide which content will keep you online for the longest possible time, so that you’ll see and click on more ads. In business terms, these algorithms are right up there with the secret recipe for Coke, but they’re also the secret formula that helps misinformation, conspiracy theories and fake news spread so fast and so far.
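The dynamic described above can be sketched in a few lines of Python. Everything here is illustrative: the signals, weights and function names are assumptions for the sake of the example, not any platform’s actual system, which involves thousands of tuned signals.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int          # hypothetical engagement signals
    shares: int
    watch_seconds: float

def engagement_score(post: Post) -> float:
    # Illustrative weights only: reward whatever keeps people on the site.
    return 1.0 * post.clicks + 5.0 * post.shares + 0.1 * post.watch_seconds

def rank_feed(posts: list[Post]) -> list[Post]:
    # Surface the content predicted to keep you scrolling the longest,
    # regardless of whether it is true.
    return sorted(posts, key=engagement_score, reverse=True)
```

The point of the sketch is that nothing in the scoring function asks whether a post is accurate; a false but highly shared post outranks a true but quiet one by design.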

Congress this week had a hearing to try to understand this dynamic and, maybe, to try to regulate it. It’s a topic for “Quality Assurance,” where I take a second look at a big tech story. I spoke with Ina Fried, chief technology correspondent for Axios. The following is an edited transcript of our conversation.

Ina Fried. (Photo courtesy Axios)

Ina Fried: I have an 8-year-old in second grade, and a lot of times for the math problem, it’s not enough to just have the right answer. You have to show your work. And a lot of people feel algorithms should work the same way — that the only way to ensure that they’re fair is to explain the methodology they use to come to a decision. It’s harder to create an explainable algorithm, but it is possible.

Molly Wood: We had this week where Facebook reported absolutely astronomical revenue, so obviously, this is working. Do you think that if we get to a point where Congress is starting to circle closer and closer to this kind of central issue, that we’ll see more pushback from companies, because this is the moneymaker?

Fried: I think we’ll see companies really try and join the process, which might be a better path forward anyway, when you consider the odds of Congress actually coming up with rules on its own without the participation of the industry. If the federal government doesn’t act, all it takes is California or another big state to pass its own rules, and the platforms will basically abide by them nationwide, just because it’s not really possible to craft one set of rules for a single state and another for the rest of the country.

Wood: It sounds like you don’t think it’s likely that we’ll actually see federal regulation come out of this.

Fried: Come on, these guys can’t agree on anything. I’m always skeptical of federal legislation being the answer to everything, just because Congress is so divided.

Wood: I think there’s value in just seeing the recognition that this is a conversation about more than just what people say online. It’s about understanding how the technology built by these companies plays a huge role in pushing that information far and wide and creating virality.

Fried: Definitely. The platforms use the power of the algorithm all the time for business reasons, but they’ve been very hesitant to use the algorithm as an enforcement mechanism. I mean, if you think about problematic content that isn’t quite bad enough to ban — if you made it so that the only way someone saw that content is if they specifically went to the user feed of the person who posted it, almost nobody would see it. So at least if we’re having this conversation, that’s a step in the right direction.
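The enforcement mechanism Fried describes — leaving borderline content up but removing it from algorithmic distribution — can be sketched simply. The field names and flagging logic here are hypothetical, chosen only to make the idea concrete.

```python
def visible_posts(feed, flagged_ids, viewing_profile_of=None):
    """Hide flagged posts from algorithmic feeds without deleting them.

    A flagged post still exists, but it appears only when someone
    deliberately visits the poster's own page.
    """
    out = []
    for post in feed:
        if post["id"] in flagged_ids and post["author"] != viewing_profile_of:
            continue  # drop from general feeds, don't delete
        out.append(post)
    return out
```

In this model the speech itself is untouched; only its amplification changes, which is exactly the distinction the hearing circled around.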

Wood: And we have seen them twist the dials recently. Is that going to make it harder, going forward, for them to argue that they can’t?

Fried: I think it is and, again, I think that’s a good thing. I think they are going to face criticism when they do that, because people are wise to the technique now. When the right complains about shadow banning, this is actually what they’re talking about — when content that they like gets limited in the algorithm. And there’s a big argument to be made that free speech is the right to say something. Nowhere in the Constitution does it give you a right to be amplified by a private company.

Tristan Harris, co-founder and president at the Center for Humane Technology, testifies remotely during a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on April 27 on Capitol Hill in Washington, D.C. (Al Drago/Pool/Getty Images)

Related links: More insight from Molly Wood

Here’s Ina’s reporting on this hearing and a piece she wrote earlier this week on a coalition of advocacy groups that is pushing the Biden administration to create a task force to figure out how to stop disinformation campaigns without infringing on free speech. And as much as it represents good baby steps for Congress to have had this hearing, some watchers were not impressed this week. That included people like our friend of the show and regular guest Joan Donovan from Harvard, who actually testified, but said later that lawmakers needed to go a lot deeper to understand how these companies determine what to amplify, what data they use and how advertising plays into it all. During the hearing, Donovan told the committee that nothing less than democracy is at stake.

Earlier this year, researchers at the Stanford Program on Democracy and the Internet finished a study on what they referred to as Big Tech’s influence on information. They determined that the problem isn’t actually the size and reach of social media companies — it’s the algorithms. They propose a reform to Section 230 that would require companies to let users filter their news feeds using transparent, third-party algorithms they call “middleware.” They imagine that software from lots of different companies could verify the accuracy and value of sources and, they wrote, “take over the editorial gateway functions currently filled by dominant technology platforms whose algorithms are opaque.” Which is actually quite fascinating. At least it’s a different way of thinking about things.
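A minimal sketch of the middleware idea: the platform supplies the raw posts, and a user-chosen third-party filter decides what actually reaches the feed. The filter shown here (an allowlist of trusted sources) is a made-up example; the researchers’ proposal contemplates many competing middleware providers with their own methods.

```python
from typing import Callable

Post = dict  # e.g. {"source": "example.com", "text": "..."}
Middleware = Callable[[list[Post]], list[Post]]

def trusted_sources_filter(trusted: set[str]) -> Middleware:
    # One hypothetical middleware: pass only posts from vetted outlets.
    def apply(feed: list[Post]) -> list[Post]:
        return [p for p in feed if p["source"] in trusted]
    return apply

def render_feed(platform_feed: list[Post], middleware: Middleware) -> list[Post]:
    # The platform delivers posts; the user's chosen middleware performs
    # the editorial gatekeeping that opaque platform algorithms do today.
    return middleware(platform_feed)
```

The design point is the separation of roles: distribution stays with the platform, while the editorial decision moves to transparent code the user can choose, inspect, or swap out.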


The team

Molly Wood Host
Michael Lipkin Senior Producer
Stephanie Hughes Producer
Daniel Shin Producer
Jesús Alvarado Associate Producer