From QAnon to Russian propaganda campaigns to COVID-19 myths, social media is unquestionably the vector for increasingly dangerous misinformation. But the big platforms are still trying to have things both ways.
They take credit for pro-democracy movements and Black Lives Matter but maintain that groups devoted to armed militias didn’t influence the shooting of protesters in Kenosha, Wisconsin. They pitch services to politicians that they claim can win elections but then say they’re not responsible for political speech.
With just weeks left until the U.S. election, we wondered: If the platforms all agreed overnight that disinformation is a threat to society and democracy, what would change?
Joan Donovan is the research director of the Shorenstein Center on Media, Politics and Public Policy at Harvard. First up: She said gaming Twitter should get a little harder.
“A place like Twitter needs to get a little bit more honest about how their broadcast system plays into the disinformation incentive structure,” Donovan said. “So if you’re just a small website, by and large, it’s very difficult for people to stumble across that. However, if you’re employing a little bit of automation, maybe some advertising, and you have potentially even paid off some influencers, you can make that disinformation scale and look organic.”
Then there’s YouTube, which is way more aggressive about suggesting things for us to consume.
“Systems like YouTube that depend on recommendations need to take a serious look at that and need to understand that if I hate-watch a QAnon video because someone sends it to me, or I click on it accidentally, the algorithm remembers that and it tries to serve me more of that content,” Donovan said. “Even if I didn’t like it, I can’t get out of that vortex. So I think that recommend [algorithm] needs an overhaul because it does tend to work in the favor of manipulators.”
Donovan says recommendations for Facebook events, pages and groups also need an overhaul. Facebook left up a “call to arms” event in Kenosha even after it was flagged at least 455 times, according to BuzzFeed, and only took it down after an armed teenager killed two protesters.
“We have to be attuned to the fact that groups like that don’t grow fast and become agile without technology. And that Facebook event page for that night bears a lot of responsibility for people knowing where to show up, what to bring and how to interact with each other,” she said. “This is a feature of the design.”
Donovan also said don’t ignore Instagram, where influencers are driving a lot of disinfo. And no, we don’t think any of this will happen absent a lot of regulation.
Related links: More insight from Molly Wood
There is more reading on this topic, um, kind of all over the internet lately, which is a good thing, I guess.
I like this Clemson effort, in particular, to create a quiz called Spot the Troll to help people figure out when they’re talking to a bot or a spam account that is deliberately trying to sow division or mislead you. I used to think I was pretty good at that, but the trolls have either gotten a lot better or everyone on Twitter is super-pissed off.
I talked with Joan Donovan for nearly half an hour and wanted to share just a little more of the conversation with you, mainly because researchers believe this is a real threat to the integrity of the election and ballots are going to start going out in just a few weeks. So, I asked her about the sense of urgency around solving this problem. The following is an edited transcript of our conversation.
Joan Donovan: You know, there was a lot of scenario planning in 2019, absent the pandemic, of how people were going to be able to vote and how people were going to engage with the election process. With the pandemic, there’s added emphasis on social media companies to really get this right. Because most of everyone’s information is being filtered through these platforms. So even people that would have been going door to door, organizing ride-shares for the elderly, are unable to do that. And so, tech companies now are not only the targets of groups that want to perpetrate information warfare and carry out influence operations, but they’re also the most important information conduit for groups that just want to have a fair election.
Molly Wood: So you’re saying that this scenario never accounted for the fact that social media, which was already a vector for disinformation or for good information, would become even more central.
Donovan: Exactly. And the thing that we know as researchers about disinformation and media manipulation is that it works because it plays on people’s outrage, and it plays on novelty. So some of the features of social media itself, where journalists really strive to be first out of the gate with new information so that they can be at the head of the pack and the top of the search results, are now being turned against them. And so the notion of what it means to wait in journalism to confirm something is up against viral misinformation that is really playing on that information void and the time that it takes for journalists to really know what it is that they’re writing about and be able to share that information with their audiences.
Wood: We also talked to a researcher about QAnon as really kind of an emerging cult. And one of the things that she said was that movements like that really thrive in this kind of perfect storm scenario, where not only do you have an active and thriving ability to push out disinformation, but you have a lot of people stuck at home with a lot of time and a lot of loneliness and desperation. I would assume that was also not in this scenario planning.
Donovan: There was, to be honest with you. As people were thinking about this moment, there was a conspiracy wild card. In the threat models and in the matrix of our thinking, we did know that something would happen. Pizzagate in 2016 is something that has become instructive and informative for researchers to understand how this attention to conspiracies within certain communities can strengthen the trust between these groups and also can help them to recruit. And I use “recruit” really loosely. Nobody’s like, “I’m a member of the Pizzagate conspiracy community.” But with something like QAnon, you do have folks that are outwardly saying, “Where we go one, we go all,” using these hashtags and trying to signal to others that they believe this, too. And because the infrastructure of QAnon and the networks were already in place during the pandemic, when the pandemic hit, it became an opportunity to tell us another story about the “deep state,” another story about collusion built on what are very coded anti-Semitic tropes about a global cabal. And that feature of social media that brings these groups together through the use of hashtags is something that has accelerated medical misinformation and really grown the ranks of people who are familiar with QAnon.
Wood: Do you think that we’re headed toward a flashpoint where people will re-recognize the value of information? Or do you think it will be a long, slow decline into little islands of personal truth?
Donovan: If you asked me this 10 years ago, I’d say the capacity for the internet to ensure that we have hyperlocal information networks is at an all-time high, if you think back to the days of indie media and how great some of the local citizen journalism was in those moments. However, because we’ve reached this obligatory passage point where information networks have become consolidated within these superlarge companies, we have to rethink data as currency. And we have to rethink data infrastructures if we’re going to build the web that we want, and we do have to figure out a public component to this infrastructure that doesn’t leave us at the mercy of social media companies to do all of this information wrangling. I’ve been a big proponent of trying to get platform companies to see the capacity to do content curation, using librarians. They keep talking about content moderators. I’m like, what about your curation process, so that when you do look for information on Google or on Facebook, you’re getting things that have been vetted, not things that are popular or new?