We’ve got bigger problems in November than foreign disinformation
The Democratic National Convention starts Monday, kicking the run-up to the November election into even higher gear. Representatives from big tech companies met with officials from the FBI and Department of Homeland Security last week to coordinate responses to malicious disinformation campaigns on their platforms.
I spoke with Alex Stamos, the former chief security officer at Facebook, who now directs the Stanford Internet Observatory. He also helped launch the Election Integrity Partnership. He said COVID-19 “pre-hacked” the election because so many states have had to change procedures on short notice. The following is an edited transcript of our conversation.
Alex Stamos: If you look at the example of the Iowa Democratic caucus, that was a situation in which honest mistakes around the software they were using in the tabulation process were then spun by partisans as evidence that the caucus was being stolen. And my fear is that we have that kind of chaos due to the very difficult situation that election officials are facing. But instead of that being seen through the lens of being honest mistakes and things that need to be fixed, it’s seen through the lens of being part of a grand conspiracy. And I’m really afraid that we’re going to end the election period, which might last weeks or months, with a significant portion of the country believing that it was stolen.
Kimberly Adams: Can you talk a bit more about the specific lessons that you’ve learned or your team has learned from watching elections in other countries that are informing the work that you all are doing heading into this election?
Stamos: The ability for domestic groups to do this kind of work is much, much greater than foreign adversaries. When you have disinformation coming from domestic groups, coming up with policies to stop it gets much more complicated. After 2016, all the major platforms came up with policies that target foreign interference. It is a lot harder to apply that domestically because coming up with coordinated messaging and amplifying it is exactly what political parties were built for. That becomes a huge policy challenge because it’s very difficult to fight against that without the companies putting their thumbs on the scale of what should be a democratic process.
Adams: A lot of big tech companies — Google, Facebook, Twitter, Microsoft and others — meet regularly with federal agencies about preparing for misinformation and what they’re doing about it. But you have tweeted that there are some key companies left out.
Stamos: That’s right. This is a series of meetings that actually started when I was at Facebook in the summer of 2018 in the run-up to the midterms, where we brought together the large tech companies and the relevant federal agencies, most specifically, in this case, the FBI and the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency. Those government agencies are part of what is called the Countering Foreign Influence Task Force, and they’re very focused on foreign disinformation. One of the interesting things about these recent meetings is that there’s a bunch of companies that aren’t listed. The social media landscape has changed significantly since 2016. And we now have a situation where there are foreign-owned companies, most specifically TikTok, that are actually relevant. And I think that’s one of the interesting challenges here, is how do you bring companies that have much more complicated ownership and geopolitical structures into this kind of collaboration that seemed very natural for American companies? It’s much harder for the FBI and DHS to collaborate with a company owned by ByteDance, which is a Chinese corporation.
Adams: And unlike, say, a meeting of the FCC, the Federal Communications Commission, these meetings are behind closed doors, as I understand it.
Stamos: So far, they have all been behind closed doors. All we have seen is kind of a coordinated paragraph press release that they all signed off on.
Adams: How do you feel about that?
Stamos: Well, this goes both ways. There does need to be the ability to coordinate privately because, unfortunately, there are a lot of domestic actors in the United States right now who would take any kind of work on disinformation as a political statement, and who will try to disrupt it. But that being said, especially in situations where tech companies are working with the government, I think there needs to be more public discussion of what the interfaces are there, what the legal requirements are that are being placed on the tech companies, and how we’re thinking about both the speech and the privacy aspects of a relationship between government agencies and the tech companies.
Adams: Facebook says it will do more to give users information about how to register to vote, where to get absentee ballots or where their polling place is. It’s going to be prominent on the website. But with the way a Facebook feed works, if that information is right next to your uncle posting about misinformation, what does that do to how you actually view the election?
Stamos: The most important disinformation always comes from friends and family. One of the other things you can see is that it is much more effective to push people to share disinformation with the people that trust them than it is to have fake accounts that are being operated internationally. I think that’s one of the things that we’re really going to have to struggle with this fall.
Adams: If your team and your partnership do find a case of misinformation, [and] it’s spreading like wildfire across social media platforms, say, a wrong election date, walk us through what you can do specifically to address that.
Stamos: Let’s say we were either given a tip or we found ourselves one tweet that said the election had been moved and that you shouldn’t go vote that day. So what we would do is do some initial looks to see whether or not this is something that’s been pushed by multiple accounts. And then we will send that information to the appropriate platforms so that they can take their own action based upon their own policies. At the same time, our analysts will then dive deeper into it and try to see, do we know about these accounts? Are these accounts fake? Are these accounts part of a known disinformation group? Can we provide any kind of attribution or knowledge about who is behind this?
Related links: More insight from Kimberly Adams
Speaking of Facebook and the difficulty of policing domestic political speech, The Wall Street Journal reported Friday that Facebook’s top public policy executive in India decided not to ban an Indian politician using hate speech, in part because he was part of the prime minister’s party. The politician, T. Raja Singh, called Muslims traitors and said Rohingya Muslims should be shot. Moderators at Facebook said that was dangerous. But the Facebook official said taking action could jeopardize the company’s business in India, according to the Journal. A Facebook spokesperson confirmed that the potential political damage was discussed, but denied it was the only factor in the decision not to ban Singh for potentially inciting violence. The spokesperson also said a ban could still happen.
Meanwhile, ProPublica is tracking multiple threats to the November vote in the U.S. for its Electionland project, like cybersecurity threats, including highlighting a new report from the Center for Election Innovation on the security of voter registration databases.