People see and absorb a lot of election information — and misinformation — on the web. But we are not all getting the same information about politics and policymakers, and certainly not from the same sources.
So understanding where people gather and communicate online can be crucial to understanding the political polarization in the United States, especially when some people are migrating to newer platforms that cater to specific political beliefs or content moderation rules.
Marketplace’s Kimberly Adams spoke with Kate Klonick, a law professor at St. John’s University, who studies online communities and speech. Klonick said people decide where they “live” online these days based on the rules of the platform.
The following is an edited transcript of their conversation.
Kate Klonick: A lot of that has to do with trust and who you trust with kind of your opinions and your information. I think that it also depends on whether you want to have a kind of walled-garden experience speaking to people that you have actively friended behind the default privacy settings of someplace like Facebook, versus someplace like Twitter, which is typically mostly out in the open. And you are seeing a rise in these fringe conservative platforms that have developed new types of rules that are different from the rules of Facebook and Twitter in terms of content moderation.
Kimberly Adams: Are there similar types of siloed platforms on the left?
Klonick: You are seeing the predominance of certain types of people congregating in places like Facebook groups within a larger ecosystem. But no, you’re not seeing people on the left breaking off and kind of coming up with their own social media platforms.
Adams: If people are living online in these different spaces and having these different conversations, depending on your age and your politics, how does that influence what information or misinformation people get exposed to?
Klonick: I think that there’s generally going to be always some amount of mis- and disinformation that people are exposed to. But there is certainly going to be a rise in reinforcement of those types of content, and [they will] foment the exact type of conspiracy theories and the exact type of negative content or calls for extremism that we’re so upset with right now.
Adams: What about platforms that have a specific political bent, like former President [Donald] Trump’s Truth Social? What does it do to the entire online ecosystem when people are siphoning off into those kinds of spaces?
Klonick: I mean, I think that what you’re seeing is also, [to] use the example of [Elon] Musk and Twitter right now, you’re seeing people stay on Twitter, but add in time that they’re spending on those new, very conservative types of platforms, or a new federated-verse type of platform, like Mastodon or something like that. Network effects are very sticky. People will stay where other people are and where they can be heard and where they can get their information and content. But I do think that when you do have people go into these kinds of siloed groups or you ban someone from a platform, you diminish their ability to reach other people, and so diminish their effect of kind of spreading bad ideas. But [that] also means that it’s harder to intervene. You don’t know what they’re planning, you don’t know what they’re going to do, you don’t know if they’re getting more extreme. And so we’re not going to be able to watch them as closely to kind of have an idea of what kind of is coming and what the worst things that people are saying are.
Adams: I suppose the classic example of that is Jan. 6, which was, you know, planned pretty openly on some of these platforms, but not necessarily seen by as many people as probably needed to [see it].
Klonick: Exactly. And it’s a big debate in trust and safety online, whether or not you keep them on the platform where you can kind of watch them or you take them off and so you diminish the megaphone that you’re giving them. The good news is that mostly, even though these are small fringe groups and they don’t have as much moderation on these new platforms, they’re still public, and so people can still sign up for them. It takes more work. And it takes more time to kind of build those systems in from the outside when the platforms aren’t going to be doing that type of watchdogging themselves.
Adams: When you talk about these different online spaces with different rules and codes of conduct, it’s almost like different rules of law, the way countries differ from each other. Does this lead to an internet with maybe more rigid borders where you can’t move so freely from one online platform to another?
Klonick: Yeah, this is a funny story for those of us who study the history of internet law and technology. In the late ’90s, there were two scholars, David Johnson and David Post, who basically predicted that if you erase the geographical boundaries and the geographical limitations of individuals and communication, people will just choose and populate certain platforms because they like the rules there. They like the governments that kind of get created on them. The other thing that we’re seeing, which seems counterintuitive to that, but we’re seeing it at almost the exact same time, was a prediction by Tim Wu and Jack Goldsmith [that] the nation-states will in fact use geolocation technology to raise borders up into cyberspace, and then the traditional notions of geopolitics will play out along that, about what people can say and what people can do in certain spaces. We’re seeing the [European Union] and China make completely different rules for the people within their state about how they can use the internet. And we’re seeing the rise of all of these new platforms that have different markets of rules and that are trying to get people to choose different types of platforms for that reason. So I think it’s a fascinating moment.
Related links: More insight from Kimberly Adams
A lot of young Generation Z voters will cast their ballots in this midterm election, but where does the next generation of voters live online?
A survey from the Pew Research Center suggests American teenagers 13 to 17 years old are spending much of their time on YouTube, TikTok, Instagram and Snapchat.
Facebook and Twitter were on the low end — only 32% of teenagers today said they use Facebook, while just 23% use Twitter.
Speaking of those two platforms, a recent article from The Washington Post suggests both are still struggling to flag or remove election misinformation.
Its reporters found at least 26 political candidates have posted inaccurate election claims since April, and both platforms “have done virtually nothing to refute” or correct them.
Twitter told the Post that it “activates its civic integrity policy around 90 days out before Election Day.”
Facebook, meanwhile, told the newspaper that many of the posts highlighted were “examples of standard political content like candidates promoting their campaign websites, posing questions in congressional hearings or reacting to court decisions.”