Why visual misinformation online can be tough to stop
Jan 31, 2023

Misinformation using photos, memes and videos is easy to share and difficult to censor, says the University of Utah’s Andy King.

Technology is making it easier and easier to create and disseminate visuals, from text-to-image artificial intelligence models and sophisticated deepfakes to simple memes retweeted with hashtags.

Visuals are the lingua franca of the internet, but their capacity to spread misinformation easily, particularly about health topics, makes them especially dangerous to the public.

That’s according to an article published last year in the journal Science Communication.

Marketplace’s Meghan McCarty Carino spoke to Andy King, an associate professor of communication at the University of Utah. King co-authored the commentary titled “Missing the Bigger Picture,” which discussed what makes visual misinformation unique.

The following is an edited transcript of their conversation.

Andy King: The way people process visual content is generally different from how they process verbal content. So with visual misinformation, people will process it differently, access it differently, and integrate it differently into their existing mental models of how the world works. Visual content is also more likely to be shared on social media, and heuristically you're providing people with something they may buy into differently. It's important to note, too, that not all visual content is misinformation by itself. A picture may be an accurate photograph, but it appears in a message where it's recontextualized with verbal information, and then the total message unit, that multimodal unit where visual and verbal content appear together, is actually an instance of misinformation.

Meghan McCarty Carino: Give me some examples of the different forms visual misinformation might take.

King: Yeah, we group visual health misinformation into three categories. The first is visual recontextualization, where you have an image that may be accurate on its own, but it's placed together with verbal content that, combined, creates an instance of misinformation. The other two types are visual manipulation and visual fabrication. Visual manipulation, much like it sounds, is when something has been changed in an image. The most frequent version of that is something we do ourselves on social media posts, where we change the lighting or touch up our face, which is obviously on the lower end of manipulation. Visual fabrication refers to the generative version of this, where we might have deepfakes, video accompanied by audio where both might be faked. On the health side of things, we've seen fewer examples of deepfakes so far, but obviously it's a concern moving forward how those might be differentially effective at communicating misinformation to people. I think that's something researchers, the public and regulators need to be a little more concerned about in the health context.

McCarty Carino: Are social media platforms doing enough to stop the spread of this kind of misinformation?

King: There are a number of challenges unique to visual content. Memes, for example, are another way people can convey visual information that taps into something we already know. If we're familiar with a meme template, it signals some meaning. The text added to that template might seem harmless, but combined with the template it can actually carry forward a very problematic piece of information. That's another way recontextualizing visual content can have an effect on people: it lets people make comparisons without actually stating them. One example of visual recontextualization related to COVID-19 was a map that someone took and juxtaposed with another map. One map was 5G coverage, the other was COVID cases, and the caption said, "but there's definitely not a connection, right?" Question mark. There's no link between 5G and COVID cases; you could produce a lot of map pairs like that, because what both maps are really showing is population density. So that's an example of how people are able to get past some of that monitoring with visual content.

McCarty Carino: What’s at stake here when it comes to health misinformation?

King: Part of it is something that's been eroding for a long time, which is trust in certain institutions. I think it's serious. But part of it is really about people's well-being and their anxiety about health and related topics, particularly when they're facing a diagnosis or prognosis they're uncertain about. I also worry that health misinformation will exacerbate the existing communication and health disparities that people experience, disparities that already lead to more negative health outcomes tied to all sorts of social determinants and other structural factors, like racism, that have affected a lot of different populations over time. Potentially, misinformation is just going to exacerbate those issues even further.

The commentary Andy King co-authored, published in the journal Science Communication, has a lot more detail about visual misinformation and some examples of what he's talking about.

The spread of visual misinformation related to the COVID-19 pandemic doesn't seem to be slowing down online.

The Atlantic had a story last week about one of the slipperiest theories on Twitter: linking the deaths or medical crises of high-profile people in the news to the COVID vaccine. And last year we talked about the proliferation of deepfake celebrity videos on social platforms like TikTok, even though they're technically not allowed.

Our guest, Anjana Susarla, a professor at Michigan State, said that while these may be legal because they are clearly satire, they raise a lot of questions about whether we have the tools — both legal and technological — to keep malicious deepfakes off these platforms.

Even if it is weirdly fun to watch a fake Keanu Reeves perfectly slice a baguette with a sword.


The team

Daisy Palacios, Senior Producer
Daniel Shin, Producer
Jesús Alvarado, Associate Producer