Telegram-linked ads on Meta platforms may promote illegal activity, report finds
Sep 16, 2024

A recent report from Cybersecurity for Democracy looked at Meta's ads library and found that a majority of ads that linked to Telegram involved channels promoting the sale of drugs, financial scams and child sexual abuse material.

Late last month, the CEO of the encrypted messaging app Telegram was arrested in France. Authorities there have charged Pavel Durov with being complicit in illegal activities conducted on the platform due to a lack of content moderation.

A recent report from the research group Cybersecurity for Democracy shows some of that activity is finding its way onto other platforms. Senior Fellow Yaël Eisenstat looked at advertisements on Meta platforms that linked back to Telegram, and found that a majority were promoting channels with potentially illegal activities. The following is an edited transcript of her conversation with Marketplace’s Meghan McCarty Carino:

Yaël Eisenstat: So we used the Meta ad library, and we searched for ads that linked to Telegram, or had links to Telegram. And what’s so interesting is it’s a simple signal that shows you all the ads running that link back to Telegram. And we didn’t even need to go deep, because within the first 50 ads we already found so many ads that appear to be violating Meta’s policies and even some that seemed to be illegal. So we were surprised by how stark the results were.

Meghan McCarty Carino: And what kind of stuff did you find?

Eisenstat: So we found illicit drug sales, financial scams, ads for stolen and counterfeit goods. They certainly seem to violate Meta's own policies, and some of them appeared to be engaging in illegal activity as well.

McCarty Carino: And you came up with a pretty staggering figure: 64%, almost two-thirds, of the ads that you looked at appeared to be violating Meta's standards, and maybe another 14% likely would. I mean, your research seems to show that if an ad includes a link to Telegram, it's more likely than not to be a problem.

Eisenstat: Exactly, and it's funny, we did that research two weeks ago, and last night we did the same basic search, because it's been two weeks since we put our findings out there. It's been two weeks since we gave a very simple technical solution to Meta, and last night, same thing: we did a simple search for Telegram-linked ads, and again, you can still see the same kinds of violative content coming through on those ads.

McCarty Carino: Since your report came out, has Meta responded at all?

Eisenstat: So we did not directly notify Meta. But here's the really interesting point: first of all, I will applaud Meta for having an ad library at all, because part of why we searched on Meta is that not all platforms offer an ability to search ads. Once we found these results, we realized that this is actually not a question about free speech or content moderation. This is a simple technical situation: if you have such a high signal, and the high signal here is that well over 50%, 64%, of the ads violated your policies, then there's a really easy way to build into your own ad verification process a flag so that Telegram-linked ads get a higher level of scrutiny. You know, this is actually standard practice if you look at early spam detectors for email, for example. The early spam detectors were not looking at the contents of emails; they were looking for the reputational signal of the sender. And we don't know why [Meta is] not using that solution, but we certainly wanted to highlight both the problem and how they could fix it.
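The link-based flagging Eisenstat describes can be sketched in a few lines of Python. This is a minimal illustration, not Meta's actual review pipeline; the ad records, the domain list, and the `needs_extra_review` function are all hypothetical. The point is that only the destination link is inspected, never the ad's content, just as early spam filters looked at the sender's reputation rather than the email body:

```python
# Hypothetical sketch of reputational-signal flagging: route any ad whose
# landing URL points at a high-signal domain to extra human/automated review.
# No content scanning is involved; only the link itself is inspected.

from urllib.parse import urlparse

# Assumption: these are the hosts Telegram links commonly use.
HIGH_RISK_DOMAINS = {"t.me", "telegram.me", "telegram.org"}

def needs_extra_review(ad: dict) -> bool:
    """Return True if the ad's landing URL points at a high-risk domain."""
    host = urlparse(ad.get("landing_url", "")).hostname or ""
    host = host.lower()
    if host.startswith("www."):
        host = host[4:]
    return host in HIGH_RISK_DOMAINS

# Example (hypothetical) ad records from a verification queue:
ads = [
    {"id": 1, "landing_url": "https://t.me/some_channel"},
    {"id": 2, "landing_url": "https://example.com/shop"},
]
flagged = [ad["id"] for ad in ads if needs_extra_review(ad)]
print(flagged)  # [1]
```

In a real system the domain set would presumably be driven by measured violation rates (the "high signal"), and flagged ads would go to a stricter review tier rather than being blocked outright.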

McCarty Carino: This is sort of one instance of Meta appearing to be behind in enforcing its own standards. Do you think it speaks to a wider problem in content moderation?

Eisenstat: I mean, it certainly does. But I do want to emphasize, again, that it doesn't have to be about content moderation. Content moderation is when you build classifiers into your automated tools (and of course they'll use human review as well in some instances), saying, here is all the content that violates our rules, and then scanning content. They're often scanning ads after they've already started running, which gets more and more complicated when you turn to video and audio as opposed to the written word. That is why it's really critical to understand this isn't even a content moderation question, because you have a high signal in the Telegram links. But yes, to your point, this is just one example of larger problems across this ecosystem, whether you're debating what speech should be kept up or taken down when it comes to illegal activity, or a platform saying we won't allow illicit drugs to be sold on our platforms. Why they're not building in actual technical solutions to fix that appears, to me, to be willful negligence, because even after we pointed it out, it's still happening.

McCarty Carino: And as someone who studies democracy in this context, what are the stakes, if that is the case?

Eisenstat: I mean, for this particular study, it was obviously less about democracy and more about truly harmful content that is not just showing up in people's feeds, but is being advertised to them. So I would say, first and foremost, those stakes are real. We don't have any information, there's no transparency, around how these ads were targeted. Were children shown these ads for drug sales and financial scams? There's a real implication there for the everyday user of the platform. But moving on to the democracy piece: this lack of true transparency, of our ability to understand more about these ads, about how they're targeted, how long they're active, and who sees them, makes it such a messy space to understand, and we are in a very high-stakes election period right now. Researchers and journalists and advocates don't even have access to enough tools to truly understand how these things are playing out online. Now, again, our study here is a little bit different, because it didn't specifically focus on democracy, but what is important is that it showed a true failure on the part of this platform to live up to its own contract with the public. And if they're going to fail on that with ads that are engaging in really bad activity, what does it say for how they're enforcing their other rules around elections? The key thing there is: a policy is only as good as its enforcement.


The team

Daisy Palacios Senior Producer
Daniel Shin Producer
Jesús Alvarado Associate Producer
Rosie Hughes Assistant Producer