How AI chatbots are turning the tables on scammers

Scam calls about fake warranty renewals, non-existent credit card bills and more are still a global problem.
But some companies and telecommunication providers are turning to AI chatbots to intercept the calls before they ever reach a real person.
Marketplace’s Meghan McCarty Carino recently spoke with Dali Kaafar, founder and CEO of Apate AI, an Australia-based company creating these chatbots, about how his company is designing these bots to scam the scammers.
The following is an edited transcript of their conversation.
Dali Kaafar: We basically create what we call a perfect victim, right? These are AI personas, specifically designed and trained to engage the scammers, to keep them talking and, yeah, pretty much waste their time, of course. But the other very important piece is that while you’re keeping them distracted from your actual victims, or your actual customers, rather, you have a huge opportunity to extract critical information from the mouths of the scammers themselves. It’s not only providing, you know, this accurate view of what the scam landscape looks like in real time, but it’s putting all these organizations that are being impersonated, for example, and their customers, ahead of the scammers’ tactics. Before they even reach those potential victims, you could provide them with alerts about everything the scammers are about to do.
Meghan McCarty Carino: So what makes for an effective scam-baiting AI chatbot? What qualities do you train for?
Kaafar: So I think, very importantly, obviously, these AI bots have to be as realistic as possible, not only in the way that they interact, as in the language, but in the timing of their responses. Around 300 milliseconds, for example, is a big constraint for our AI bots, because that’s pretty much what humans expect in human-to-human interactions and real conversations. And that’s really a big challenge that we cracked. The second, I think, is really more about how they’re designed to waste the scammers’ time or to reach a particular objective. And then the rest is just more about the ability for the bots to compete against each other, so that they obtain information that is very, very unique every single time they converse, or have a conversation, with a particular scammer.
McCarty Carino: Do scammers very often, you know, reach a point in the conversation where they understand that they’re talking to a bot? Does that happen very much?
Kaafar: So our Apate bots can engage scammers for up to 50 minutes. In fact, we’ve had cases where it’s really more than an hour, but very, very often we have conversations between bots and scammers for 20, 26, 27 minutes, and they get very frustrated. They get unhappy. In fact, I can tell you one thing which is probably very, very unique to Apate AI: we celebrate those kinds of frustrations in our interactions with the scammers. And a little anecdote about that, Meghan, is one day I dropped by the office very early in the morning, and I found the whole team celebrating, just jumping around. They were extremely happy. And when I asked what was happening, they told me they were celebrating our very first F-word from a scammer. And believe it or not, I think we are the only company in the world that has, on their sort of milestones dashboard, a card, basically, and a milestone to hit the 10 millionth F-word from scammers. True story.
McCarty Carino: I’m curious, have you, you know, discovered any cases of the scam-baiting AI agents interacting with a scamming AI agent?
Kaafar: Yeah, so AI versus AI is really something that we thought a lot about. And, you know, we considered that that might happen, that scammers might start deploying their own AI bots to talk to ours. And honestly, I would love that to happen, the scammers’ bots talking to Apate bots. That basically means that no humans ever get scammed, right, if you think about it? There have obviously been cases, still a minority in Apate’s deployments, where we end up having conversations with bots, not as sophisticated as ours, definitely not. So we detect immediately that these are bots, and our bots understand, as part of that context, that they’re talking to scammers’ bots. We keep some of those conversations alive because we’re interested in extracting the insights and the intelligence, like I said, if for nothing else, to extract what sort of scam category it’s about, what scam tactics are being used, whether it’s really part of a scam campaign. What personal information is being requested? All those sorts of pieces of intelligence are something we still collect from the interactions with AI bots on the scammer side. So yeah, sure, there are cases of bots, and obviously there are the robocalls themselves, right, just very, very typical cases, but it’s still a minority.
McCarty Carino: What is your ultimate goal with Apate and systems like this?
Kaafar: I think I really like to think about Apate as being in the deception business, right? And I really like to remind people that Apate is named after the Greek goddess of deception, right? But deception of bad people. And I think, ultimately, what we’d really like to do is [be] on top of all the scammers’ tactics and operations by extracting this very, very valuable and accurate intelligence about their scam tactics, and again, their scam techniques, in very real time. Sadly, you know, fraud is in all sorts of businesses that we care about today, and I’d like to see Apate as this shield against those types of fraud. So we recently introduced Apate Text, which basically tries to implement the exact same thing in the WhatsApp world. And so we moved to the social media side of things, where we know that there are a lot of scams happening and there’s still a lot of losses as well. The real value there is doing all this cross-channel extraction of intelligence. So we’ve covered the lot, if you like. We don’t leave any sort of chance to the scammers themselves.
The British telecom company Virgin Media O2 has also developed its own anti-scammer chatbot, named Daisy, a nice, elderly grandma type also designed to waste a scammer’s time.
And if that mention of the AI versus AI conversations between two bots intrigued you, I recommend the podcast Shell Game from journalist Evan Ratliff.
We talked to him on the show last year about how he created AI voice clones of himself and let them loose on the world, including in AI therapy sessions with other bots and with AI telemarketers. The results are worth a listen.