Shelf Life

Can AI learn to flirt?

Kai Ryssdal and Maria Hollenhorst Nov 7, 2019
In her new book, "You Look Like a Thing and I Love You," Janelle Shane explores what AI can and cannot do. Photo by Jess Jornacki, courtesy of Hachette Book Group, Inc.

The following is an excerpt from “You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place,” a new book by Janelle Shane about what artificial intelligence is, where you’ll encounter it, and when and how it gets things wrong.

Teaching an AI to flirt wasn’t really my kind of project.

To be sure, I’d done a lot of weird AI projects already. On my blog, AI Weirdness, I’d trained an AI to come up with new names for cats — Mr. Tinkles and Retchion were some of its less successful ones — and asked an AI to generate new recipes, some of which called for “peeled rosemary” or handfuls of broken glass. But teaching an AI to sweet-talk humans was something else entirely.

AI learns by example — in this case, studying a list of existing pickup lines and using them to generate new ones. The problem: the training dataset on my computer screen was a list of pickup lines I’d collected from various internet sources, all of which were awful. They ranged from cheesy, bad puns all the way to rude innuendo. Once I’d trained an AI to imitate them, it would be able to produce thousands more at the push of a button. And, like an impressionable child, it wouldn’t know what it should and shouldn’t imitate. The AI would start with a blank slate, knowing nothing about what pickup lines are (or even what English is), and would learn from examples, doing its best to imitate all the patterns it found, including grossness. It wouldn’t know any better.

I considered giving up on the project, but I had a blog post to write and had just spent an inconvenient amount of time collecting the example pickup lines. So I started the training. The AI began to look for patterns in the examples, inventing and testing rules that would help it predict which letters should appear in which order in a pickup line. Finally, training was over. With some trepidation, I asked the AI for some lines:

  • You must be a tringle? Cause you’re the only thing here.
  • Hey baby, you’re to be a key? Because I can bear your toot?
  • Are you a candle? Because you’re so hot of the looks with you.
  • You’re so beautiful that you say a bat on me and baby.
  • You look like a thing and I love you.
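
To give a rough sense of what “predicting which letters should appear in which order” can look like, here is a minimal sketch in Python. It is not the neural network described in the excerpt — just a tiny character-level model, trained on a few made-up stand-in lines, that counts which character tends to follow each short run of characters and then samples new text from those counts.

    import random
    from collections import defaultdict

    def train(lines, order=3):
        # Record which character follows each `order`-character context.
        counts = defaultdict(list)
        for line in lines:
            padded = "~" * order + line + "\n"   # "~" pads the start; "\n" marks the end
            for i in range(len(padded) - order):
                counts[padded[i:i + order]].append(padded[i + order])
        return counts

    def generate(counts, order=3, max_len=80):
        # Sample one new line, one character at a time.
        context, out = "~" * order, []
        while len(out) < max_len:
            choices = counts.get(context)
            if not choices:
                break
            ch = random.choice(choices)
            if ch == "\n":
                break
            out.append(ch)
            context = context[1:] + ch
        return "".join(out)

    # Made-up stand-ins for the collected pickup lines.
    examples = [
        "Are you a magnet? Because I feel a pull toward you.",
        "Do you have a map? I keep getting lost in your eyes.",
        "Is your name Wi-Fi? Because I feel a connection.",
    ]
    model = train(examples)
    print(generate(model))

With so few examples and such a simple model, the output mostly recombines fragments of its training lines — which is also why a small model trained on real pickup lines produces the half-sensible mash-ups above.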

I was surprised and delighted. The AI’s virtual brain (about the same complexity as a worm’s) wasn’t capable of picking up the subtleties of the dataset, including misogyny or cheesiness. It did its best with the patterns it had managed to glean … and arrived at a different, arguably better, solution to the problem of making a stranger smile.

Though to me its lines were a resounding success, the cluelessness of my AI partner may come as a surprise if your knowledge of AI comes from reading news headlines or science fiction. It’s common to see companies claim that AIs are capable of judging the nuances of human language as well as or better than humans can, or that AIs will soon be able to replace humans in most jobs. AI will soon be everywhere, the press releases claim. And they’re both right — and very wrong.

In fact, AI is already everywhere. It shapes your online experience, determining the ads you see and suggesting videos while detecting social media bots and malicious websites. Companies use AI-powered resume scanners to decide which candidates to interview, and they use AI to decide who should be approved for a loan. The AIs in self-driving cars have already driven millions of miles — with the occasional human rescue during moments of confusion. We’ve also put AI to work in our smartphones, recognizing our voice commands, auto-tagging faces in our photos, and even applying a video filter that makes it look like we have awesome bunny ears.

But we also know from experience that everyday AI is not flawless, not by a long shot. Ad delivery haunts our browsers with endless ads for boots we already bought. Spam filters let the occasional obvious scam through or filter out a crucial email at the most inopportune time.


As more of our daily lives are governed by algorithms, the quirks of AI are beginning to have consequences far beyond the merely inconvenient. Recommendation algorithms embedded in YouTube point people toward ever more polarizing content, traveling in a few short clicks from mainstream news to videos by hate groups and conspiracy theorists. The algorithms that make decisions about parole, loans, and resume screening are not impartial but can be just as prejudiced as the humans they’re supposed to replace — sometimes even more so. AI-powered surveillance can’t be bribed, but it also can’t raise moral objections to anything it’s asked to do. It can also make mistakes when it’s misused — or even when it’s hacked. Researchers have discovered that something as seemingly insignificant as a small sticker can make an image recognition AI think a gun is a toaster, and a low-security fingerprint reader can be fooled more than 77% of the time with a single master fingerprint.

People often sell AI as more capable than it actually is, claiming that AI can do things that are solidly in the realm of science fiction. Others advertise their AI as impartial even while its behavior is measurably biased. And often what people claim as AI performance is actually the work of humans behind the curtain. As consumers and citizens of this planet, we need to avoid being duped. We need to understand how our data is being used and understand what the AI we’re using really is — and isn’t.

Excerpted from YOU LOOK LIKE A THING AND I LOVE YOU: HOW ARTIFICIAL INTELLIGENCE WORKS AND WHY IT’S MAKING THE WORLD A WEIRDER PLACE. Copyright © 2019. Available from Voracious, an imprint of Hachette Book Group, Inc.
