Marketplace Tech Blogs

The reason not to yell at Siri and Alexa

Adriene Hill Nov 24, 2017
David Becker/Getty Images

Alexa, Echo and home assistants that use artificial intelligence are becoming more and more common. EMarketer estimates the number of people using virtual assistants at least once a month will grow nearly 25 percent this year.

People use these assistants for some of the most mundane tasks, like setting kitchen timers, playing the radio, or Googling the answer to a simple question. But what about “how” we talk to our AI devices in front of children or strangers? As humans, we hear more than just words. Tone and mannerisms play a big role in conveying meaning behind what we say to each other.

Michael Littman, a professor of computer science at Brown University, has thought a lot about how we treat artificial intelligence. We asked him whether being polite to a voice assistant is important. Below is an edited transcript.

Michael Littman: I think it doesn’t matter that much right now. They’re really not set up well for social pleasantries. So, you can try to treat them in a respectful and friendly manner, but it actually just makes them work less well as the tools that they’re supposed to be. I think there are an awful lot of researchers right now, both in academia and at companies, who are very much interested in creating more temporally extended dialogues, basically being able to talk to it for more than just one turn. But at the moment, it really is kind of a call-and-response thing, which doesn’t lend itself to treating it like a person.

Adriene Hill: So I’ve got a five-year-old, and one question I have with my Alexa is, should I be saying “please” and “thank you”? And I worry that if I say please and thank you, I’m humanizing this computer too much, but if I don’t, I feel like I’m not being nice. Should I say please and thank you to my Alexa in front of my child?

Littman: I think it’s perfectly appropriate and I think that for simple things like “please” and “thank you,” mostly the system just ignores them, so it doesn’t do any damage to the usability. But as far as what you’re modeling for other people, I think that’s appropriate. I think, you know, being kind and respectful should be automatic.

Hill: But does it, I guess, send a signal to my kid that Alexa is human or that Alexa deserves those kinds of polite manners?

Littman: Yes, so I don’t think it necessarily does that. But I also don’t think that doing the opposite is necessarily dangerous either. I think that, you know, my own kids, who are a bit older than that now, have learned very effectively how to use these kinds of systems as tools. And so they’re not the least bit confused about thinking of them as social agents. It only takes a moment of interacting with these voice assistants before it’s clear that they’re not people, they’re not acting like people. There’s no sense of context. There’s no sense of the kind of indirect request that you can make with people, the kind that politeness, in fact, demands we make with people. Kids discover that quickly and they learn how to use it to get the job done.

Hill: Another question I had about how we treat these things: I have heard of a lot of people who are really mean to their Siris, and, I will admit, I have found myself yelling at my Alexa. What do we make of that?

Littman: Well, I’m much more concerned about that issue than the reverse, the idea of, “Wait, are we being bad by being too polite or not being sufficiently polite?” By being mean and aggressive to these systems, I do think we’re modeling bad behavior that really can erode our own humanity, right? It doesn’t bother the system at all. It doesn’t care, it doesn’t know. You can vent at it all you want. But to the other people who are listening, you come across as being a nasty person, right? And you’re also kind of saying to people, not only am I nasty, but I’m OK with you seeing me as nasty. And I think that’s more the danger.

Hill: As I understand it, one of the things AI does is learn from how we respond and how we act toward it. If we are mean to the AI, maybe meaner than we are to other people, does that train the AI to respond in a meaner way?

Littman: That’s a really interesting question. There have certainly been examples where AI has been trained by people to be mean. The classic cautionary tale that we talk about is Microsoft’s Tay chatbot. The basic story there was that they released a bot online that was able to learn from interactions with people on Twitter, and very quickly it learned to be a white supremacist. People went out of their way to try to train it to be bad, and it worked. Microsoft had to take it down very quickly. So yes, this totally happens, but it was done by people who were intentionally trying to mess with it.

But one of the things that we’re experimenting with in my lab, in the context of self-driving cars, for example, is: can we use politeness or rudeness on the road as a way for self-driving cars to essentially fend for themselves? One of the concerns that many people are raising is that these cars are designed, before all else, to not run over people and not smash into other cars. And people can take advantage of that to basically get the right of way when it’s really not their turn. So people can be very rude to these systems, and there’s a fear that these systems are not going to be effective on the road because they’re going to be walked all over. We’re employing ideas from game theory about repeated interaction: what can you do to kind of bring somebody along to be more cooperative with you? We’re trying to use exactly those ideas of politeness and rudeness as a way of making sure that these robotic systems are treated appropriately.

Hill: So you don’t want driverless cars to hit people to prove a point, but… 

Littman: That’s right. They may need to be a little bit more rude. They might have to be aggressive. But responsively aggressive, right? That’s the main thing that we’ve learned from game theory.

Hill: Like they honk the horn or what do they do? How do you teach people not to just walk in front of these cars?

Littman: Exactly, honking the horn is one of the things that we’ve talked about. There are tons and tons of papers about self-driving cars activating the steering and the brakes and maybe even the windshield wipers, but nobody talks about the horn, and the horn is really important because the horn is a mechanism of social interaction. And driving really is a social act. I learned this watching my son learn to drive. He didn’t get this. He knew the mechanics of it, but he didn’t understand that every turn you make, every time you drift into an intersection, you’re saying something, you’re yelling something to all the other drivers: this is me, this is what I’m about to do, please get out of my way. And seeing driving as a social act is a really important step toward systems that are going to be able to interact with people.
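
Littman’s “responsively aggressive” idea echoes a classic result from repeated-game theory: the tit-for-tat strategy, which starts out cooperative and then mirrors whatever the other player did last. The sketch below is a hypothetical toy “right-of-way” game illustrating that dynamic, not the actual model from Littman’s lab; the function names and payoff rules are invented for illustration.

```python
# Toy repeated "right-of-way" game: a self-driving car meets the same
# road user over and over. Each round, each side either YIELDs or PUSHes
# (claims the right of way). This is an illustrative sketch only.

def tit_for_tat(opponent_history):
    """Yield on the first encounter, then mirror the opponent's last move."""
    if not opponent_history:
        return "YIELD"
    return opponent_history[-1]

def always_push(opponent_history):
    """A rude road user who always claims the right of way."""
    return "PUSH"

def always_yield(opponent_history):
    """A polite road user who always gives way."""
    return "YIELD"

def simulate(car, other, rounds=10):
    """Play repeated encounters; count how often the car gets walked over."""
    car_history, other_history = [], []
    walked_over = 0
    for _ in range(rounds):
        car_move = car(other_history)    # the car reacts to the other's past
        other_move = other(car_history)  # and vice versa
        if car_move == "YIELD" and other_move == "PUSH":
            walked_over += 1             # car gave way to a pushy driver
        car_history.append(car_move)
        other_history.append(other_move)
    return walked_over

# Against a rude driver, the tit-for-tat car is exploited exactly once,
# then pushes back; against a polite driver, it stays polite throughout.
print(simulate(tit_for_tat, always_push))   # -> 1
print(simulate(tit_for_tat, always_yield))  # -> 0
```

The point of the toy model is the asymmetry Littman describes: a car that always yields gets walked over every round, while a responsively aggressive one concedes only once before matching the other driver’s rudeness, yet never initiates it.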
