Marketplace Tech Blogs

Should a machine have to tell you if it’s a machine?

Molly Wood and Stephanie Hughes May 23, 2018
The Shadow Robot company's dexterous hand robot holds an apple at the Streetwise Robots event held at the Science Museum's Dana Centre on May 6, 2008, in London, England.
Jeff J Mitchell/Getty Images

This week Microsoft bought a company called Semantic Machines, which works on something called "conversational AI": computers that sound and respond like humans. Mostly it's for digital assistants like Microsoft's Cortana, Apple's Siri, Amazon's Alexa or Samsung's Bixby. Last month Google showed off its own smart assistant, Duplex, which can call a hair salon to make an appointment on your behalf, or a restaurant to make a reservation. But it's clear from what Google showed that the people on the other end of these calls don't know they're talking to a computer. That has led some to ask what rights the human on the other end of the line has. Marketplace Tech host Molly Wood spoke with Jacob Metcalf of the Data & Society Research Institute about the business case for making computers sound so real. The following is an edited transcript of their conversation.

Jacob Metcalf: You know, what's odd about the product that Google showed at its I/O conference is that it doesn't put that signaling up front. So you kind of have to conclude that they put in effort, in fact a lot of effort, to hide the fact that it is a bot. Something is accomplished by the fact that it's not obvious to the person on the other end that they're talking to an automated machine. None of that conversation required the machine to insert "ahs" and "ums" as if it were human. So then why are they trying to imitate a human? Is it because it looks cool? Or is it because the product's more effective if it's deceptive? And is that a tradeoff that we should accept as a society?

Molly Wood: So, is it legal? I mean, is there a wiretapping concern, too? This gets into the ethical question of whether these AIs need to identify themselves as not human, but also the question of whether it's legal for them to collect data from their interactions with people who don't know that they are part of that collection.

Metcalf: Yeah, well, that's certainly an open question. I think it's probably going to be litigated. There are all kinds of ways they could argue that algorithmic analysis is not the same thing as a wiretap. However, I really think it's important that we not focus only on the legal question of wiretapping, but also on the question of companies making use of other people's time and energy to improve their product without those people being aware that they're making that contribution.

Wood: I guess the devil's advocate question, then, is: How is this significantly different from any of the other automated systems I might interact with, whether or not I think they're human? Is it different from any of the times a computer calls me, whether it's a robocall or an automated customer support system? How is this different?

Metcalf: Well, if they identify themselves as a bot, I think it's not so different. However, it's important to realize that if I call a restaurant and make a reservation, that interaction with another person sets social expectations, right? There's a question here: Am I more likely to just skip out on the reservation and cost this business money if I used the assistant? Am I less likely to feel compelled to act as a good citizen? So, I think it can be very helpful in some situations. It can be helpful for people who struggle to speak clearly on the phone or have anxiety about social interactions. But it's critically important for us to set some early norms about how this kind of data is collected and how the bots identify themselves, and for these companies to be clear that the benefits they provide to their users don't come at the cost of other parties who might not even realize they're interacting with an automated system.

 
