This week Microsoft bought a company called Semantic Machines, which works on something called "conversational AI." That means computers that sound and respond like humans. Mostly it's for digital assistants like Microsoft's Cortana, Apple's Siri, Amazon's Alexa or Samsung's Bixby. Last month Google showed off its own smart assistant technology, called Duplex, which can call a hair salon to make an appointment on your behalf, or a restaurant to make a reservation. But it's clear from what Google showed that the people on the other end of these calls don't know they're talking to a computer. This has led some to ask what the rights of the human on the other end of the line are. Marketplace Tech host Molly Wood spoke with Jacob Metcalf of the Data & Society Research Institute about the business case for making computers sound so real. The following is an edited transcript of their conversation.
Jacob Metcalf: You know, what's odd about the product that Google showed at their I/O conference is that it doesn't put any signaling up front that you're talking to a bot. And so you kind of have to conclude that they put in effort, in fact a lot of effort, to hide the fact that it is a bot. Something is accomplished by the fact that it's not obvious to the person on the other end that they're talking to an automated machine. None of that conversation required the machine to insert "ahs" and "ums" as if it were human. So then why are they trying to imitate a human? Is it because it looks cool? Or is it because the product's more effective if it's deceptive? And is that a tradeoff that we should accept as a society?