Google CEO Sundar Pichai has said he wants artificial intelligence, or AI, to be the company’s driving force. What exactly does that mean?
New York Times Magazine writer Gideon Lewis-Kraus answers this question in his recent piece, “The Great A.I. Awakening.” He talked to Marketplace host Kai Ryssdal about how Google has been revamping its AI technology, and what that means for the future of smart machines. Below is a transcript edited for clarity and brevity.
Kai Ryssdal: First, talk about how folks noticed what Google is doing with artificial intelligence and talk about Google Translate a little bit.
Gideon Lewis-Kraus: Over the last nine months, a team at Google called Google Brain set out to completely gut-renovate an existing product, Google Translate, which is used by half a billion people a month. And they didn’t tell anyone they were going to do it. And so in Japan, a professor of computer science at the University of Tokyo, who works on human augmentation research, tweeted that he had noticed Google Translate had all of a sudden gotten a lot better. Instead of producing a kind of word salad from Japanese into English and vice versa, it seemed to be producing, obviously not human-level translations, but something considerably better. And it turned out that this was because of the renovation of the product that Google had just completed.
Ryssdal: Google did, as you said, a gut renovation of the innards of Translate. And it’s all about neural networks and how Google figured out how to make them better. Is that a fair summary?
Lewis-Kraus: The previous product behind Translate had been running for about 10 years, and it used statistical machine translation techniques that had been around for about 30 years.
Ryssdal: That’s like, substitute this word for that word and do it very formulaically?
Lewis-Kraus: More or less. I mean, it’s a little more complicated than that, but that’s basically it. And about a year ago, the team on the Google Brain side that works on these things called neural networks said, “Hey, we think there’s a way we can totally revamp this.” Instead, they could train it from the ground up, the way a person might learn a foreign language by just being dumped overseas and expected to start talking. It was almost like an immersion education for the machine.
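To make the contrast concrete: the crudest version of the word-for-word approach Ryssdal describes can be sketched in a few lines of Python. This is a toy illustration only, not Google’s actual statistical system, and the miniature Japanese-to-English lexicon is hypothetical. It shows why substituting each word independently tends to produce the “word salad” described above: source word order and grammatical particles survive untranslated.

```python
# Hypothetical miniature Japanese-to-English lexicon, for illustration only.
LEXICON = {
    "watashi": "I",
    "wa": "(topic)",
    "neko": "cat",
    "ga": "(subject)",
    "suki": "like",
    "desu": "is",
}

def word_for_word(sentence: str) -> str:
    """Substitute each token independently, keeping the source word order."""
    return " ".join(LEXICON.get(tok, tok) for tok in sentence.split())

# "watashi wa neko ga suki desu" means "I like cats", but token-by-token
# substitution preserves Japanese word order and particles:
print(word_for_word("watashi wa neko ga suki desu"))
# -> I (topic) cat (subject) like is
```

Real statistical systems were far more sophisticated than this, scoring phrases and reordering words probabilistically, but the output still tended to be assembled piecewise rather than understood as a whole sentence, which is the gap the neural approach was meant to close.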
Ryssdal: Tell me about the cat picture because that to me was the best example in this piece of what these machines are learning how to do.
Lewis-Kraus: Well, in 2012, Google Brain, which was then part of X, the company’s secretive lab, released a paper showing they could give millions and millions of still images from YouTube to a network that had never been provided the definition of any human category. Once it was trained, they found that it had essentially invented for itself the concept of a cat. So it was the first time that a computer might be said to have come up with its own concept.
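The key idea here, discovering categories without ever being told what the categories are, is unsupervised learning. The 2012 experiment used a very large neural network; as a far simpler stand-in, the sketch below runs a basic k-means clustering loop in plain Python on hypothetical 2-D feature vectors. No labels are provided, yet the algorithm separates the data into two groups on its own.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: assign points to nearest center, recompute centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared distance).
            nearest = min(range(k),
                          key=lambda i: (p[0] - centers[i][0]) ** 2
                                      + (p[1] - centers[i][1]) ** 2)
            clusters[nearest].append(p)
        # Move each center to the mean of its assigned points.
        centers = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

# Two unlabeled groups of "images": one near (0, 0), one near (10, 10).
data = [(0.1, 0.2), (0.3, -0.1), (-0.2, 0.0),
        (9.8, 10.1), (10.2, 9.9), (10.0, 10.3)]
clusters = kmeans(data, k=2)
# The algorithm recovers the two groups with no labels ever provided.
```

The actual Google Brain result was much stronger than clustering: the network learned a detector that responded to cat faces from raw pixels. But the principle is the same: structure in the data, not human-supplied definitions, determines the categories.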
Ryssdal: There are those who will hear this interview who are going to go “Oh my God, the robots are coming alive!”
Lewis-Kraus: It’s true, there are. But one of the ideas behind this piece was that there has been so much long-term futuristic speculation about AI, about artificial superintelligence, and about computers that are going to take over everything with their own will. What we really wanted to do with this piece was to say, “Well, what exactly is going on here? How does this stuff work technologically?” Instead of racing forward to futuristic sci-fi scenarios, how can we keep this grounded in how the technology works, what it can actually do and what it can’t do.
Ryssdal: Right. And this gets us back to the cat picture because a 1-year-old human being can recognize a cat. That’s the deal.
Lewis-Kraus: Yes. That is the deal. Right. The hope is that it would then evolve organically from that.
Ryssdal: So if Google, Microsoft and all the other big names are becoming AI companies, what does that mean for my refrigerator that’s hooked up to the internet, and Alexa and Siri and Google Home, and all those devices?
Lewis-Kraus: Nobody actually knows. One of the things that I found so interesting about talking to these engineers is that, as one of them said to me, “Look, I’m an expert in this field, and I can maybe make some guesses about what this will look like six months or a year down the line. But as far as five or 10 years down the line, really your guess is as good as mine.” The upshot is that, instead of taking over the obvious kinds of repetitive drudgery we have long associated with the march of automation, these techniques allow machines to do things we never would have thought of as the province of machines.