Artificial intelligence, whether we realize it or not, is a part of our daily lives and is here to stay. But according to a study out of the Center for the Governance of AI at Oxford University, Americans aren’t so sure how they feel about AI or what AI advancements will mean in our day-to-day lives. Kelsey Piper wrote about the report for Vox; she spoke to Marketplace host Kai Ryssdal about her reporting and whether AI researchers agree with the report’s findings. The following is an edited transcript of their conversation.
Kai Ryssdal: I’m going to let you interpret this study for us, because you look at this stuff and you report on it widely. I do want to start, though, on a somewhat hesitant note, and that is to point out that one of these researchers told you that people that they talked to in this survey are not convinced that AI, advanced artificial intelligence, is going to be to the benefit of humanity. That’s slightly troubling.
Kelsey Piper: Yeah, I do think that’s slightly troubling. One thing that’s going on there is a lot of skepticism about the AI systems we have now and whether they’re helping. And then separately from that, a lot of skepticism about what AI is going to look like in 10 years.
Ryssdal: OK, so let’s talk about what AI we have now. Are we talking, like, Siri and sort of the semi-autonomous car stuff that we’ve got going on, is that what you’re talking about?
Piper: So AI today, I think people point to Siri; they point to the translation services that have gotten a lot better over the last couple of years; to semi-autonomous vehicles; also to the algorithm that Amazon briefly debuted to identify good hires, which they found was actually using gender to decide who was likely to be a software engineer. Also the algorithms that send your notifications on Twitter and Facebook, which have been criticized for being addictive and encouraging people to spend more time on their phones than they otherwise would.
Ryssdal: OK, so that’s what we’ve got now with all of those problems, and I confess I had not heard about the gender thing at Amazon. Are we counting on AI to get smarter by itself, or are we going to improve the inputs to the algorithms as we move toward advanced AI?
Piper: I think there are certainly a lot of experts who are sort of warning us here that yeah, if AI safety research is moving slower than AI capabilities research, then we’re going to have extremely powerful systems that still aren’t doing what we actually want them to do and are sort of executing on their badly specified goals in ways that can be tremendously destructive.
Ryssdal: OK, wait. Let’s be clear about this. The people working on AI safety, we’ll call it, are not the same people working on making AI smarter?
Piper: So there are definitely people whose work involves both. But there are a lot of people who are working on making AI smarter who are not working on making AI safer. And the more conservative people I know working on AI safety tend to actually say, “We don’t think we should be making AI smarter right now. We think we need to sort of work on transparency and interpretability, that is, understanding what algorithms are doing. We need to get that stuff right before we do our capabilities research.”
Ryssdal: There is an economic reality to AI, right, which is this entire idea of technological unemployment and the fact that once machines can do it better than people, whatever that task is, up to and including speaking your native tongue into a microphone, people are going to lose their jobs, right, and there will be economic disruption.
Piper: Yeah, I think that was definitely one of the concerns that a lot of the public mentioned in this survey. In the past, technological development has just meant creating different jobs, and a lot of AI experts have said that we’ll see more people in caring professions, teaching, things where working with another human being is just [as] powerful. But there are some people saying this one could happen so fast and be so disruptive as to sort of change the game there, and we need to be thinking about, you know, in a world where we don’t need to work, are we going to have society structured so that everybody has enough and can choose how to spend their time, or are we going to have a society where most people live in poverty?