Self-driving cars will probably save a lot of lives in the future. But right now, the technology is new, and most of it requires human intervention. Experts refer to several levels of automation in cars, 1 through 5. A Level 5 car would have no steering wheel or gas pedal. Several cars on the market now fit into the middle category, offering some autonomous features while still requiring human intervention. Marketplace Tech host Molly Wood spoke with Missy Cummings, director of the Humans and Autonomy Lab at Duke University, about the risks of having humans only partly in control. The following is an edited transcript of their conversation.
Missy Cummings: Well, I think that one of the problems with these levels is that they seem linear, as if we should go in order: No. 1, 2, 3, 4 and 5. But the reality is there are really two different paths. There's the 1, 2, 3 path, and then there's the 1, 2, 4, 5 path. And the reason that this is an issue is because of level 3, which is where the automation is partially capable but not fully capable. In the cases where the car can't perform under all conditions, it hands control back to the human. And this is the deadliest phase. In fact, I'm pretty much against level 3. I don't think it should exist at all. Because one thing I know as a former fighter pilot is that having a human step inside the control loop at the last possible minute is a guaranteed disaster.
Molly Wood: And yet we see car makers going there. In fact, I think we arguably see Tesla pushing that on consumers. Does it make drivers part of a living R&D lab on city streets?
Cummings: Well, I think there are two issues here. No. 1, should we have cars that have to ask for human intervention at time-critical periods? That answer is no. But there's a separate issue of whether we should allow car makers to use the American public as guinea pigs to test out these new technologies. And I also think that that answer is no.
Wood: How do automakers need to communicate to car owners about when these features are safe to use?
Cummings: Well, I think this is a real problem, because in the aviation world, when we put in fancy new automated technologies — which are actually not as complex as the ones that exist in cars right now, which I think is a big surprise to most people — we make commercial pilots go through years of training. They have annual check rides. They have to show that they know the rules, and they have to take a flight exam with a flight examiner to show that they can operate the vehicle under conditions of uncertainty. We're not doing that for human drivers, right? This is a real issue, because if you have these complex modes of operation, which we know exist in these driverless cars, or even driver-assist cars, and we know people aren't reading the manuals, then arguably are we doing the ethical thing by allowing them to be on the roads? And I think the answer is no.
Wood: So what needs to happen — setting aside what we think might or might not happen in the current environment — what does need to happen from NHTSA or any other officials?
Cummings: So, the reality is we still need to invest in this technology. I'm not a Luddite. I run a robotics lab. I want the technology to continue to progress. And so, one of the things that I personally have been advocating for is that we need to establish vision tests for autonomous vehicles. One of the things that we know about driverless cars is that their perception systems — what makes them see — are deeply flawed. So, if we know that the perception systems are deeply flawed, then we need to have a set of tests that the cars must be able to pass before they're allowed on public roads.