What kind of ‘intelligent’ is your artificial intelligence?

David Brancaccio and Ali Oshinskie Oct 22, 2018
(Photo by Kevin Winter/Getty Images)

Films have imagined the robots of the future with personality: the outspoken C-3PO, the quirky WALL-E, the maniacal Megatron. Next to the robots of Hollywood, Siri and Alexa are rather dull. Their scripts weren't written by Hollywood writers but by the engineers and programmers pioneering artificial intelligence.

All artificial intelligence, quirky or not, is programmed with a set of rules by which it operates. For autonomous cars, obeying speed limits and using turn signals are requisites, but what about life-or-death questions: hit the pedestrian or veer off the road? For robots that imitate humans, the stakes are usually lower but come loaded with social nuance, and so far Siri just doesn't seem like the best listener.

John Havens wants robots to be polite. As the executive director of the Institute of Electrical and Electronics Engineers' (IEEE) Global Initiative on the Ethics of Autonomous and Intelligent Systems, he's setting rules for artificial intelligence. He's designing a certification program for this technology, the Ethics Certification Program for Autonomous and Intelligent Systems, or ECPAIS. The program intends to set standards for products and services, covering both functionality and ethics.

Havens spoke with David Brancaccio about bias in engineering smart technology and how his certification program will work. The following is an edited transcript of their conversation.  

David Brancaccio: This is about putting human needs first when it comes to autonomous, self-driving robot technology?

John Havens: Well, I think in general it's sort of an idea of a contract, right? When you buy a piece of technology and bring it into your home, the expectation is it will be safe and all that. But with autonomous and intelligent systems, there are just new things that people have to understand are trustworthy. So certification is basically a peer-reviewed validation where both sides say, "Hey, this is what this thing is supposed to do. Here's A. B is we're proving that it's doing A."

Brancaccio: And part of this is transparency but another thing is reporting your algorithmic bias. Give me a sense of what that really means.

Havens: Sure, well first off, bias tends to have, understandably, a negative connotation to it sometimes, because we hear about racial bias, which usually implies racism. But we're all biased, right? You and I are both from the West, for instance, or if you're male. Being biased, again, you have to recognize what these things are so that you can ask, "What are the negative aspects of bias?" Which could then influence not just how the technology is created but how it's interpreted once it's being used by a particular person.

Brancaccio: So for instance, as engineers work on autonomous vehicles, self-driving cars and trucks, they have to program in different preferences. If a self-driving car has a choice of saving the occupants of the car in an impending crash, but at the expense of killing pedestrians, would the certification program demand that we know about that particular choice?

Havens: Yeah, I think the point is that … you're bringing up a famous … what's called "the trolley problem," and the logic is to write out all the potential scenarios that could happen. And the logic is you never want to have to make those types of choices. But right now, a lot of times those conversations happen outside the context of how many humans unfortunately are killed by cars as they are right now. So the logic is being assisted; it's sort of a "yes, and?" It's always intended to be a complement with these certifications, or the sort of contracts as I said. Or a handshake to say, "Look, we the people [who] created that technology are doing even more due diligence to try to be safe and trusted before you bring this stuff, you know, home for you and your family."

Brancaccio: So this is the famous trolley problem as it was originally posed. I gave it as a self-driving car problem. But the way the technology answers that question, is that something that the certification process wants to know more about or wants the end user to know more about?

Havens: In a sense. I'll give you an example; it's more of a cultural thing than an ethics thing. You know, a lot of times people think of ethics as morals, where a lot of our work is to reframe ethics. There's a phrase called "values-based design" or "values-sensitive design" or even "end-user or participant design." The assumption amongst engineers and programmers, of course, [is that] risk is always there; there are so many great standards and certifications around risk and harm. But a quick cultural example, which I think goes beyond the tunnel or trolley problem you mentioned: recently autonomous vehicles were tested in Australia, and the lidar, the technology that's used to sense the autonomous vehicle's surroundings, hadn't been programmed to recognize kangaroos. So the big bouncing, jumping-around motions weren't recognized, whereas, I'm based in Jersey, so squirrel darting motions were recognized.

But the point is that you have to ask even more questions than you had before when you have these new types of technologies, and that list of certification, I mean, you might get this granular: Are there kangaroos in the region where you're releasing this? And that actually goes back to safety. It's just a different type of safety question than some people may have asked before, but that's the due diligence, writing out these long lists. In one sense [that] is what a certification is. It's to say also, "How will this tool/system/product be interpreted?" in, say, a U.S. market versus a China market, where there may be very different cultural aspects [and] where the harm is not necessarily physical but could be something like an affront to someone's culture.

And a quick example there is if you build a robot that has something like eyes. In the States, we look at each other in the eyes as a sign of respect. That's how we talk to each other. In many Asian cultures, looking directly in someone's eyes could be considered rude. So if you built a robot whose eyes were looking straight into your eyes and you sent it to Asia, it might freak people out and they wouldn't use it. It's not actually harming them physically, but it's a new type of cultural awareness that happens with this rethinking of design that we're focused on.

Brancaccio: Among the many things that are so interesting about this is that as the technology advances, it requires human beings to decide what our values are, and what our values are in different countries. We have to actually define them and be aware of them.

Havens: Yeah, I actually wrote a book called "Heartificial Intelligence," and to your point, the central question that I asked is, "How will machines know what we value if we don't know ourselves?" And it's often shocking to me when I do talks about our work and I'll ask people, "Give me a list of your top 10 values." And it's pretty easy, the first three or four. People are like, "my family, my faith, integrity [and] honesty." And by about No. 6, people are like, "Uh, I like 'Game of Thrones,' is that a value?" And I'm like, "I'm also a fan." But when you list out these things, then you begin to see the interpretation of the values when they're manifested as a piece of technology.

So we have this paper called "Ethically Aligned Design." We've crowdsourced the first two versions; a final version will be available in February. Over a thousand people have touched it. And a big part of this is this idea of thinking about values, especially when you come from the West. All ethics questions, like the tunnel one you brought up, or the trolley as it's now called, often come, especially in Western media, from a Greek background of ethics, meaning Western ethics. Whereas when we have over 50 members from China and Korea who brought so much insight to our work, you get this sense of, "Hey, by the way, the entire other side of the planet and the global south has things called Confucian ethics or Ubuntu ethics." And it's a reframing of the very paradigm of how we come to these questions. And when you ask those questions that much more, this is where the word safety, or trust, can really be elevated to more global levels. That's what IEEE is about with consensus building globally: to say, "We don't know, we don't know yet, but we have to ask."

Brancaccio: So some of the technology that would be impacted by this hurts the heads of non-engineers. Does this idea about bringing ethics to bear on what you're designing hurt the heads of the engineers? I mean, some of the engineers I've met would like to solve the technical problem and not perhaps think more widely about its impact.

Havens: It's a great question. I think, you know, I don't want to say like it's a silver bullet, it's easy, you know. But I think our message is, look, it's the IEEE; it's been the heart of the global engineering community for over 130 years. And it's saying, "Look, don't tell an engineer about risk," right? Because of the old joke that you don't build the bridge to fall down. Engineers know ethics; they have professional codes of ethics. IEEE just updated, or rather redid, its code of ethics to include aspects of autonomous and intelligent systems.

The point, as a complement to the people creating this technology, is to say, "You weren't trained in applied ethics, you weren't trained in social sciences." But when you build something like a phone, for instance, how does the phone screen affect someone in terms of addiction? There's no way of knowing, if you are, you know, a person from a certain background, how the thing you're creating will affect someone from a social science aspect unless you have a social scientist on the team. So in our work, the thousand people I mentioned were thrilled that it's a cross-pollination of academics, policy makers, business people and people from these different disciplines. My dad, he has passed away, but he was a psychiatrist. So I'm always excited when a psychiatrist or social scientist comes into our conversations. That said, sometimes the engineers aren't excited about the ethicists and vice versa.

But here's the thing. Even in English, even Westerners in English, those conversations that we're having really come down to values: How are we building what we're building now? How are we interpreting what's being built? Engineers and social scientists, once they can talk to each other … and they are. And that's what ethically aligned design is, and these standards working groups that have come from [and been] inspired by that document. It's like a first step of saying, "Look, everyone has to work together because this is new stuff we're facing as a species." And values can't be interpreted by one discipline alone.
