Daily business news and economic stories from Marketplace

A responsible approach to artificial intelligence

Artificial intelligence, or AI as it’s usually known, is gaining ground fast. Microsoft’s Cortana is all over Windows 10, and German researchers claim they have introduced emotions to Mario, a famous video game character. All of this is making some people wary.

People like Ryan Calo, assistant professor of law at the University of Washington and an affiliate scholar at the Stanford Center for Internet and Society.

Calo recently signed an open letter that detailed his and others’ concerns over AI’s rapid progress. The letter was published by the Future of Life Institute, a research organization studying the potential risks posed by AI. The letter has since been endorsed by scientists, CEOs, researchers, students and professors connected to the tech world.

What they want is research that works toward creating socially responsible AI. That is, algorithms that don’t inadvertently “disrupt our values,” or “discriminate against people who are disadvantaged or people of color,” says Calo.

Isn’t it our responsibility to make sure that doesn’t happen? Sure, says Calo, but he doesn’t think it’s that simple or straightforward. For him, it’s more a question of how AI evolves and how much agency it develops. Could an AI system develop to the point that it breaks out of its given role and attempts to do more?

He says we need more research to understand how AI could be harmful, even if it isn’t at this moment. If we use AI to drive cars in the future, he adds, “it’s conceivable that they’ll act in harmful ways.”

“That’s a more plausible scenario than a robot twisting its moustache trying to plan to kill humanity,” he says. “What’s exciting about AI is precisely what’s dangerous about it.” 
