
A responsible approach to artificial intelligence

Ben Johnson and Aparna Alluri Jan 26, 2015

Artificial intelligence, or AI as it’s usually known, is gaining ground fast. Microsoft’s Cortana is all over Windows 10, and German researchers claim they have introduced emotions to Mario, a famous video game character. All of this is making some people wary.

People like Ryan Calo, assistant professor of law at the University of Washington and an affiliate scholar at the Stanford Center for Internet and Society.

Calo recently signed an open letter that detailed his and others’ concerns over AI’s rapid progress. The letter was published by the Future of Life Institute, a research organization studying the potential risks posed by AI. The letter has since been endorsed by scientists, CEOs, researchers, students and professors connected to the tech world.

What they want is research that works toward creating socially responsible AI. That is, algorithms that don’t inadvertently “disrupt our values,” or “discriminate against people who are disadvantaged or people of color,” says Calo.

Isn’t it our responsibility to make sure that doesn’t happen? Sure, says Calo, but he doesn’t think it’s that simple. For him, it’s more a question of how AI evolves and how much agency it develops. Could an AI system develop to the point where it breaks out of its given role and attempts to do more?

He says we need more research to understand how AI could be harmful, even if it isn’t at this moment. If we use AI to drive cars in the future, he adds, “it’s conceivable that they’ll act in harmful ways.”

“That’s a more plausible scenario than a robot twisting its moustache trying to plan to kill humanity,” he says. “What’s exciting about AI is precisely what’s dangerous about it.” 
