Well, AI got quite the talking to this week
Apr 23, 2021


The Federal Trade Commission issued a strongly worded post Monday, warning companies against unfair or deceptive practices in their use of artificial intelligence as well as violations of fair-credit rules. It told companies to hold themselves accountable for their algorithms or “be ready for the FTC to do it for you.”

Also, the European Union this week drafted detailed legislation that would regulate AI, including banning some surveillance and social-credit scores. I spoke with Ryan Calo, a law professor at the University of Washington, and he said the FTC post was a surprise. The following is an edited transcript of our conversation.

Ryan Calo, a law professor at the University of Washington (Photo courtesy of University of Washington)

Ryan Calo: As I’m reading through it, my eyes are getting bigger and bigger and bigger. And I’m just sort of marveling at the language that the staff attorney has used. But basically, it’s a shot across the bow for those people who are using and selling AI systems, warning them that if they exaggerate claims about AI, or if they sell AI that has a racially discriminatory effect, that they should expect scrutiny from the Federal Trade Commission.

Molly Wood: My response when I first saw that note, though, was that we don’t really have metrics for this. There’s no consensus on how to judge whether AI is biased, and the FTC is understaffed. How meaningful is this threat, really?

Calo: Remember that the Federal Trade Commission doesn’t necessarily need to establish that this or that algorithm is fair according to some metric. What it has to establish is that the company engaged in an unfair or deceptive practice. Take, for example, the warning that you shouldn’t make claims about what artificial intelligence can do that are not supported by the evidence. That’s quite analogous to another context, where you might exaggerate the efficacy of some vitamin supplement. However, your point about bandwidth is very well taken. I’ve long thought, and I encourage policymakers to consider this every chance I get, that the FTC is understaffed relative to its mission, its charge.
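Wood’s point about missing metrics is worth making concrete. One of the older, much-debated checks from the employment context is the “four-fifths rule”: if one group’s selection rate is less than 80% of another’s, that’s treated as a red flag for disparate impact. Here is a minimal Python sketch with invented loan-approval data; the FTC post doesn’t prescribe this or any other specific metric.

```python
# Minimal sketch of the "four-fifths rule," one contested way to flag
# possible disparate impact in an algorithm's decisions. All data
# below is invented for illustration.

def selection_rate(decisions):
    """Fraction of positive outcomes (e.g., loans approved)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    A value below 0.8 is the traditional red flag."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical loan approvals (1 = approved) for two groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 75% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0]  # 37.5% approved

print(f"ratio: {disparate_impact_ratio(group_a, group_b):.2f}")  # 0.50
```

As Calo says, though, the commission doesn’t have to settle on a metric like this one; it only has to show an unfair or deceptive practice.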

Wood: Meanwhile, as this is happening, the European Union has proposed over 100 pages of potential AI regulations. Do you have a sense of what would change if some of that became law?

Calo: The proposal in Europe is, in fact, a comprehensive set of rules that, if they were ultimately passed, if they’re ultimately promulgated, would create some significant obligations and limitations on the use of AI in Europe. So, for example, there would be certain things that you simply wouldn’t be able to do. And there would be other things where, if you did them, you’d have to have a plan to mitigate risk. I think also, it will have an effect here. And even our own lawmakers on [Capitol] Hill, in D.C., would look to Europe.

Wood: I mean, taking all of these things together and other bans on facial recognition, is there a sense that we may be learning the lessons of the past and potentially getting at least a little bit ahead? Or maybe neck and neck with AI, in terms of regulation, while it is also becoming a bigger part of society?

Calo: I think there’s an opportunity here because I think that a lot of the long-standing societal ills, things that are certainly not new to the last decade, have really come to the fore, come into visibility because of our fascination and concern about artificial intelligence. And so what I’m hopeful about is that AI as a technology will get the kind of scrutiny that will help to — not totally — but help to dismantle some of these societal ills that are long-standing. That’s sometimes a role that technology has: It brings to the fore long-standing problems, long-standing failures to live up to our values. And that’s what I’m hoping is going to be happening here. I think the danger is to just stop at AI. So to say, “Hey, we’re going to ban facial recognition. There, we’ve addressed the problem.” I think everybody who works in this space is realizing that bias and inequity are baked into many different aspects of technology, and that AI is very visible and draws our attention. But to really address the kinds of problems we have in this country and others, we need to think more systemically.

Wood: Do you have any examples of what the EU law might do?

Calo: The way that the legislation is structured is to place different use cases for AI into different categories of risk. And so the highest, most concerning things are actually prohibited. So one of the prohibited artificial-intelligence practices is to use an AI system that does something that you and I have talked about on this show before, which is to manipulate human behavior to people’s detriment. So if you create an AI system that exploits vulnerable people — that’s actually prohibited, you can’t do it. There’s also a prohibition on using a general-purpose scoring system, the way that China apparently has a scoring-assessment tool, a social-credit scoring tool — that’s prohibited. And so that’s a direct response, I think, in many ways to how China is approaching AI. Indiscriminate surveillance — off the table. So there are these things that are not allowed. And what’s so interesting about it is that the things that are not allowed are digital harms. They’re kind of ephemeral. They’re bits, not bones, I like to say. That is to say that these are fundamental human rights that are being affected through code, through bits, through data.

Then you get down to high-risk systems, which are systems where you can still do it, but you have certain obligations. And those are all of a sudden things that affect physical safety and critical infrastructure. In other words, where bones are on the line. These are high-risk systems where the consequence could be that someone gets hurt physically or infrastructure gets hurt physically. In the United States, it’s often very difficult to escape liability when you hurt people physically, and too easy to escape accountability when you hurt them in a digital context. Whereas here, the EU is saying there are some no-go territories, and those are really these dignitary, digital harms. And that’s very, very interesting to me. And then, there are also some very specific requirements around documenting your system, being transparent, having a certain amount of human oversight, having enough robustness and accuracy. You have to document that you have certain quality-management systems and so on. So there are a lot of regulatory obligations that would come with this bill.
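A rough way to picture the structure Calo describes: the draft sorts AI use cases into tiers, with outright prohibitions at the top and documentation-style obligations attached to high-risk systems below. The Python sketch that follows is a simplified mental model built only from the categories and duties he mentions, not from the legal text of the proposal.

```python
# Simplified mental model of the EU draft's risk tiers, as described
# above. Tier names and obligations paraphrase the conversation; this
# is an illustration, not the legal text.

RISK_TIERS = {
    "prohibited": {
        "examples": [
            "manipulating behavior to people's detriment",
            "general-purpose social-credit scoring",
            "indiscriminate surveillance",
        ],
        "obligations": None,  # simply not allowed
    },
    "high_risk": {
        "examples": [
            "safety components of critical infrastructure",
            "systems that put physical safety on the line",
        ],
        "obligations": [
            "technical documentation",
            "transparency",
            "human oversight",
            "robustness and accuracy",
            "quality-management system",
        ],
    },
}

def describe(tier):
    info = RISK_TIERS[tier]
    if info["obligations"] is None:
        return f"{tier}: banned outright"
    return f"{tier}: allowed, subject to " + ", ".join(info["obligations"])

print(describe("prohibited"))
print(describe("high_risk"))
```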

And so it really is a sea change from the first part of our conversation, where basically the FTC, one of our big consumer watchdogs, is saying, “We’re paying attention to this space, and we’re going to use our amorphous preexisting authority to go after unfair and deceptive practice to look at the AI industry.” Versus “Here is a comprehensive scheme that is broken down by the kind of AI we’re talking about and has affirmative, specific obligations.” These are very far apart in terms of ways to manage, to govern AI. And yet they have in common that they’re both taking AI seriously and taking the harm seriously and thinking of it as being an important, transformative technology that requires change to law and legal institutions.

The U.S. Federal Trade Commission seal, seen during a press conference in Washington, D.C., on Jan. 3, 2013 (Mladen Antonov/Getty Images)

Related links: More insight from Molly Wood

I encourage you to read the FTC’s post because, as you probably know, government agencies aren’t always this plain-spoken. The FTC, in this case, is telling companies not to exaggerate what their algorithms can do and to be on alert for discriminatory outcomes. It’s also telling companies to be transparent, to let independent researchers audit their code and to tell the truth about how they use data. It’s common sense, in some ways, and it’s interesting to read in light of the more than 100 pages we mentioned coming out of the EU, because in some ways that’s the alternative, right? There’s also some writing about the EU’s risk-based approach and what its global impacts might be.
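One concrete shape this kind of transparency has taken in the research world is the “model card,” structured documentation of what a system is for, how it was evaluated and on what data; Margaret Mitchell, who comes up in the next item, co-authored the 2019 paper proposing the format. Here’s a minimal sketch; every field and value is invented, and nothing like this is mandated by the FTC post.

```python
# Minimal sketch of a "model card": structured, auditable documentation
# of a model's intended use, evaluation and data. The format comes from
# Mitchell et al. (2019); every value here is invented.

model_card = {
    "model": "loan_approval_v2",  # hypothetical system
    "intended_use": "pre-screening consumer loan applications",
    "out_of_scope": ["employment decisions", "housing decisions"],
    "training_data": "hypothetical 2015-2019 application records",
    "evaluation": {
        "overall_accuracy": 0.91,
        "selection_rate_by_group": {"group_a": 0.75, "group_b": 0.38},
    },
    "known_limitations": ["selection rates diverge sharply across groups"],
}

# An outside auditor can compare the published card with observed behavior.
for group, rate in model_card["evaluation"]["selection_rate_by_group"].items():
    print(f"{group}: documented selection rate {rate:.0%}")
```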

One thing I should note about the FTC versus the EU: The EU’s proposed regulations include big fines, up to 6% of global sales for violators, while here in the U.S., the Supreme Court this week gutted the FTC’s ability to recoup money for consumers in court when it finds that a company has violated regulations. Consider the agency’s 2016 settlement with Volkswagen in the diesel emissions scandal, in which it won nearly $10 billion to return to people and agencies that were harmed. The agency can still impose other consequences on companies, but it is asking Congress to restore its ability to recoup money for consumers.

Also, here’s a good Bloomberg story looking at the history of Google’s ethical AI group and how it had some internal ethical challenges of its own, even before the well-publicized exit of Black researcher Timnit Gebru and the later firing of her co-lead, Margaret Mitchell. Among other things, there were complaints of racism and sexual harassment, and apparently an ongoing argument over whether the researchers in the ethical AI group, including Gebru, would be allowed to look into data around Waymo, the self-driving car unit, and whether its pedestrian-detection sensors took into account skin color or whether someone was using a wheelchair or a cane, for example. The kind of transparency and independent auditing the FTC is suggesting, you might say.
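That Waymo question, whether detection works equally well regardless of skin color or mobility aids, is exactly what a disaggregated evaluation would surface: compute the metric per subgroup instead of reporting one aggregate number. A minimal sketch, with invented subgroups and results:

```python
# Minimal sketch of a disaggregated evaluation: detection recall per
# subgroup instead of one aggregate number, so gaps become visible.
# Subgroups and outcomes are invented for illustration.

from collections import defaultdict

# (subgroup, was this pedestrian detected?) pairs from a test set.
results = [
    ("darker_skin", True), ("darker_skin", False), ("darker_skin", True),
    ("lighter_skin", True), ("lighter_skin", True), ("lighter_skin", True),
    ("uses_wheelchair", False), ("uses_wheelchair", True), ("uses_wheelchair", False),
]

counts = defaultdict(lambda: [0, 0])  # subgroup -> [detected, total]
for group, detected in results:
    counts[group][0] += int(detected)
    counts[group][1] += 1

for group, (hits, total) in sorted(counts.items()):
    print(f"{group}: recall {hits / total:.0%} ({hits}/{total})")
```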


The team

Molly Wood Host
Michael Lipkin Senior Producer
Stephanie Hughes Producer
Daniel Shin Producer
Jesús Alvarado Associate Producer