Biden’s executive order aims to limit the harms of AI
Oct 31, 2023

The order requires artificial intelligence developers to share test data to ensure safeguards are being considered.

In 2017, then-MIT graduate student Joy Buolamwini shared the challenge of getting facial analysis software to notice her.

“Hi camera, can you see my face? You can see her face. What about my face?” she asks her webcam (see video below).

It couldn’t “see” her until she put on a white mask. The reason, argued Buolamwini, who is Black, was algorithmic bias. Fighting that bias is one goal of the executive order on AI unveiled Monday by the Biden administration.

Buolamwini is the author of the new book “Unmasking AI.” She told Marketplace’s Lily Jamali the executive order is a step in the right direction. The following is an edited transcript of their conversation.

Joy Buolamwini: This executive order sets out the framework for what standards should look like, what guidelines should look like. But really, we’re going to need to look at the implementation across a number of different agencies and offices. And so with what’s been laid out, it’s quite comprehensive, which is great to see, because we need a full-court-press approach given the scope of AI. It’s not a siloed kind of conversation, nor would it take a siloed set of provisions.

Joy Buolamwini (Courtesy Naima Green)

Lily Jamali: And how do you get implementation right, if you are the Biden administration?

Buolamwini: I think part of it is continuing what they have been doing in terms of making sure that you have a wide range of stakeholders involved in setting the frameworks and guidelines and standards. So in the past, one thing we’ve had to be really careful about is corporate capture. So if you only have the tech companies advising on what legislation should be and what regulation should be, it’s not too surprising that it would favor the company. So I think it’s critical to continue to include the voices of civil rights organizations. It’s critical also to include youth voices. And this is an area that I think could be expanded.

Jamali: And are there any areas where this executive order falls short?

Buolamwini: Where I certainly think the executive order could be strengthened is around biometric protections. A lot of my work focuses on bias in facial recognition technologies. Some of my earlier research showed gender bias, skin-type bias and racial bias in AI systems coming from Amazon, Microsoft, IBM and a number of other companies. And my concern with biometric rights is that we could be moving toward greater government adoption of these tools without having alternatives in place. A few years ago, I wrote about the IRS adopting a third-party vendor, without public scrutiny, as a way of accessing basic tax services. And so I would want to see biometric protections put in place at the federal level so that you are not required to submit your face data to access government services, and so that the alternatives provided, whether it’s going to a local post office or something else, are accessible and convenient, making them truly a real alternative.

Jamali: So if I’m not mistaken, this is something you’ve also flagged in the context of the Transportation Security Administration. Tell me about that.

Buolamwini: Yes, so we know that the TSA is planning on expanding facial recognition scans to over 400 airports. They have been piloting it in a couple dozen so far. At the Algorithmic Justice League, over the summer, we launched a campaign for travelers to share their experiences: whether they saw any signage, whether they knew they had the right to opt out. And what we were seeing appears to be coerced consent, or uninformed consent at best, where people don’t even know they have the right to opt out. With the reports coming in to the Algorithmic Justice League, the stated practices you might read on the TSA website, such as clear and visible signage, were not matching up with people’s lived experiences. And so I think it’s really important people have a clear choice when it comes to whether their biometric data is used.

The cover of “Unmasking AI: My Mission to Protect What Is Human in a World of Machines” by Joy Buolamwini, showing a Black woman with white-framed glasses holding a white mask over half her face. (Courtesy Malika Favre)

Jamali: What else?

Buolamwini: As a new member of the Authors Guild and also the National Association of Voice Actors, having just written a book and recorded the audiobook for it, I’ve been looking more at the creative economy and the impact of generative AI on many of these areas. And so I do think it’s important when it comes to creative rights that, first of all, there’s consent for the use of copyrighted data, which we haven’t seen in the development of so many systems; that there is compensation; that there is control over whether or not you want to be in it in the first place; and also that there is credit. And I definitely think many artists, creatives and writers have not been credited for their contributions to AI. And, in fact, it would not make sense to credit them if their copyrighted data was taken without permission. And so those four C’s for creative rights, I certainly would want to see strengthened with future actions from the administration.

Jamali: I want to get your reaction to something from NetChoice, which is a trade association that includes a lot of big tech platforms. Their response to this is they’re calling this executive order an “AI red-tape wish list.” They say it will stifle new companies and competitors from entering the marketplace and significantly expand the power of the federal government over American innovation. What do you think when you hear that kind of language, the “red-tape wish list,” specifically?

Buolamwini: I understand that industry doesn’t want to be regulated, and it’s easier to position accountability as being anti-innovation, which it is not. It reminds me of when cars were introduced. The car companies didn’t want seatbelts or licenses or traffic lights. But eventually, because people were pushing for safety, those guardrails were put in place, and it didn’t stop innovation. It did say we can innovate within parameters that minimize risk and minimize harms. And that is the job of the government: to do what industry is not incentivized to do.

Jamali: And does this executive order give Congress something to work with long term?

Buolamwini: Absolutely. I do think that process will be long and ongoing. But oftentimes you hear “how do you move from principles to practice,” and we’re getting a roadmap here.

More on this

It’s a lot easier to believe algorithmic bias is real when you see it for yourself. To that end, here is Joy Buolamwini’s demo from her 2017 TED Talk.

Buolamwini said preventing harm from algorithmic discrimination is critical, but so is redress for harms that people have already suffered. A white paper from University of California, Berkeley’s Center for Long-Term Cybersecurity notes there aren’t extensive mechanisms in place now to do this. Buolamwini said having systems in place to keep track of such incidents would help.

Housing discrimination fueled by AI against protected groups is just one real-world example. Monday’s executive order directs the Department of Housing and Urban Development to draft guidance to fight that, including in areas like tenant screening, within 180 days.


The team

Daisy Palacios Senior Producer
Daniel Shin Producer
Jesús Alvarado Associate Producer
Rosie Hughes Assistant Producer