Bias generated by technology is “more than a glitch,” expert says
Apr 11, 2023

Meredith Broussard’s latest book examines how artificial intelligence systems exhibit and amplify bias when they’re trained on problematic data.

Artificial intelligence is practically all anyone in the tech world can talk about these days, as many of the biggest names in the industry compete for dominance with ever more powerful AI. But recently, some experts called for a timeout in development efforts to evaluate the harms these tools could cause.

Meredith Broussard, a journalism professor at New York University, says you don’t have to look far to identify some of those harms. Even before the latest generation of chatbots and image generators came on the scene, Broussard was tracking the ways technology exhibits and amplifies bias.

Marketplace’s Meghan McCarty Carino spoke to Broussard about her latest book, “More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech,” which was released last month.

The following is an edited transcript of their conversation.

Meredith Broussard: We tend to talk about bias in tech like it’s a momentary blip — a glitch is something that’s easily fixed in the code. When we see social problems manifest in technology, we need to stop treating them like glitches, we need to understand why social problems manifest inside our technologies, and we need to not pretend that these things are really easy to fix. So, when Google Photos labels images of Black men as gorillas, or when ChatGPT generates text that looks like it’s grooming a 13-year-old for a child predator, these are not just easily fixed code issues. These are human problems. And we can’t contribute to magical thinking about AI.

Meghan McCarty Carino: When we think about the biases that could be magnified by technology like this, what is particularly dangerous?

Broussard: I think that it really depends on context. One of the things that I’m excited about is the European Union’s proposed Artificial Intelligence Act, which divides AI into low-risk and high-risk uses. So, a low-risk use of facial recognition AI might be using facial recognition to unlock your phone. I’m not too worried about that. But a high-risk use of facial recognition is something like police using facial recognition on real-time video feeds. That has real potential for harm because facial recognition is not as good at recognizing people with dark skin as it is at recognizing people with light skin, so people with darker skin tend to get misidentified and are thus more subject to surveillance and to false arrest if facial recognition is used in this way. That’s a high-risk use — let’s maybe not do that. So, it’s all about context. It’s all about what kind of AI you’re using, how you’re using it and for what purpose.

McCarty Carino: There is a form of AI that is sucking up a lot of oxygen lately — generative AI and specifically large language models and chatbots. How do you see the ideas in your book playing out around this technology?

Broussard: The way a large language model works is you take a whole bunch of data, you feed it into the computer and you say, “Computer, make me a model.” The computer says OK, and it makes a model of the mathematical patterns that it sees in the data. Then, it can reproduce those patterns, and it can make new patterns that look like the old patterns. So, these systems can make new professional headshots based on whatever you put in, and DALL-E can generate images based on images that it’s seen before. It’s really cool and it’s really fun to play with; I really encourage people to open up ChatGPT or a similar tool and check it out. But once the novelty wears off, I want you to think about how these things are made. They’re fed with tons of data scraped from the internet — which is both wonderful and incredibly toxic — and they make new patterns based on what they’ve seen, which again, some wonderful, some toxic, and none of it is guaranteed to be true.
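Broussard’s description (feed in data, model the statistical patterns it contains, then generate new output that resembles that data) can be made concrete with a toy example. The sketch below is a minimal bigram language model in plain Python. It illustrates the learn-then-reproduce loop she describes; it is not how GPT-class models are actually built, and the tiny sample corpus is invented purely for the example.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Count which word follows which: the 'mathematical patterns' in the data."""
    model = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=10):
    """Reproduce the learned patterns: each next word is sampled from the words
    that followed the current one in training, so output mirrors the input."""
    word = start
    out = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:  # no pattern learned for this word; stop
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

# Invented toy corpus for illustration; a real model trains on web-scale text.
corpus = "the model learns patterns from data and the model repeats the patterns it sees"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

Even in this miniature version, the generator can only recombine what it was trained on, and nothing in the sampling step checks for truth. That is the mechanism behind Broussard’s point: toxic or biased training data resurfaces in the output.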

McCarty Carino: When it comes to the likelihood that these tools will magnify some of the biases and problematic patterns that they’re trained on, is it particularly dangerous that they sound human?

Broussard: Humans have this wonderful tendency to anthropomorphize things — we really want to recognize the human in other things. But when it comes to AI, we can’t get confused. We can’t imagine that all these Hollywood scenarios are actually happening. Our real AI is just math. It’s just machine learning models that are fed with data about the past, and they have all of the problems of the past built in.

Tuesday’s episode with Broussard is the first of a two-part series about her book “More Than a Glitch.” We’ll have more from her Wednesday about AI being used to assist with medical decisions, bias and all.

If you’re still not sure how to feel about AI, you can check out some of last week’s shows. We featured three different perspectives on the development of AI and the calls for a slowdown: an AI-hype skeptic, one of the voices calling for an experimentation pause and a researcher who says slowing development could do more harm than good.
