Twitter wants bounty hunters to help fix its image-cropping algorithm
Aug 4, 2021

Rumman Chowdhury of Twitter explains the strategy behind fixing the tool that favored white faces over Black ones and women over men.

Back in May, Twitter partially disabled an algorithm that cropped photos posted by users in ways that revealed certain biases. A company audit, and plenty of people on the internet, found the algorithm preferred white faces over Black faces and women over men. Now, as part of the hacker conference DEF CON, which starts Thursday, the company is offering a cash bounty to help fix the problem.

I spoke with Rumman Chowdhury, director of machine learning ethics, transparency and accountability at Twitter. Before that, she was founder and CEO of Parity, which helped other companies identify bias in their algorithms. Chowdhury told me the cropping algorithm was based on data tracking where real people tended to look in photos.
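That design, a model trained on where viewers tend to look that then crops around the predicted hot spot, can be sketched in a few lines. The sketch below is a hypothetical illustration in Python, not Twitter’s actual code: the `saliency_model` callable, the fixed crop window and the argmax centering are all stand-ins.

```python
import numpy as np

def crop_to_salient_point(image, saliency_model, crop_h=400, crop_w=400):
    """Crop a fixed-size window centered on the most salient pixel.

    `saliency_model` is a hypothetical callable standing in for a
    learned gaze-prediction model: it takes an (H, W, 3) image array
    and returns an (H, W) map of predicted viewer attention.
    """
    saliency = saliency_model(image)
    # Center the crop on the single highest-attention pixel. If the
    # saliency model systematically scores one group of faces higher,
    # the crop will systematically keep that group and cut others out.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)

    h, w = image.shape[:2]
    # Clamp the window so it stays inside the image bounds
    # (assumes the image is at least crop_h x crop_w).
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]
```

Nothing in that logic mentions race or gender; the bias lives entirely in what the underlying model learned to score as "salient." The following is an edited transcript of our conversation.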

Rumman Chowdhury (Photo courtesy Chowdhury)

Rumman Chowdhury: It’s such a great example of how bias can be unintentionally embedded into models through the source data. There are actually many ways in which this model could have been biased, and again, this is why we’re hoping that the public entering the algorithmic bias bounty will be able to think of the things that we didn’t think of ourselves.

Meghan McCarty Carino: What does what happened with that algorithm tell you about how biases permeate algorithms and what some possible solutions might be?

Chowdhury: One thing I’m thinking a lot about is systems-level bias. So it’s pretty obvious: You go to a website to buy something, and there’s probably some sort of algorithm deciding what order to show you things. But there are probably lots of algorithms before then that are somehow influencing this algorithm, defining a profile for you or making certain assumptions about you that you’re never going to be able to see and never going to be able to touch. So one thing I’m thinking about quite deeply is this notion of systems-level harm: What does it mean when multiple algorithms are working together?
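To make that concrete, here is a toy simulation of the idea; the groups, models and numbers are all invented for illustration. An upstream profiling model that slightly under-scores one group feeds a downstream ranker, and the skew it introduces is invisible if you audit only the ranker.

```python
import random

random.seed(0)

def upstream_profile(user):
    # Hypothetical profiling model: estimates "interest," but is
    # miscalibrated, under-scoring group B by 10% (the embedded bias).
    penalty = 0.9 if user["group"] == "B" else 1.0
    return user["true_interest"] * penalty

def downstream_rank(users):
    # Hypothetical ranking model: trusts the upstream score as a
    # feature and surfaces only the top half of users.
    ranked = sorted(users, key=upstream_profile, reverse=True)
    return ranked[: len(ranked) // 2]

# True interest is identically distributed across both groups.
users = [{"group": random.choice("AB"), "true_interest": random.random()}
         for _ in range(10_000)]
shown = {id(u) for u in downstream_rank(users)}

for g in "AB":
    members = [u for u in users if u["group"] == g]
    rate = sum(id(u) in shown for u in members) / len(members)
    print(f"group {g}: surfaced at rate {rate:.2f}")
# Group B is surfaced noticeably less often, even though the
# downstream ranker contains no bias of its own.
```

Even this two-stage pipeline shows the point; real systems chain many more models, each compounding assumptions made upstream.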

Bounty reward

McCarty Carino: The rewards for the bounty challenge aren’t huge amounts of money — $3,500 for first place; $1,000 for second. Did you think about offering more?

Chowdhury: (Laughs) Right now, to be honest, in the field of responsible machine learning and responsible [artificial intelligence], people do this work for free. And people should be compensated for their work. So it’s greater than zero, and we’re proud of that.

McCarty Carino: As part of this process, Twitter is making public some details about the algorithms. The tech industry isn’t always known for being transparent about that stuff. What made you choose to share this publicly?

Chowdhury: Twitter has a commitment to transparency. We have a goal of sharing all of the work we’re doing and building in the open. It is, of course, a labor of love to put some of this work out there and to be responsive and listen to what people want. But ultimately, we strongly believe that it leads to better output.

McCarty Carino: You mentioned that Twitter had audited the algorithm. This is sort of a new industry of auditing algorithms. You ran a company that did this called Parity. There’s another one run by Cathy O’Neil called ORCAA. How much growth do you see in this space?

“Immense market” for algorithmic audits

Chowdhury: There is an immense market for this kind of work, in part because the public is more aware of algorithmic bias and increasingly concerned about how algorithms are being used in their daily lives, but also because we increasingly see regulation. The European Union has already passed guidelines for artificial intelligence. We see the FTC as well as other U.S. entities making moves. It is something that companies are concerned with, but also something the public and governments are concerned with as well.

McCarty Carino: To that point, to what extent do you think it’s useful for audits to be coming from third parties like this? Does it, in some way, let companies off the hook for not fixing things maybe upstream from there?

Chowdhury: It takes a village; it requires an entire community of people to do this work well. There’s a really great paper by a U.K.-based organization called the Ada Lovelace Institute that looks at different ways of doing algorithmic audits. One of the things I really loved about that paper was that it highlighted the different roles that different actors play, what resources they have at their disposal and the kind of work they’re able to do.

As somebody who sits outside, maybe as a regulator, you have certain legal oversight, and you’re allowed within that legal remit to do particular kinds of work. If you’re in a civil society organization, again, you have a particular perspective, and you have particular information that maybe other people don’t have. For someone like myself who sits at a company, we often get access to data and models that somebody sitting on the outside wouldn’t necessarily get access to. So it’s not that I can solve the problem, or a regulator can solve [the] problem, or a group like Ada Lovelace or a startup could solve the problem. We all need to be engaged in this kind of work. We need multiple perspectives. We need different skill sets, and the only way for that to happen is for people to be attacking this problem from all angles.

McCarty Carino: Now that you are at Twitter, how are you changing or informing the company’s workflow to address some of these issues further upstream?

Responsible machine learning

Chowdhury: Responsible machine learning is a companywide initiative. We’re also instituting good model governance, so I’m working on risk assessments and looking at the models we have today. This field of responsible AI and responsible machine learning, especially in practice, is very, very new, and we’re all really aiming to build this technology in the right way.

McCarty Carino: Twitter is a tech company, so it’s very much in your wheelhouse to think about these things. But how should other types of companies that may be using algorithms more and more in their work start to grapple with these issues?

Chowdhury: I think that companies that are not inherently tech companies are probably the best places for us to institute ethical and responsible algorithms. And here’s why: A company that serves customers directly is probably more concerned with ensuring that whatever product they’re selling is being positioned in the best light, and it may be less concerned with “Is this algorithm cool?” which is sometimes what happens at overly tech-oriented places. The clients that I’ve had, the folks that I worked with during my time at [technology services company] Accenture who were at nontraditional tech companies, sometimes asked the best questions. The best way, I think, is to be knowledgeable about what the questions should be. One of the things about the bias bounty is that our rubric is public. We want people to pick this up and think about things the way our community has, and maybe this will spark questions that people can apply in their own industries.

Related links: More insight from Meghan McCarty Carino

Rumman Chowdhury mentioned a paper from the Ada Lovelace Institute, named, incidentally, after a fascinating historical figure: Ada Lovelace was a 19th-century mathematician, and the daughter of the poet Lord Byron, whose ideas are seen as foundational to computer programming. The paper takes up the role of outside regulators in evaluating bias in algorithms. A couple of months back, the Federal Trade Commission came out with guidance emphasizing that companies need to address bias in their AI or risk regulatory action.

History professor Mar Hicks at the Illinois Institute of Technology writes in MIT Technology Review about how the industry continues to erase the voices of women in tech, especially women of color, including Rumman Chowdhury. A recent New York Times piece about algorithmic audits mentioned only that the company Parity had been built “around a tool” Chowdhury created, not that she was its founding CEO.

Correction (Aug. 5, 2021): A previous version of this story misstated how Twitter’s algorithm was biased. The text and audio have been corrected.

