A New York law will require AI hiring systems to be audited for bias
Apr 14, 2023

The Automated Employment Decision Tools law will require companies to regularly audit their hiring software and notify job candidates of its use. Vikram Bhargava of George Washington University wonders whether these tools will undermine HR accountability.

New York City is gearing up to start enforcing a first-of-its-kind law. It requires employers that use artificial intelligence tools in making hiring decisions to have those systems audited for bias.

Since the law passed in 2021, the use of AI in hiring has only increased, Vikram Bhargava told Marketplace’s Meghan McCarty Carino. He’s an assistant professor of strategic management and public policy at George Washington University. The following is an edited transcript of their conversation.

Vikram Bhargava (Courtesy George Washington University)

Vikram Bhargava: Algorithms have been used for, for example, screening resumes and narrowing down the set of applicants that the [human resources] manager might then look at in more detail. They’ve also been used for things like video interviewing tools, where candidates’ answers are scored on some sort of algorithmic basis. Now, with the generative AI tools, I think what happens is they take over a lot of aspects of this process that would still have involved human decision makers, at least to some extent. For example, initial conversations with the recruiter, crafting rejection letters. This can come to bear on a fairly wide range of tasks in the HR process.

Meghan McCarty Carino: So this New York City law going into effect will require AI systems in hiring to be audited for bias. I mean, what could that actually look like in practice? Are there standards for this process?

Bhargava: You know, I think the thought is that it’s going to be third-party auditors. There’s a challenge of who is well-equipped to conduct these audits, and there’s a challenge of whether the audits themselves can permissibly be done, given the privacy issues related to client data. And I think that even if companies are able to pass this audit satisfactorily, or even with flying colors, it nevertheless doesn’t settle the question of whether that’s sufficient grounds to automate the process and defer entirely to the output the algorithm recommends, because there could be something lost there. Namely, the value of us being able to choose whom we relate to in the workplace.
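For a sense of what such an audit can look like in practice: the calculations typically center on selection rates and impact ratios, that is, the rate at which the tool advances candidates from each demographic category, compared against the category with the highest rate. The Python sketch below illustrates that arithmetic; the category labels and toy records are hypothetical stand-ins for whatever applicant data an auditor would actually examine.

```python
# Minimal sketch of the selection-rate and impact-ratio arithmetic behind a bias audit.
# The categories and toy records here are hypothetical, purely for illustration.
from collections import defaultdict

# Each record: (demographic category, whether the tool advanced the candidate)
screening_results = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

selected = defaultdict(int)
total = defaultdict(int)
for category, was_selected in screening_results:
    total[category] += 1
    selected[category] += int(was_selected)

# Selection rate: share of candidates in each category the tool advanced.
selection_rates = {c: selected[c] / total[c] for c in total}

# Impact ratio: each category's rate relative to the highest-rate category.
best_rate = max(selection_rates.values())
impact_ratios = {c: rate / best_rate for c, rate in selection_rates.items()}

for category in selection_rates:
    print(category, f"rate={selection_rates[category]:.2f}",
          f"impact_ratio={impact_ratios[category]:.2f}")
```

An auditor would run this kind of tally on real applicant data and flag categories whose impact ratios fall well below 1, which is where Bhargava’s questions about who conducts the audit, and on whose data, come in.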

McCarty Carino: Now, this New York City law was crafted basically in ancient history now, when we’re thinking about AI development, and may not account for some of the ways that these new generative AI technologies are being used in the hiring process. I mean, are there places where problematic biases or patterns could enter the workflow with what we know about how these new AI tools might be used in hiring?

Bhargava: It’s entirely possible. I think that, for example, ChatGPT is being proposed for use in performance reviews, another important dimension of HR processes. And this is somewhere it might pick up on certain keywords or correlations that don’t obviously seem bad, but may nevertheless be gendered. So let’s suppose people characterize an employee as “bubbly.” That is often a gendered term, one that’s used to characterize women. And this might, for example, unwittingly result in outcomes that don’t quite capture the HR dimension that was originally intended. And then the second set of issues I flagged is that even if it doesn’t do that and those problems are solved, there’s still the question of whether we should use these technologies, for example for reasons related to the value of choice that I characterized [being able to choose whom we relate to in the workplace].

But also, there’s this question: If, let’s say, my dean or my boss currently leaves me a performance review that I disagree with or take issue with, I have somebody I can approach who is in some way responsible for it. Now, if an employee gets a negative performance review from an HR algorithm, or a ChatGPT solution, whom should they hold to account for it? It risks yielding what’s sometimes called a responsibility gap, or responsibility vacuum, where a negative outcome in a performance review, or something like that, is handed to a candidate. And they, understandably, may be frustrated and seek clarification or perhaps even dispute something in there. And if they approach their manager and ask, “Well, why did you do this?” the manager may not know. The manager may very well say, “Well, I’m not quite sure, but this is what [ChatGPT] said to do.” And that seems deeply unsatisfactory.

I think there are a range of overlooked questions here. And one important question is, to the extent that firms want to benefit from these algorithms that they don’t understand (and there are, of course, significant benefits), why is it that they should also be insulated from taking responsibility just because they don’t understand them? So there seems to be an asymmetry: when bad outcomes result from these algorithms, well, we’re not sure who did it. It seems like the engineer didn’t anticipate the situation, and the manager didn’t know what was going on in the black box. It yields these situations where some bad outcome comes about, and it’s not clear whom to hold to account.

McCarty Carino: We are talking about this New York City law because it is so unique. There is not a lot of regulation in this space. Do you see this law having ripple effects?

Bhargava: I suspect that many jurisdictions have their eye on how this is going to unfold in New York. There are going to be interesting issues that arise with respect to enforcement, especially when you’re using algorithms that may not be straightforwardly explainable or interpretable. That’s not to say there aren’t, of course, significant downsides to overregulating technology as well. And there are a lot of important empirical questions that need to be studied to find out how to appropriately craft regulation.

More on this

Recently on the show, we talked about how algorithms can be used in firing employees and how deep learning could actually reverse bias against workers without college degrees. Meghan McCarty Carino also did some reporting back in 2021, when this New York City law passed, about the risk of AI magnifying bias in the hiring process.

The future of this podcast starts with you.

Every day, the “Marketplace Tech” team demystifies the digital economy with stories that explore more than just Big Tech. We’re committed to covering topics that matter to you and the world around us, diving deep into how technology intersects with climate change, inequity, and disinformation.

As part of a nonprofit newsroom, we’re counting on listeners like you to keep this public service paywall-free and available to all.

Support “Marketplace Tech” in any amount today and become a partner in our mission.

The team

Daisy Palacios Senior Producer
Daniel Shin Producer
Jesús Alvarado Associate Producer