Should an algorithm play a role in child welfare decisions?
Jun 8, 2022

An AP investigation into an algorithmic tool used by social workers in one Pennsylvania county suggests the risk outweighs the benefit.

The deployment of algorithms and artificial intelligence can have unintended consequences.

Back in April, the Associated Press published an investigation into an algorithm used by one Pennsylvania county to help decide which families to investigate for child neglect and abuse.

Researchers found that, if not for the intervention of social workers, the algorithm would have exacerbated racial disparities. Since that report, the state of Oregon has stopped using a similar tool.

Sally Ho, investigative correspondent with the Associated Press and co-author of the report with Garance Burke, joined Marketplace’s Kimberly Adams to discuss the story.

Below is an edited transcript of their conversation.

Sally Ho: A tool like this predicts the risk that a child will be placed in foster care in the two years after they’re investigated. And the algorithm is a statistical calculation based on detailed personal data collected from birth records, Medicaid records, substance abuse, mental health, jail and probation records, among other government datasets. The algorithm then spits out a score, between 1 and 20, that is presented to a social worker who is deciding if that family should be investigated in the first place. The greater the number, the greater the risk.
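To make that mechanism a little more concrete, here is a minimal, purely hypothetical sketch of how a predictive-risk screening tool could turn administrative records into a 1-to-20 score. The record fields, weights and scoring function below are illustrative assumptions only; the actual Allegheny Family Screening Tool is a statistical model trained on historical outcomes, not hand-picked weights like these.

```python
# Hypothetical illustration only: the real screening tool's features,
# weights and scoring are not reproduced here.

from dataclasses import dataclass


@dataclass
class FamilyRecord:
    # Simplified stand-ins for the kinds of administrative data described above
    prior_referrals: int            # past child-welfare referrals
    on_medicaid: bool               # public-benefits records
    jail_or_probation: bool         # criminal-justice records
    behavioral_health_visits: int   # substance abuse / mental health records


def screening_score(record: FamilyRecord) -> int:
    """Map a family's records to a 1-20 risk score (higher = higher risk).

    The weights are invented for illustration; a real predictive-risk model
    would be fit to historical data, such as whether a child entered foster
    care within two years of an investigation.
    """
    raw = (
        2.0 * record.prior_referrals
        + 3.0 * record.on_medicaid
        + 4.0 * record.jail_or_probation
        + 1.0 * record.behavioral_health_visits
    )
    # Clamp into the 1-20 band that is shown to the screening worker.
    return max(1, min(20, round(raw)))


# The score is presented to a hotline worker deciding whether to screen the
# report in for investigation; it does not make the decision by itself.
family = FamilyRecord(prior_referrals=2, on_medicaid=True,
                      jail_or_probation=False, behavioral_health_visits=1)
print(screening_score(family))  # prints this hypothetical family's 1-20 score
```

Note what this sketch makes visible: the score is driven entirely by which records exist about a family, not by the content of the allegation itself, a point the social workers return to below.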

Kimberly Adams: Can you talk about what concerns of bias there were in the previous system before this algorithm, and then what happened once the algorithm was introduced?

Ho: The child welfare system itself has historically had very punishing effects on families of color and Black families in particular. The data is firm that Black children are more likely to end up in foster care and the least likely to ever get reunified with their families. So a prevailing concern for using data algorithms in child welfare work is that this could lead to “garbage in, garbage out,” or the idea that a flawed data premise will lead to a flawed data calculation. It’s well established that human bias is the problem. So does an algorithm that’s built on that biased data mitigate the human bias? Or can algorithms actually harden, or have the potential to worsen, existing racial disparities? Because now, social workers who are on the fence or otherwise swayed by this tool can then make their decisions with the confidence of science.

Adams: What did you see happening in Allegheny County?

Ho: The AP obtained exclusive research from Carnegie Mellon University showing that Allegheny’s algorithm, in its first years of operation, displayed a pattern of flagging a disproportionate number of Black children for mandatory neglect investigation when compared with white children. The CMU researchers found that from August 2016 to May 2018, the tool calculated scores suggesting 32% of Black children reported for neglect should be subject to mandatory investigation, compared with 20.8% of white children. The fact that the tool calculated risks higher for Black children was really concerning for the field, because there have been prevailing concerns that this will only serve to harden the racial disparities.

Adams: What has been the response to the criticisms about this program from officials who are piloting these programs?

Ho: In Allegheny County, you know, they’ve said time and again that this is an advisory tool, it doesn’t actually make the decisions. So the social workers can, you know, mitigate some of those disparities, because it’s meant to advise them, it’s not meant to take over their duties of deciding who should be investigated. But we do know that in other places since our story ran — in Oregon, for example — the state is actually dropping its tool altogether. The Oregon Department of Human Services announced that it’s dropping its Allegheny-inspired screening tool and will use a different process that it says will make better, more racially equitable decisions.

Adams: You and researchers working on this have also spoken to social workers who use this tool. What do they say about it?

Ho: The Carnegie Mellon study found that social workers largely disagreed with the tool philosophically. In their study, the hotline workers who were using it to determine which families were investigated had both technical and philosophical concerns. They noted that the algorithm couldn’t actually compute the nature of the allegation, for example, or take into account how serious or not the actual report was. The social workers also felt that the tool was designed for a different question than the one they were operating under as humans: whereas social workers felt their job was to assess immediate safety threats, the tool is designed to gauge future harm. Of course, the social workers also reported concerns about racial disparity, knowing that a wealthy family paying for drug rehab wouldn’t show up in the algorithm the same way a poor family on Medicaid would.

Adams: When you say that the algorithm didn’t take into consideration the severity of the allegation, what do you mean?

Ho: The tool itself pulls from historical family data, so the tool is really gauging how risky you are. You know, what are your risks as a family? And so they’re looking at things like jail, probation, truancy, things like that, and it’s debatable whether that information is relevant when someone’s reporting a hungry kid. You know, is it fair to compare one kind of risk bucket with the actual allegation in hand? And social workers actually had concerns about the other side of that, too: a family that doesn’t have these risk markers, but where the allegation is very, very serious — like a child witnessed somebody’s death, which was an actual example a social worker raised — the tool itself couldn’t actually weigh the seriousness of what was being reported.

Adams: What did the developers say when presented with these criticisms about their tool?

Ho: You know, the developers have presented this tool as a way to course correct, as they’ve said: that this can be a tool that changes the status quo in child welfare, which is really, you know, sort of universally felt to be problematic. People on both sides of the algorithm discussion acknowledge that this is a field that has been troubled, and that generations of dire foster care outcomes are proof that the status quo is not working either. And I think that’s part of the reason to consider really revolutionizing how cases are processed.

You can read the AP’s investigative piece here, as well as its coverage of Oregon’s announcement that it will stop using its algorithmic risk tool.

Allegheny County also gave its own response to the AP investigation into the Allegheny County Family Screening Tool, as it’s called. The county says the tool was only ever meant to “complement and augment the decisions that workers and supervisors make,” and it worked with other researchers who found the tool was “ethically appropriate.”

As Sally said, the AP based its reporting on research from Carnegie Mellon. Sally also pointed to a report from the American Civil Liberties Union on how other child welfare agencies are considering or using predictive algorithms.

The ACLU says it found that agencies in at least 26 states as well as the District of Columbia have considered using such tools in their “family regulation systems.” And 11 states had agencies using them at the time of the survey last year.

