Algorithms are everywhere. They are what advertisers use to target users online and what search engines use to rank all those results in a particular order. Even governments use the data they collect to build algorithms that track, flag or analyze whatever they are looking for.
But there’s a growing fear that these algorithms are learning stereotypes, and therefore abetting data discrimination. Some algorithms, for instance, make assumptions about an individual’s ability to repay debt based on race. Basically, a lot of data goes into these “black box algorithms,” as they are known, and they produce results that are often discriminatory.
“I call it a black box because we don’t have access to these sorts of algorithms,” says Frank Pasquale, a University of Maryland professor of law. He explores the subject in his new book, “The Black Box Society: The Secret Algorithms That Control Money and Information.”
The algorithms produce results based solely on the data that was fed to them, but the trouble is that no one knows exactly how any given algorithm is crunching that data. Yes, algorithms are racist, Pasquale says, but they are also “reflecting the preferences of thousands and possibly millions of users.”
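The mechanism Pasquale describes can be made concrete with a small sketch. The data below is entirely made up for illustration, and the "model" is deliberately trivial: it shows how a system trained on historically skewed records will reproduce that skew, even though no protected attribute appears anywhere in the code.

```python
# Hypothetical sketch: a naive lender "model" trained on biased
# historical records. The zip codes and outcomes are invented.
# Each record is (zip_code, repaid_loan). In this made-up history,
# zip code acts as a proxy for a protected attribute.
history = [
    ("10001", True), ("10001", True), ("10001", True), ("10001", False),
    ("60601", False), ("60601", False), ("60601", False), ("60601", True),
]

def train(records):
    """Learn, per zip code, the majority repayment outcome."""
    tallies = {}
    for zip_code, repaid in records:
        yes, no = tallies.get(zip_code, (0, 0))
        tallies[zip_code] = (yes + repaid, no + (not repaid))
    # Approve a zip code only if most past loans there were repaid.
    return {z: yes > no for z, (yes, no) in tallies.items()}

model = train(history)
# The "black box" now denies every applicant from 60601, purely
# because the data it was fed was skewed against that area.
print(model)  # {'10001': True, '60601': False}
```

Nothing in the code mentions race, yet the output discriminates by neighborhood: the model is only "reflecting" its training data, which is exactly the problem with auditing these systems from the outside.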
He sees this as a problem because it’s likely to influence even people who don’t buy into such stereotypes; they may start thinking like the algorithm. He recommends something akin to “an anti-discrimination type of approach.”
If it’s true that we can never know how these algorithms work, then we must not allow certain results, he says. “We need to move beyond saying we just reflect what people think,” Pasquale says, “and make them [algorithms] more progressive.”