What does it mean to “destroy” an algorithm?
Mar 22, 2022

The Federal Trade Commission has ordered some companies to destroy algorithms built on deceptively gathered data, a penalty that could affect them more than a monetary fine.

The Federal Trade Commission recently reached a settlement with WW International, the company formerly known as Weight Watchers. The FTC and Department of Justice accused WW of illegally collecting data from children as young as 8 without their parents’ permission through a weight loss app called Kurbo.

That data included details such as what the kids ate and their exercise habits. In addition to fining WW $1.5 million, the FTC ordered the company to “destroy any algorithms derived from the data.”

Kate Kaye is a staff writer covering artificial intelligence and data for the website Protocol. She explained what it means to “destroy” an algorithm.

Kate Kaye: As we understand it, it means removing algorithmic systems, algorithms or machine learning models that were built using data gathered through deceptive means. That's the simple version of what we think it means, in terms of how the FTC is defining it.

Kimberly Adams: So that’s how the FTC is roughly defining it. How is the FTC deploying it?

Kaye: So the FTC has deployed it just three times now, over the past three years or so, most recently in the case against the company formerly known as Weight Watchers, WW. Before that, it used it once against Cambridge Analytica in the Facebook-related case against that company, and more recently against a company called Everalbum, which had a photo-sharing app that was using facial recognition to identify [and] tag people in photos. And the FTC said, in addition to not being clear and open with people about the fact that you're using this technology, you gathered data from them in an illicit way. So the facial recognition tool that the company was building using that data, the FTC said, you have to destroy the product or the algorithmic systems that you've built using that information.

Adams: I’m curious about this. So the FTC orders these companies to not only delete the data they’ve collected, but also potentially the algorithm attached to it. But how do they enforce that?

Kaye: That's a very good question. We don't really know. All we know with a lot of these settlements is that the FTC requires companies to keep compliance reports. It doesn't really say much beyond that. People like me, who nitpick this language and try to understand what it all means, are left wondering. There's a kind of second story here in how a company would technically, actually delete ill-gotten data and the algorithmic systems built with it. It's an extremely complex process. A lot of times, these things are really intertwined throughout lots of processes, and the data might be replicated or distributed in all sorts of ways. So how does the FTC really know, outside of just, hey company, submit us a report every year? We don't really know.
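To make that disentangling problem concrete: before a company could destroy everything downstream of tainted data, it would first have to find it all, which is essentially a data lineage problem. Here is a minimal, hypothetical Python sketch of that traversal. The dataset and model names are invented for illustration; nothing here reflects how the FTC or any company actually does this.

```python
from collections import defaultdict

# Hypothetical lineage graph: each edge points from an artifact to things
# derived from it (backups, feature tables, trained models, cached outputs).
lineage = defaultdict(list)

def add_derivation(source: str, derived: str) -> None:
    lineage[source].append(derived)

def artifacts_to_destroy(tainted_source: str) -> set:
    """Walk the graph to collect everything downstream of the tainted data."""
    to_visit, found = [tainted_source], set()
    while to_visit:
        node = to_visit.pop()
        if node not in found:
            found.add(node)
            to_visit.extend(lineage[node])
    return found

# One ill-gotten dataset fans out into copies, features and models.
add_derivation("kids_food_logs", "kids_food_logs_backup")
add_derivation("kids_food_logs", "feature_table_v3")
add_derivation("feature_table_v3", "coaching_model_v7")
add_derivation("coaching_model_v7", "cached_recommendations")

print(artifacts_to_destroy("kids_food_logs"))
# All five artifacts; miss one recorded edge and something tainted survives.
```

In real systems the lineage is rarely recorded this cleanly, which is exactly Kaye's point about data being replicated and distributed in all sorts of ways.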

Adams: I'm imagining if I'm a coder or an engineer working at a company, I probably still know how to build a similar algorithm after we're forced to delete the old one. It may take some time, but the point is, what's to stop companies from just rebuilding the algorithm they had to destroy?

Kaye: Think of an algorithm as a recipe or a set of instructions that are then fed with, in this case, tainted data, right? They can absolutely write a new recipe or a new set of instructions that would then be built using data that is obtained in a fair and permitted manner. And certainly companies every day decommission or kind of take out of production algorithmic models because maybe they just aren’t working the way they want, or the feature of the app isn’t something they’re using anymore, whatever it might be. So there’s a distinction between just turning the thing off and actually deleting it. [There’s] no reason why they couldn’t invest in recreating a similar process that is employing ethically sourced data.
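As a toy illustration of that recipe-versus-dish distinction (all names and numbers below are invented), the point is that disgorgement reaches the data and the models fitted to it, while the "recipe," the training code itself, survives and can be re-run on properly consented data:

```python
from statistics import mean

def train_average_model(data: list[float]) -> float:
    """The 'recipe': a trivial model that just learns the mean of its inputs."""
    return mean(data)

# A model built on improperly collected data: both must be destroyed.
tainted_data = [68.0, 71.5, 69.2]        # hypothetical ill-gotten records
tainted_model = train_average_model(tainted_data)
del tainted_model                        # destroy the derived artifact
tainted_data.clear()                     # and delete the ill-gotten data

# The recipe was never tainted; re-running it on data gathered with
# consent yields a fresh model that the destruction order doesn't reach.
consented_data = [70.1, 72.3, 68.8]      # hypothetical clean records
clean_model = train_average_model(consented_data)
print(round(clean_model, 1))             # 70.4
```

A real machine learning model is vastly more complicated than a mean, but the asymmetry is the same: the costly, destroyed asset is the trained artifact, while the know-how to rebuild it remains.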

Adams: You’ve reported that the FTC has used this penalty three times. Do you anticipate that this is going to become more common?

Kaye: Every person in the policy and privacy sphere that I've talked to about this sees it as an indication from the FTC that it's going to use this enforcement mechanism in the future. They have those two earlier examples, and now they've got this new one. And one of the keys here with this new one is that it is based not only on the FTC Act, but on COPPA, the Children's Online Privacy Protection Act. In terms of the ability the FTC has to enforce against companies, it has a limited set of things that it can leverage.

Adams: How effective is this type of penalty, compared to, say, a huge monetary penalty?

Kaye: Well, a lot of people who look at this stuff suggest that monetary penalties are treated by companies as a cost of doing business. In other words, "This is a risk. We're willing to accept the risk." It's just something they factor in. Some people think this, by contrast, is a real deterrent, because data and the algorithmic systems built with it are how just about any business of a certain size operates and builds products and services. These things are potentially intertwined throughout the entire process of a company. So disentangling them is difficult, and because they touch so many things, it's not just like paying off a fine. It's something that can affect their entire business.

Related links: More insight from Kimberly Adams

The FTC's official statement on the WW settlement. It includes some details from the original complaint against Kurbo, including allegations that the app encouraged kids to lie about their age to use it and that parents who actually did sign their kids up weren't given enough information about how their kids' data would be used.

If you need more evidence to suggest the FTC will keep this tool at the ready for similar cases, there is at least one vocal supporter of “algorithmic disgorgement,” the formal name for the process.

Current Federal Trade Commissioner Rebecca Slaughter wrote about it in a paper last year for the Yale Journal of Law & Technology, in which she says, “This innovative enforcement approach should send a clear message to companies engaging in illicit data collection in order to train AI models: Not worth it.”

Kaye had a follow-up piece that lays out some of the many challenges ahead in figuring out just what it means to destroy or disgorge an algorithm.

For example, let's say the same algorithm used to create a problematic data set was also used to create other, nonproblematic data sets. Does that mean you have to delete everything?

And what about other programs or companies that may have already used that problematic or illegal data? What do they have to destroy?

These are all questions the FTC will need to answer to fully deploy this tool against tech companies that illegally gather consumer data.

