This week, facial recognition software company Clearview AI settled a lawsuit with the American Civil Liberties Union. The group sued Clearview in 2020 for allegedly violating the Illinois Biometric Information Privacy Act. While the case deals with a state law, the settlement has national implications, including limiting who can access the company’s faceprint database.
Clearview AI says that database now contains some 20 billion facial images. I spoke with Calli Schroeder, global privacy counsel at the Electronic Privacy Information Center, who said the lawsuit focused on the use of biometric markers, including faces. The following is an edited transcript of our conversation.
Calli Schroeder: Essentially, if you’re using a biometric marker to track a person’s identity in Illinois, there are consent procedures and permissions that you have to go through in order to do that. And Clearview had not received consent or even, as far as we know, sought consent from any Illinois residents in their collection and use of faceprints. So that was what the ACLU brought suit over. They were actually bringing suit on behalf of several plaintiffs: survivors of domestic violence and sexual assault who may not want their face markers online where abusers could identify them, undocumented immigrants, current and former sex workers, and other communities that are at a heightened risk of identification that way.
Kimberly Adams: So it’s been about two years since that lawsuit was filed. The agreement is in. What are some of the major components of this settlement?
Schroeder: One of the biggest ones is that Clearview is permanently banned from granting paid or free access to its facial recognition database to private entities. That includes both private companies and private individuals. And that part of the agreement is a nationwide portion, not just in Illinois. They also have to have an opt-out request form on the website so that Illinois residents can request that they not be included in search results, or they not be included in the database. And then, they’re also banned from granting access to the database to any state or local government entity in Illinois, so that includes law enforcement, and that’s for a period of five years.
Adams: Clearview AI hasn’t just dealt with scrutiny here in the U.S. How does this settlement compare to some of the other penalties the company has faced, both here and abroad?
Schroeder: Abroad, it’s an interesting proposition. In Italy, they faced an actual monetary fine, I believe it was 20 million euros [$21 million]. In the U.K., it was a 17 million pound [$21 million] fine. In France, I don’t believe there was a monetary fine, but Italy, the U.K. and France all ordered Clearview to remove photographs from their database, which is a time-intensive project and also substantially lowers the number of faceprints they can say they have there. And it will be interesting to see whether U.S. entities like the Federal Trade Commission would be interested in trying to exercise that level of authority when it comes to these kinds of practices.
Adams: What questions or privacy concerns about Clearview AI are not addressed in the settlement?
Schroeder: The broader question of whether and how we allow facial recognition to be broadly used in society is completely unaddressed here. And part of that is because it’s much more of a philosophical, existential question that’s maybe not appropriate for a lawsuit specifically. But there is this ongoing debate about how appropriate it is to allow a technology to proliferate when it’s based on something that you can’t change. You can’t change your face the way you would a password if an account got compromised; your face is your face. So if you’re going to be functioning in public or functioning in the world, that’s a piece of information about you that is always visible and that you can’t change. So we need ongoing discussion about what the appropriate use of that is: what’s appropriate when it comes to tracking, whether there should be a full ban on facial recognition, or whether it should only be allowed in certain circumstances with warrants. I think we have to have that discussion, because this technology is not going to go away unless there are such strict bans and restrictions on it that it no longer appears to be worth it.
Adams: What are the limitations and strengths of statewide privacy laws? And how are we seeing them show up?
Schroeder: The benefit of state privacy laws is that there’s frequently an ability to pass a stronger law than what you may be able to get on a federal level. We’ve been pushing for a federal privacy law for a very long time, and there is just not a lot of movement there. So individual state laws are a very good way to address privacy protections more quickly and protect individuals in those states. And we hope that by getting good privacy laws in multiple states, eventually you hit a critical mass of companies saying, “Well, we have to comply with these standards in this state, this state and this state. Why don’t we just make this our baseline standard?” The problem with the state-level approach is that until you reach that critical mass, privacy protection really is a patchwork: people in certain states functionally have more rights over their information than people in other states.
Related links: More insight from Kimberly Adams
Both the ACLU and Clearview AI are calling this settlement a win — or, as Clearview said in a statement emailed to Marketplace: “a huge win.” A company representative went on to say, “Clearview AI will make no changes to its current business model. It will continue to expand its business offerings in compliance with applicable law. And it will pay a small amount of money to cover advertising and fees, far less money than continued litigation would cost.”
The ACLU calls the settlement a big win and is urging more states to implement tough privacy laws.
Meanwhile, other tech companies are clearly paying attention. Axios reports that Facebook has turned off filters and avatars that use augmented reality for users in Texas and Illinois — you know, how some add cat ears or virtual sunglasses to your face while chatting. Parent company Meta says it doesn’t think this kind of AR counts as facial recognition technology, but better to be safe than sorry, it seems.