Real-world RoboCop: The ethics of using robots to apply lethal force
Dec 12, 2022

Ryan Jenkins, a professor of philosophy and an expert in ethics and emerging sciences, believes police agencies are likely to consider deploying weaponized robots when lives are at risk, a move that could make violence a more common tactic.

Last week, officials in San Francisco decided to scrap a plan that would have allowed law enforcement to use robots in situations that may require “deadly force.”

Specifically, according to the language of the ordinance the city’s Board of Supervisors initially approved, robots could have been used when “risk of loss of life to members of the public or officers is imminent and outweighs any other force option available.”

The plan was rolled back after a public backlash, but the technology is out there and it may be just a matter of time before it’s used by local police departments.

Marketplace’s Kimberly Adams spoke with Ryan Jenkins, a professor of philosophy and senior fellow at the Ethics + Emerging Sciences Group at California Polytechnic State University, San Luis Obispo. There’s a concern, he said, that the deployment of robots would lower barriers to the use of force, making violence a more common occurrence in police work.

The following is an edited transcript of their conversation.

Ryan Jenkins: So I think it’s unfortunately not terribly surprising that city officials, and that especially police departments, would find this kind of technology attractive or perhaps one day even irresistible. Anytime we have an opportunity to protect police officers from unnecessary harm or more harm than is required for them to accomplish their goal, we usually think that that’s a good goal to accomplish. So this is why, for example, we think it’s totally fine that police would have bulletproof vests. But there’s a deep anxiety that’s also quite understandable, that somewhere there’s a line being crossed between bulletproof vests as a kind of protection against force and lethal, remotely piloted or remotely operated robots.

Kimberly Adams: Whenever we hear about the use of force in the case of police officers, you often hear this narrative that they felt like their lives were at risk, or someone else’s life was at risk, and that’s why they used force. But if you’re talking about remotely detonated robots or some other kind of remote technology to apply force, doesn’t that kind of eliminate the reasoning behind why they’re using force in the first place?

Jenkins: So I think that this is a natural thing to think, although I think that the police have a good argument here. So you can imagine someone who’s threatening civilians or someone who’s holding someone hostage, for example. So it’s clear in that case that the lives of citizens are being actively threatened. And the police would be acting as a third party to intervene in that situation. And the question is whether the police would be risking their own lives by storming in, or whether they would send in a robot instead. Now, if it were merely property, or if it were merely damage against an object, like a robot that was being threatened, then I think that that significantly undermines or perhaps totally negates the argument for using lethal force. But I think that there are situations that we can imagine where the lives of police or the lives of citizens would be threatened, where this kind of force would be understandable.

Adams: And why lethal force rather than, say, disabling force? Like, I don’t know, shoot a tranquilizer dart at somebody.

Jenkins: Now that’s a very good question. And that, I think, really does go to the heart of the issue. The police are supposed to use force as a last resort. And in general, we think that if anyone’s going to use force, even if they have a good reason to do it, they should try to use as little as possible. That is, it should only be directed at people who are legitimate targets, and it should be proportionate. So it shouldn’t greatly outweigh the significance of the threat. So I think it’s exactly the right question to ask, to say, do we really need lethal force in this case? Or could something like an electric shock or a tranquilizer, or any number of other less-than-lethal uses of force, accomplish the same goal while harming citizens less? And I think that’s a very worthwhile and a very serious objection.

Adams: Do we have any examples of police using this type of technology already?

Jenkins: So one example that comes up is several years ago, in Dallas, I believe that there was an active shooter, where the police used a kind of improvised explosive device that was attached to a remotely controlled robot, and detonated that in order to incapacitate or kill the person who was threatening them or threatening citizens. That example made quite a splash at the time. But that concern and that furor died down quite a bit. I mean, even though a lot of folks like me saw that as a harbinger of things to come, I think that that was much less concerning than a city adopting an ordinance or a general policy that permits and licenses this kind of use of force regularly. And that’s a very different kind of decision that was being made.

Adams: How likely do you think it is that in the future, this becomes a more common proposition among local law enforcement?

Jenkins: I think it’s very, very likely. I think the reasons for that are quite understandable. The concern is that it might psychologically license the use of force in cases where it’s not as necessary. And by lowering that threshold to the use of force, it becomes much, much more common unless regulations come in that put a stop to it, and put a stop to it proactively and in no uncertain terms.

Adams: How has the tech industry responded to this in terms of having their tools, their robots, being used in this way?

Jenkins: So some companies, most notably Boston Dynamics, have pledged not to weaponize their robots. Now, I’m not sure if that means they won’t weaponize them themselves in-house but might still sell them to others and allow them to become weaponized [in the] aftermarket. And that would be very difficult to crack down on. In general, I think that there’s a very widespread anxiety and concern and trepidation about the weaponization of robots, especially on domestic soil. And I think that that widespread feeling partly explains why the city made an about-face on its decision so quickly. So I think that in general, you’d find very few companies that are willing to weaponize their robots, even if their partners were folks like law enforcement. However, I do think that you’d be able to find some companies willing to do it. I just think that they’d be few and far between.


The team

Daniel Shin, Producer
Jesús Alvarado, Associate Producer