How police are experimenting with AI
The push to integrate artificial intelligence — like large language models — in the workplace is hitting almost every industry these days. And that includes policing.
Reporter James O’Donnell with MIT Technology Review got an inside look at the ways in which many departments are experimenting with the new technology when he visited the annual International Association of Chiefs of Police conference back in October.
The event, which bills itself as the largest gathering for police leaders in the U.S., is not generally very open to the media. But O’Donnell was able to attend for a day to see how artificial intelligence was being discussed. He said police are using or thinking about AI in a wide range of applications.
Marketplace’s Meghan McCarty Carino spoke with O’Donnell to learn more about those applications, starting with AI-powered virtual reality training. The following is an edited transcript of their conversation:
James O’Donnell: Rather than having instructors and actors come in and guide police departments through different scenarios, which could range from de-escalating a situation out on the street all the way to active shooter scenarios, the pitch is that VR can do that more realistically and in a more engaging way for police officers. And the one that I tried was a little bit lacking. You know, the company said they had some connectivity issues on the expo floor with their Wi-Fi and internet and everything. And the point of the VR training was to go through a de-escalation process with this person, to talk them down, to figure out what the issue was. And to be honest, I found it a little bit unconvincing. There was some lag involved, and I can see some of the benefits of it. But, you know, like many pieces of VR and AR technology, I think it’s still sort of in the beginning [phases].
Meghan McCarty Carino: Another use you write about is kind of applying AI to analyze all of the many streams of data that are collected by police departments, and, you know, increasingly more streams of data. Tell us about how this might work.
O’Donnell: Yeah, so this is a trend that is going on in not just police departments, but government more broadly, [like] the Department of Defense. And the idea is basically that every piece of hardware or every sensor that you can have out in the world is producing lots of data. So for police departments that could be cameras, license plate readers, gunshot detectors, all of these sensors that police departments are increasingly deploying collect lots and lots of data. But up until now, it’s been really hard to sort of sort through all of that data to find insights to make much use of it. And there’s a lot of companies who are pitching AI as sort of the solution to that. And so there are cases you can think of where that would be really, really useful and important, right? I give them that.
There are some cases where, if you imagine, you know, you have a kidnapping, or you have a really time sensitive situation where someone’s life is in danger, and you’re trying to piece together, you know, security cameras from around the city with other sensors and get a real time picture of what’s happening, you could imagine that the AI is really useful for sort of finding the connections between those different data sources. On the other hand, you have a lot of privacy experts and people from organizations like the ACLU who consider this to be basically one more step towards a surveillance state, to basically over-police in certain areas and maybe under-police in others.
McCarty Carino: You also write about the application of AI in more banal ways, just doing boring, grunt work, filling out paperwork. I mean, what does that look like in policing?
O’Donnell: For police officers, one of the big complaints that departments have is that their officers spend far too much time writing up police reports. So one company I write about in the story is called Axon. They basically take body camera footage, they transcribe the audio from that body camera footage, and then they use that information in the audio to create a first draft of a police report. And in theory, it would save a lot of time. You know, officers do a lot of this report writing at the end of the day. Maybe their memory is foggy, maybe they’re fatigued. So there is an argument that using AI to save some of that time would be beneficial.
On the other hand, AI is fallible. It makes mistakes. And, you know, leaning on AI in writing these police reports would introduce AI into a document that actually plays a huge role in the criminal justice system. And to be fair, the companies building these police report generators have taken a lot of steps to make sure that police officers actually have to read and edit the report. I’ve spoken to some people who say it’s a problem if you take the body camera footage and you use it to generate a report, because it’s basically like showing a police officer footage of what happened before they’re supposed to write this report. So it introduces this kind of variability, and takes what was supposed to be two distinct sources of information and, in some ways, reduces that to one source of information.
McCarty Carino: In general, you write that the way U.S. police are adopting AI is “inherently chaotic.” Tell me more about what you mean by that.
O’Donnell: For the Department of Defense or the CIA or other national security agencies, they are going to be bound by one agency-wide policy of how they’re going to deploy AI, what they can use it for, what they can’t use it for. Police departments are just not organized that way. There are, you know, some federal rules that they, of course, have to abide by, but there’s far from a universal federal policy of how police departments can use AI and how they can’t use it. What’s more likely going to determine how those departments use AI will be decided department by department. So I think what’s lacking from a privacy perspective and an ethical perspective is one overarching federal rule of how these departments can and cannot use artificial intelligence.
McCarty Carino: And given the change in administration that we are about to see, the probability of an overarching federal rule seems lower?
O’Donnell: Yeah, and I think [in] Donald Trump’s first term in office, as well as in his campaign for president, he has spoken a lot about rolling back some of the police reform regulations put forth by the Biden administration, some of which have not been enshrined in law. But Trump has spoken pretty outwardly about rolling back some of the police regulations that have come about since 2020. He’s talked about giving more protection to police officers. He’s talked about increasing the use of practices like stop-and-frisk, and I think it’s likely that police departments who want to use AI in a really bold way in the next few years will probably be even more emboldened by his administration.