Are state and local governments embracing or banning generative AI?
Oct 3, 2023

Government employees, like many of us, will use technology to make their jobs easier and more efficient, says tech reporter Todd Feathers. He found that agencies are seeing the need for oversight and limits on how artificial intelligence is deployed.

A couple of weeks back, the news broke that a school district in Mason City, Iowa, was using ChatGPT to implement Iowa’s ban on books that include descriptions of sex acts.

One book flagged was Buzz Bissinger’s classic “Friday Night Lights.” The thing is, that book includes no such descriptions, according to the author himself.

“The banning of any books should be taken with the utmost seriousness and what bothered me about Iowa and Mason City was they didn’t even read the book,” Bissinger said in an interview with local TV station CBS 7 in Odessa, Texas, where the story was set.

Although the district reversed course, the episode illustrates how government officials are increasingly using artificial intelligence at work — in some cases prompting restrictions on tools like ChatGPT.

Marketplace’s Lily Jamali spoke with journalist Todd Feathers, who covered this recently in Wired. The following is an edited transcript of their conversation.

Todd Feathers: I think the most common use of generative AI that I heard about is using generative AI to summarize meetings, to create PowerPoint presentations. But there are city governments that are exploring more creative ways to use this technology and really see it as a way to improve some of the slowness of bureaucracy and the denseness and the inaccessibility of government material. So for example, Jim Loter, the interim chief technology officer in Seattle, told me that employees in Seattle had been considering — they haven’t launched this yet — but had been considering using generative AI to summarize reports from the Office of Police Accountability. So this is an office that investigates, you know, accusations of police wrongdoing; they put together these long, dense, comprehensive investigation reports. And the city, you know, could see a use for generative AI to simplify those reports, make them more accessible to the public. But there are a lot of risks that are involved in that. You’re putting the public’s very sensitive information into corporate databases, and it can be used in ways that are not entirely faithful to the truth.

Lily Jamali: And you also note that there are governments blocking or creating restrictions. What are their main concerns about having public employees use these tools?

Feathers: So the state of Maine, for example, has put a six-month ban on executive branch employees using any kind of generative AI technology. And state officials told me that this is really just because they want to see how the cybersecurity risks play out with generative AI. You know, in the early days, when OpenAI launched its first few versions of ChatGPT, we saw examples of, you know, prompt injection attacks [manipulating user input to large language models]. There were examples of users being able to see chat histories from other users, all these kinds of quirks and risks in a new technology that usually get kind of worked out before government is ready to adopt it. In this case, generative AI tools hit the scene so fast that places like the state of Maine feel the need to just say no until they can really assess it.

Jamali: Were there any noticeable differences in how smaller municipalities or states are trying to use this technology compared to larger places, larger cities and states?

Feathers: It’s a really good question. I don’t think so, at least I wouldn’t be comfortable making that kind of claim yet. I think what’s really interesting to me was not the distinction between big cities and small cities, but just between, you know, different localities in general and the personalities that make up those governments. So for example, the cities of Seattle and Boston are two of the first cities to release preliminary generative AI guidelines for their employees, saying, you know, you must cite that you have used generative AI if you use it for government purposes. You know, there has to be a human in the loop to confirm that the information is accurate. So a lot of the rules are the same between these two cities’ policies. But Boston has framed theirs in the sense that we want our employees to experiment with generative AI, we see this as a tool that has great potential, whereas Seattle has taken a slightly more cautious approach. If you’re an employee of Seattle, and you would like to use this technology, really justify that it is going to improve life for the citizens of Seattle and then really follow these strict rules. It’s not a “use first” kind of policy, if that makes sense.

Jamali: Yeah. And what are the politics involved here? You know, in your article, one of the interesting examples you raise is out of Mason City, Iowa, where city officials were using ChatGPT as a first step to implementing a book-ban policy. And I wonder if you have a sense of whether use of these tools skews one way or the other on the political spectrum.

Feathers: I don’t know that it skews one way or the other, that it breaks down along political lines. I think the Mason City case is very interesting because there you have an assistant superintendent and really a school district that was not thrilled about having to implement a book-ban law. They didn’t want to do this. And so ChatGPT was a tool that they thought would, you know, just kind of quicken the process. They had to meet this tight deadline for a thing they didn’t want to do, so they plugged some questions into ChatGPT about books, asked if they contain descriptions of sex acts and then did further research on the books from there before removing them.

Jamali: And they got a lot of backlash for that.

Feathers: They got a lot of backlash. And while that’s certainly a case that, you know, touches on some hot-button political issues right now, I think what it really illustrates is that government employees, like a lot of us, will turn to technology to make our jobs easier, and that frequently that will happen when we don’t want to do the job or the job is particularly complicated. And it really increases the risk of using these tools that are prone to spitting out false, convincing information.

Jamali: So do you think we’ll see more cities adopting AI technologies going forward?

Feathers: Yeah. It’s not really up to them, actually. One of the interesting things that I learned from talking to a lot of government employees was that, you know, these generative AI tools are just being inserted into the products that they’re already using without going through the normally very long and comprehensive procurement process that the government is used to. Because a lot of these generative AI tools can be bootstrapped into widgets and kind of integrated into existing products. And so you’re seeing government employees just kind of have access to these things, whether or not their governments have made any decision.

More on this

You can read more of Todd Feathers’ coverage of the ways state and city governments are figuring out how to use — or not use — generative AI here.

There’s also a more recent example from Amarillo, Texas, which plans to create a sophisticated chatbot “assistant” that will provide help in several languages.

It’s scheduled to be up and running early next year, according to Amarillo’s chief information officer.


The team

Daisy Palacios Senior Producer
Daniel Shin Producer
Jesús Alvarado Associate Producer
Rosie Hughes Assistant Producer