Could pausing AI development do more harm than good?
Apr 7, 2023

As thousands call for a pause in the development of artificial intelligence, Will Rinehart at Utah State University warns against applying old regulations to new technology.

We’ve been talking this week about the call by many experts to slow down artificial intelligence development. There are those who say we need time to mitigate its potential harms, and those who think this discourse overhypes the technology.

Others, like Will Rinehart, a senior fellow at Utah State University’s Center for Growth and Opportunity, argue that a pause now could do more harm than good.

Marketplace’s Meghan McCarty Carino spoke to Rinehart about the potential damage he feels could be caused by a temporary halt to the work.

Will Rinehart: I think the big harm that people are worried about is that, effectively, China and actors within China would be able to use the time period to integrate this technology and upgrade it for their own uses. Specifically, it would give them the ability to catch up and start deploying these technologies at scale. I think the other big part of this harm is much more of a branding harm. There's this call for a pause and a call to effectively create a set of standards for the industry, but no guidelines on what those standards should be. So, it creates the sense that there is a solution set out there, and that if we were to just pause for six months, we would be able to implement it.

Meghan McCarty Carino: And what do you make of the push for regulation at this time?

Rinehart: The interesting thing about the pause and the letter is that there was a line within the letter that effectively said, if companies don't do this voluntarily, then governments of the world should step in and implement a six-month moratorium. What I find interesting about that is the backstop of government regulation within all of this. We're having this conversation about ChatGPT and the related transformer technology, and simultaneously Europe has been going through its own process on an AI regulatory act, and ChatGPT doesn't really seem to fit into the categories they've laid out. It doesn't seem as though ChatGPT naturally would be regulated under their act. The Europeans are creating a categorization system with three tiers: technology that is completely banned, technology that would be regulated, and technology that would be unregulated. Those three categories are meant to cover all technologies, and ChatGPT seems to fit into both the regulated and the unregulated category. So, to me, it really seems as though previous efforts to regulate AI don't apply all that well. And that really worries me: this rush to either regulate or ban something when we don't yet have a full, clear grip on how these things work in practice or on what the regulatory system should be.

McCarty Carino: The European example that you gave really illustrates a central problem with regulating technology — especially when it’s moving as quickly as AI is right now — of the scale of movement in government and regulation and legislation compared to how fast things move in tech. What kind of problems arise from that and trying to marry the two?

Rinehart: The big problem that I've seen in the last couple of years when trying to fit technologies into a certain kind of regulatory structure is that, oftentimes, they don't naturally fit within those structures. For Facebook and Google, there have been conversations about trying to make regulation very similar to that of telephone systems, which I don't think naturally applies. You've seen this idea of trying to apply old ideas of regulation onto new technologies, and for better or worse, we need to rethink what these regulatory structures should look like. Personally, I think what that really comes down to is probably something that puts the emphasis on the Federal Trade Commission and the Consumer Product Safety Commission to go after nefarious actors. The real way to deal with this is the way that we've dealt with problems with large companies in the past, which is that the FTC and the Department of Justice bring cases against companies that have harmed consumers. That probably is the only way that we're, in the long term, actually going to be able to create the best products for individuals and ensure that consumers have all the benefits but are saved, effectively, from the worst harms.

McCarty Carino: We are already seeing some harms from these generative AI tools happening in the world. We’ve heard from creatives saying that they’re already losing work to these kinds of tools. How do we address some of these harms without regulation?

Rinehart: That is a very difficult problem, and I don't think we should be comfortable assuming that there are easy solutions to it. At the end of the day, what ends up happening almost universally over time is that people transition into new jobs. And generally, everyone does better: consumers do better, people make more money and productivity goes up. The big problem is what happens between here and the near future, during that transition. I'm not going to deny that the transition is oftentimes very bumpy. I think we need a much better understanding of how to retool industries and economies as they shift into newer technologies. This is kind of a fundamental problem that we're seeing writ large in the United States.

McCarty Carino: Some companies in this space, including OpenAI, have said that there should be regulation, and often this can level the playing field when there are conflicting incentives in the market.

Rinehart: Yeah, and that’s interesting because they’ve said that they want this, but what exactly does it look like? To me, that’s probably the most important part of all of this. You can say we want regulation, but you kind of need to lay out what that regulation should look like, or you should have some understanding of why that new world is different from the current world. I tend to go toward the tried-and-true things that have worked in the past, like the Department of Justice or the FTC going after bad actors when they harm consumers. We have to recognize that there are benefits to these technologies, but there are downsides, and we have to deal with them as they come to us.

As I mentioned, this isn’t the first time we’ve discussed the call for a six-month pause in the development of AI. Earlier this week, we talked to Gary Marcus, an AI expert and one of the authors of the book “Rebooting AI: Building Artificial Intelligence We Can Trust.” He’s also one of the now thousands of people who signed the Future of Life Institute’s letter calling for that pause in AI experimentation.

I asked him about national security concerns around pausing AI development and in particular, the risk Rinehart mentioned about China leapfrogging U.S. progress during the hypothetical six-month downtime.

Here’s what he said:

“People use ChatGPT to write drafts of recommendation letters and it’s really handy for that, but if you could do that 100 times better, would that mean you can take over the world? What is the actual thing that you would do with this system that we know is unreliable and can’t even play a decent game of chess without making up the moves? You’re not going to be able to use that for strategic planning and I’m not panicked over that. I think people are imagining that [China is] going to have this oracle that is going to be able to solve climate change or build new kinds of missiles. I don’t even really understand the specific worries. If I saw one specific thing where there was some strategic objective that could be accomplished with this technology, I would understand that argument against the moratorium, but I don’t actually see it.”


The team

Daisy Palacios Senior Producer
Daniel Shin Producer
Jesús Alvarado Associate Producer