The complications of regulating AI
Apr 27, 2023

The idea that advancing technology outpaces regulation serves the industry's interests, says Elizabeth Renieris of Oxford. Current oversight methods apply to specific issues, but "general purpose" AI is harder to keep in check.

When it comes to regulating artificial intelligence, do we need to reinvent the wheel for every new advance?

It certainly feels like AI has been moving faster than policy. In the time it took to go from GPT-3.5 to 4, Congress has … well, you know.

In recent months, experts have called for a pause in AI development to let human systems catch up. Already, these models are raising alarms about misinformation, job losses and intellectual property protection.

But do we need new legal tools to regulate these new tech tools?

Not necessarily, according to Elizabeth Renieris, a senior researcher at Oxford University’s Institute for Ethics in AI, who says our existing legal frameworks can do the job. She told Marketplace’s Meghan McCarty Carino that the building blocks of the technology — people, companies and data — are already subject to existing regulation and legal principles.

The following is an edited transcript of their conversation.

Elizabeth Renieris: When you think about what these technologies are, they come down to three building blocks: there are people, there are companies and there are data. People, companies and data are all subject to existing regulations from all manner of legal fields and disciplines. If we think about this in terms of consumer protection, we have basic deceptive and unfair trade practices precedent. If we think of this from IP and copyright, certainly there are many bodies of law to bring to bear there. If we think about this from data protection and privacy, we can address some of the data-related aspects. If we look at this from a broader human rights lens or framework, again, there are many rights and freedoms implicated and impacted that we have dealt with in the context of other tools and technologies. So I think it’s inherently the wrong lens to look at this from the perspective of AI. However, it serves a lot of powerful interests to focus in on the technology and lose sight of the fact that, again, this is really just people, companies and data.

Meghan McCarty Carino: What do you mean it serves these interests?

Renieris: Well, I think this has been a strategy of, frankly, Silicon Valley and the private sector for many decades now, which is this defer-delay-distract approach, right? Pretend there are no laws and regulations that apply, then call for new laws and regulations, knowing that those will potentially take years, if not decades. There’s a constant kicking-the-can-down-the-road mentality, and the way they perpetuate that is by pretending there’s some new magical technology that we can’t wrap our heads around because it’s so unprecedented. I think the same thing is happening around AI.

McCarty Carino: Do you see any big gaps in our existing policies when it comes to combating some of these harms?

Renieris: I think one of the biggest challenges, which Europe is really grappling with right now in its AI act, for example, is this question of general-purpose AI. That does complicate things, in the sense that when the people designing or developing these systems don’t have an intended purpose in mind, the systems really are general-purpose: anyone can then go and access or use or integrate them into their own tools for different purposes in different contexts and sectors. It is tricky to fully assess the foreseeable and unforeseeable harms and potential use cases when you’re designing something and you don’t know whether it’s going to end up in a medical chatbot, for example, or in a learning tool or in an entertainment context. That’s really where we need to close the loop. And that’s where, in some sense, it is helpful to have sector-specific laws, such as the ones I mentioned earlier, where states are applying some of this in certain contexts, like employment.

McCarty Carino: One thing that’s come up a lot in our discussions on the show is the threat of industrial-scale misinformation coming from these tools, especially given the difficulty our infrastructure has had dealing with social media on that front. What do you see as tools there?

Renieris: This is something that definitely keeps me up at night. I have major concerns about 2024. I don’t think we’re prepared for what we’re about to see; if we thought 2016 was bad, we’re in for a really unpleasant surprise. Again, the scale of these technologies is very different. They are much easier for anyone, from an individual to smaller entities to large entities to nation-states, to adapt, integrate and deploy in a way that wasn’t true of previous technologies. So I think all of the concerns we’ve seen around mis- and disinformation on social media are going to be that much more amplified. If there is any policy intervention that’s really going to help there, unfortunately, I think it means leaning more into the ex ante side of things, as in limits and rules that apply pre-deployment.

McCarty Carino: So what would that look like?

Renieris: Of course, it depends on the use case and on the technology. For example, going back to what the [European Union] is trying to do, it’s looking at these high-risk use cases and imposing additional ex ante requirements on them before they’re released on the market. Another thing worth noting here, which is very tricky from this standpoint, is that there’s a very blurry line between research and development and commercial deployment in the context of AI. The companies developing these tools will say they’re doing research, right up until the point where something is suddenly on the market and is now a commercial deployment. And because it was in that research context, we backed off from a regulatory standpoint, and by the time we intervene, it’s too late. So I think even in the R&D context, we probably need to revisit whether there are limits that can be imposed there.

McCarty Carino: There’s this perennial concern, raised whenever we talk about regulating new technology, that the pace of policymaking fundamentally can’t keep up with advances in technology. Do you see that as a problem?

Renieris: I think it’s tempting to adopt that view because, in a way, it lets us off the hook, right? We start to feel there’s a certain inevitability to the fact that these technologies are going to outpace us. One way to counteract that, again, is to stop starting from the point of view of the technology and to always return to the point of view of the people affected. I mean, I catch myself having to do this; it’s very easy to get distracted by the tech. The more we can return to the human experience of this technology, the better chance we stand, because then you really can contextualize these things.

Renieris’ book, “Beyond Data: Reclaiming Human Rights at the Dawn of the Metaverse,” came out in February. Echoing the point she made with us about fixating on new technology, she argues in the book that our focus on protecting “data” can obscure the real-world harm happening to people and societies.

The call for a pause on AI development was first made publicly in an open letter now signed by thousands of scientists and tech leaders. Other academics, however, are skeptical of the hype surrounding AI.


The team

Daisy Palacios, Senior Producer
Daniel Shin, Producer
Jesús Alvarado, Associate Producer