EU’s tech regulatory framework protects its consumers, but can slow down innovation
Dec 19, 2023


Andrea Renda, director of research at the think tank Center for European Policy Studies, says the U.S. can learn a lot from European regulation, but that sector-specific legislation would be more flexible and pro-innovation.

When Google unveiled Gemini, its answer to OpenAI’s ChatGPT, earlier this month, the pitch was: AI that can run efficiently on everything from data centers to your smartphone.

But it came with a caveat for users in the UK and the European Union: you can’t use it there, for now. After the EU’s recent passage of the AI Act, Marketplace’s Lily Jamali spoke with Andrea Renda of the Center for European Policy Studies.

He says Google is trying to convince European lawmakers that Gemini complies with the continent’s tough privacy law, the General Data Protection Regulation (GDPR), which Renda says is likely why Gemini hasn’t made it to Europe yet. The following is an edited transcript of their conversation.

Andrea Renda: The GDPR mandates that whoever makes use of potentially personally identifiable data secures the express consent of users, and also that if the system is potentially available to minors, there are specific warnings in place. And there are several safeguards that have to be in place with respect to the use and management of data. That is the same reason why, for a while, ChatGPT was actually suspended in Italy first and in other countries in the EU. It was not really blocked, but there were some elements of compliance with existing regulation that were not yet fulfilled.

Lily Jamali: Is the spread of AI complicating the implementation of the GDPR? Is the GDPR maybe not written in a way that’s prepared for this moment?

Renda: I was about to say 100%. But actually, I would say 1,000%, because it’s really exponentially making things more difficult. Today we’re not looking anymore only for legislation that restricts the flow of data, but also for legislation that enables the collection of data and makes it available for powerful AI uses. There are a lot of people who say the GDPR only sees things one way, but there is a flip side of the coin, which is that data has to be available as much as possible, also for the EU to be competitive on AI, for example, or for services given to European citizens to be as accurate as possible, because the more data you use, the more accurate these systems tend to become.

Jamali: Well, I can tell you that here in the U.S. there is a sense, among certain people, that the EU, at least, is moving fairly quickly on regulating AI. But are there concerns about the EU falling behind when it comes to adopting AI into daily life, into work and so on?

Renda: There are enormous concerns. There is a hefty debate in Europe on the potential effect of the AI Act on innovation. But at the same time, there are people who say the AI Act doesn’t go far enough in protecting people against the risks of AI. And the debate is very confusing. There are people who say there are a lot of opportunities in AI, and that the AI Act focuses insufficiently on the opportunities and just focuses on risks. There are people who say the AI Act only focuses on high-risk AI applications, but all AI is risky, and the rights of people should be protected across all applications of AI, not only the high-risk ones. So the debate is not over yet. But I think the AI Act doesn’t make some of the obvious mistakes. It’s taken a long time; I think the process could have been sped up a lot more and made easier. But every time you tried to finish it, there’s something new coming up …

Jamali: Well, something new being in the middle of this: the public release of ChatGPT, among other things.

Renda: Exactly.

Jamali: So when we look at Google’s Gemini, what is the path forward? When do we potentially see it operational in the EU?

Renda: It might take, I would say, a few months. We’ve seen a similar thing with Meta’s Threads. [That] was also launched much later in Europe. Some people said it was just not a good social network.

Jamali: And they couldn’t launch it in the EU right away because of the laws there.

Renda: Because of GDPR in particular, but also potentially because of some of the rules on content moderation for social networks that were approved recently in the Digital Services Act. So there is a web of legislation at the moment that has just been approved, or is about to be approved and published, in the EU, and that makes it very difficult to assess whether you’re actually complying or not. And so in the case of Gemini, we don’t know exactly what the concern is, because we don’t have the full specification of what is being negotiated. But if it’s about GDPR, I think it should not take too long. It should probably take three or four months maximum, or probably less, before Gemini could actually be launched.

Now the issue is, is Gemini good enough? Is it likely to make mistakes? Is it likely to hallucinate? Is it likely to show flaws that could also lead to legal problems? We don’t know yet because, again, there has been a lot of speculation around the launch of Bard, but also the launch of Gemini. Is it as accurate as the DeepMind people say or not? When you deal with the launch of frontier AI models, almost nothing that gets said is entirely true, or it is only partly true. The reason I say this is that if Gemini works properly, and sufficient safeguards are in place on how to handle personal data, I would not see a reason to wait many, many more weeks. So it’s probably a matter of checking whether the system could be considered compliant with GDPR. But I don’t think it will take long.

Jamali: Given that the U.S. is moving forward with some kind of legislation on AI, what are some lessons, do you think, for policymakers here? What can we learn from what’s happening on your side of the ocean?

Renda: In my opinion, the U.S. has gotten it quite right, in the sense that what it has tried to do is be flexible in the legislation and be sector-specific, because the [AI] executive order in the U.S. really mandates different actions for different federal agencies. Now, that said, who’s going to get it right or wrong? Certainly the U.S. system is going to be more pro-innovation and a little bit less protective of users’ rights. And so we in Europe consider that our system is going to be more protective. But this depends very much on how well it will be implemented and how often it will be updated. The starting point is not bad, but implementation is everything.

More on this

How do you make laws to govern something that’s changing as rapidly as developments in artificial intelligence? Renda likes the phrase “adaptive regulation,” rules that are built to stay flexible as things change without requiring those rules themselves to be rewritten again and again.

He recently wrote that “[W]hile the subject matter evolves quickly, the underlying principles and goals typically remain unchanged.” Renda is a fan of the EU’s AI Act, details of which are still being worked out. But he acknowledges that for it to work, “the mode of implementation and compliance” will require constant attention.


The team

Daisy Palacios Senior Producer
Daniel Shin Producer
Jesús Alvarado Associate Producer
Rosie Hughes Assistant Producer