Without AI regulation, the “information apocalypse” looms, expert says
Apr 5, 2023

Gary Marcus says he signed an open letter urging a pause in artificial intelligence development based on concerns about misinformation. He worries that digital manipulation will make the 2024 election a "train wreck."

Last week, more than 1,000 scientists and tech leaders, including Elon Musk, signed an open letter calling for a pause in the race to develop more powerful artificial intelligence models.

The letter channeled a certain dread that it seems many are feeling about this fast-changing technology. It also became a lightning rod for criticism, from both AI boosters and skeptics.

Gary Marcus is a signatory. He’s a professor emeritus of cognitive science at New York University and co-author of the book “Rebooting AI: Building Artificial Intelligence We Can Trust.”

Marketplace’s Meghan McCarty Carino spoke with Marcus about his reasons for joining the cohort calling for a time-out.

Gary Marcus: We haven’t put in the work yet as a society, especially not at an international level but even at local levels, to really know what to do. There’s no question that we need regulation, and there’s no question we don’t have any yet. We’re about to enter an information apocalypse. It is so easy now to make up fake videos about anything, or at least fake stills that are pretty convincing. We have to worry about bad actors taking advantage of these tools to make industrial-grade or wholesale-grade misinformation that looks really good. What’s the consequence of that? Who’s liable for it? There’s just not a lot of regulation or thought yet about how to do it. And so the idea with the letter was, let’s take a little time.

Meghan McCarty Carino: What are your most immediate concerns?

Marcus: First of all, I think that the 2024 election is going to be a train wreck. That is my most immediate concern. It’s going to be very easy, for example, to put out fake news stories that look like they are from authentic publications, with photographs and with multiparagraph stories. And it will be easy to put out many variations of that story to completely flood the zone with nonsense that looks so good that nobody’s going to be able to tell the difference. Anytime anybody says anything, there are going to be these countercampaigns with complete garbage. I’m afraid people are going to wind up in this place where they really don’t believe anything.

McCarty Carino: It sounds like you are more concerned with what already exists out there in the world than with the potential harms of future developments. I’ve seen you express online that you’re more concerned about malicious humans using this technology than you are about the machines becoming malicious.

Marcus: That’s 100% true, and also, as a scholar, I recognize both my own limits and that these concerns are not mutually exclusive. I guess some people signed the letter because they’re very afraid in the long term that we will lose control of the machines altogether, and you can be worried about that or not. Already, we don’t really have control of the machines in the sense that ChatGPT is a very unpredictable beast. I like to use the phrase that it’s like a bull in a china shop: it’s powerful, it’s reckless and we don’t really know how to control it. That’s only going to get worse. The other side of it is that people are giving the AI more and more authority. There are companies in Silicon Valley now trying to hook these things up to everything, like infrastructure and cars. Expanding the surface area of this technology, which is fundamentally unreliable and not tightly controlled, by connecting it to everything, which seems to be Silicon Valley’s goal this month, is just a bad idea.

McCarty Carino: Right now, there’s basically a race between these companies. Does there need to be some sort of incentive for them to slow down?

Marcus: Obviously, it’s not in the immediate short-term interest of these companies to do that. But they’ve all come out, actually, in favor of saying that we do need some regulation here. I don’t know that they’re going to agree to a pause, but they all acknowledge the need for regulation. Most of these harms are still things that we anticipate rather than things that have already happened, and some of them are difficult to measure; misinformation may already have risen, but we don’t really know. But at some point, the public will say, you’ve got to stop. At that point, the AI companies will say, instead of stopping, how about we do X? I keep thinking of horses leaving the barn, and the choice between closing the barn doors before the horses leave or after. I’d like to see us close some of the doors before the horses leave.

McCarty Carino: Are there other domains you can look to where something like this pause, or cooperation on a mass scale around something that feels inevitable, has happened?

Marcus: I’m not an expert here, but it has happened multiple times in the history of molecular biology. People have been concerned about the risks of the things that they’re building. There have been some voluntary pauses that were internationally accepted, like on cloning, for example, or germline alterations of various sorts. There have been a number of cases where people have said, this is risky, let’s pause here and figure out how we can do this safely, or even whether it is safe at all. So yes, people in other domains have thought about these things. What worries me is that whereas the biologists talked a lot to the ethics people and really tried to work this all out, so much in AI right now is being driven by people’s desire to just scale up GPT and see what they can do with it. I don’t think ethicists are being consulted that much, and I think it’s mostly being driven by money. I fear that may get us in trouble.

McCarty Carino: Whose job do you think it is to regulate technology like this?

Marcus: The only way it’s going to work is if we get the governments and the tech companies to agree that they need to work together. We need international structure around this, which is actually in the interest of the tech companies. They don’t really want to have 500 different laws in 500 different countries. Interestingly, just in the last week, we saw Italy ban ChatGPT and the United Kingdom put out a white paper saying it wouldn’t have a central regulator. Those are two pretty opposite extremes. And you can imagine, if every country winds up somewhere in the middle of that continuum with its own rules, that’s not really ideal for the tech companies or for society. So I think we need people to sit down together and try to work out something they can all live with, something that reflects the consensus values of humanity and not just profit motives.

You can find more of Marcus’ perspective on advancing technology and artificial intelligence in his podcast, “Humans vs. Machines.”

This week, we featured an interview with Emily Bender, a computational linguist at the University of Washington who is skeptical of the Pause Giant AI Experiments letter, which was posted at Futureoflife.org. She’s a co-author of the often-referenced paper about large language models called “On the Dangers of Stochastic Parrots.”

In a statement to “Marketplace Tech,” Bender said she agreed with some of the policy ideas in the letter but not how it framed the dangers with language she called “unhinged AI hype.” She said the focus shouldn’t be on the imagined doom of as-yet-unrealized superhuman AI, but instead on the risks of these tools that have already become apparent.

Many of those risks are the same ones Gary Marcus cited as reasons for signing the letter.


The team

Daisy Palacios, Senior Producer
Daniel Shin, Producer
Jesús Alvarado, Associate Producer