“Model collapse” shows AI doesn’t have the human touch, writer says
Sep 21, 2023

"To make a really good AI, you need real prose written by real humans," says Clive Thompson, author of "Coders."

Chatbots like OpenAI’s ChatGPT have become pretty good at generating text that looks like it was written by a real person. That’s because they’re trained on words and sentences that actual humans wrote, scraped from blogs and news websites.

But research now shows that when you feed that AI-generated text back into the models to train a new chatbot, after a while it sort of stops making sense. It’s a phenomenon AI researchers are calling “model collapse.”

Marketplace’s Lily Jamali spoke to Clive Thompson, author of the book “Coders” and contributing writer for The New York Times Magazine and Wired, about what could be a growing problem as more AI-generated stuff lands on the web.

The following is an edited transcript of their conversation.

Clive Thompson: The problem with AI output right now is that it’s really good, but it’s sometimes just a little off. Like, you’re talking to ChatGPT and it’s 99% of the way there, but it’s still 1% inhuman in kind of a weird way. So model collapse is those little inhuman things getting compounded and rolled up, because the AI is being trained on a previous one, so it’s kind of learning that weirdness, and then it starts turning into a little snowball.

Lily Jamali: It starts to get really bad, right?

Thompson: If you do it over several generations, the first generation is kind of making sense and the second one is starting to say odd words. An analogy might be if you were to take a photo of the “Mona Lisa” or a picture of King Kong on top of the Empire State Building, and you were to photocopy it, and then photocopy the photocopy, and then photocopy the photocopy again. Eventually, it starts to look really weird, because the photocopy is 99% accurate. But that 1% inaccuracy will change maybe the contrast or the color, make it a little too white, a little too black. And then after the 100th time, you’ve got a really weird-looking picture. And that’s a little bit like what happens with model collapse.
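
As a rough back-of-the-envelope illustration of that compounding (these numbers are ours, not Thompson’s, and real model collapse isn’t a single multiplicative fidelity figure): if each copy keeps about 99% of the previous copy’s fidelity, a hundred copies keep only about a third of the original.

```python
# Rough illustration of how a 1% per-copy loss compounds over 100 copies.
fidelity = 0.99 ** 100
print(round(fidelity, 2))  # ~0.37, i.e. roughly a third of the original remains
```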

Jamali: Walk me through how researchers discovered this phenomenon of AI model collapse. What was their process?

Thompson: The researchers were interested in the situation right now where there are people using ChatGPT and similar chatbots to create prose and then posting it to the internet. They thought, if OpenAI is scraping stuff off the internet to train its next models, what’s going to happen if they start encountering a lot of AI-written prose when they’re scraping it? So, they tried to recreate that scenario themselves. They used the same techniques that OpenAI or Meta or another company will use to create a language model and they fed it data. They said, “Here’s a bunch of data. Ninety percent of it is human prose and the other 10% is machine-generated prose.” They would use that data to build a model, and then they would use it to generate prose. Then they did that over several generations, sort of rolling the snowball for three or four iterations, and they could see how progressively unglued it got.
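
The loop Thompson describes, train a model on a mix of human and machine-generated text, generate new text from it, then train the next generation on that output, can be sketched in a few lines of Python. The sketch below is not the researchers’ code: a toy bigram model and a tiny inline corpus stand in for a real neural language model and a scraped web corpus, and the mixing recipe is an illustrative assumption. It only shows the shape of the recursive process.

```python
# Minimal sketch of the recursive-training loop described above. The real
# experiments fine-tuned large neural language models on scraped web text;
# here a toy bigram (word-pair) model and a tiny inline corpus stand in for
# them, and the mixing recipe is an illustrative assumption, not the paper's.

import random
from collections import defaultdict


def train_bigram_model(text):
    """'Train' by recording which words follow which in the training text."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model


def generate(model, length, seed=None):
    """'Generate' by sampling a chain of next words from the model."""
    rng = random.Random(seed)
    word = rng.choice(list(model))
    out = [word]
    for _ in range(length - 1):
        followers = model.get(word)
        word = rng.choice(followers) if followers else rng.choice(list(model))
        out.append(word)
    return " ".join(out)


# Hypothetical stand-in for a large scraped corpus of human-written prose.
human_prose = (
    "the church tower was completed in the fourteenth century and the "
    "masons who built the tower used local stone quarried near the village"
)

training_data = human_prose
for generation in range(1, 10):
    model = train_bigram_model(training_data)
    # The next generation trains mostly on this generation's output, plus a
    # small slice of the original human text.
    synthetic = generate(model, length=len(human_prose.split()), seed=generation)
    training_data = synthetic + " " + " ".join(human_prose.split()[:10])
    print(f"Generation {generation}: {generate(model, length=15, seed=0)}")
```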

Jamali: The researchers include this series of text paragraphs that show the degradation of text output over generations of AI models, and they start by inputting this paragraph about the construction of church towers in the 14th century. Nine generations of output later, the model spits out what is essentially gibberish. It says, “In addition to being home to some of the world’s largest populations of black @-@ tailed jackrabbits, white @-@ tailed jackrabbits, blue @-@ tailed jackrabbits, red @-@ tailed jackrabbits, yellow @-” There are some @ signs sprinkled throughout the text. Very weird. What is going on there?

Thompson: Basically, that is the ninth generation of researchers taking the output of a model and feeding it to a new model to train it. So, it’s a robot being trained on what a previous robot says. Essentially, all those little errors have compounded over and over again, until by that ninth generation the bot has just completely collapsed. It is no longer remotely answering the question or the prompt about a church tower. In the output from earlier generations, if you were to look back, you could see it could still talk about churches. By the seventh generation, it’s got the word “architecture” in there at least, but it’s vague. The ninth one is just babbling about jackrabbits. That’s kind of like the 15th or 20th photocopy of the “Mona Lisa,” with all the errors beginning to emerge.

Jamali: What are the implications of model collapse?

Thompson: The implications are that maybe all of these language models are, over the next few years, going to start to become worse and worse and worse. That’s one possibility. If model collapse is really a serious problem, and OpenAI and Google and Microsoft and everyone just keeps on scraping the internet and feeding it to train their models, they could get much worse models, and we could be using models that answer even more unpredictably than they do now.

I doubt that’s the way it’s going to go, because I think all these people who create these models are going to see this happening and get very worried about it. They’re either going to not release a new model that’s even more deranged than existing models, or they’ll probably try to find some way to cope with it or fix it. For example, they could pay humans just to write new prose for them. Like, “I need a billion more lines of stuff, please just write stuff, write anything so that we can feed it to the model.” That’s one thing they could do.

The other thing is they could maybe try to save shards of the older training datasets and use them to sort of freshen things up. There’s a lot of different AI techniques you can use, and I think they’re going to have to lean into discovering new ones to cope with this over the next five to 10 years.

Jamali: What do you think model collapse says about the relationship between these language models and human expression?

Thompson: I think model collapse really draws a bead on the incredible value of real human communication, whether that’s an email or a blog post or a tweet. What this shows is that if you want to make a really good AI, you need real prose written by real humans, and there’s clearly some sort of lightning in a bottle that we have that the artificial intelligence systems don’t yet have. Or maybe they’ll never have, we don’t know. That’s what model collapse really shows us.

