With modern chatbots, AI weirdness reaches new heights

Kai Ryssdal and Maria Hollenhorst Mar 23, 2023
AI researcher and blogger Janelle Shane tried to convince ChatGPT to impersonate a squirrel. The results revealed some bugs in AI chatbots. Chip Somodevilla/Getty Images

In 2019, “Marketplace” host Kai Ryssdal interviewed artificial intelligence researcher and blogger Janelle Shane about AI-generated cats. At the time, algorithms were quite good at generating faces but less good at generating pictures of cats.  

In experiments for her blog, AI Weirdness, Shane asks AI to do all kinds of things — like write love poems and recipes or design novelty socks. The results, in keeping with the name, are often weird. 

When a former Google employee sparked conversations about AI becoming sentient last year, Shane did an experiment in which she convinced an advanced AI to impersonate a squirrel. Yes, a squirrel. 

“This was sort of in reaction to [people worrying] when these algorithms would claim to be self-aware,” she said. “And I said, ‘Look, just because it describes the experience of being a self-aware robot from science fiction doesn’t mean that it actually is one, because this one’s describing the experience of being a squirrel.’”

Now that ChatGPT, an application built with OpenAI’s GPT technology, has made AI more accessible, Ryssdal spoke with Shane again. 

“These modern chatbots are all trained to imitate internet text,” Shane told Ryssdal. “And what people who are selling these as products are trying to do is trying to get them to predict a web page in which there is a dialogue between a user and a helpful chatbot.”

ChatGPT has been given rules by its creators meant to discourage that “helpful chatbot” from using offensive language, supporting violence or, as Shane discovered when she attempted to redo her squirrel experiment using ChatGPT, pretending to be a squirrel. 

But because ChatGPT adapts as users interact with it, those rules can be broken with a series of prompts. The following is an edited transcript of Shane and Ryssdal’s conversation. 

Ryssdal: Is it sort of like leading the witness? I can guide its responses? 

Shane: Exactly. And this is why you will see its responses getting guided in all sorts of different ways. People describe “hacking” these chatbots, [but] it’s more like convincing it that it’s on a different website or telling a different sort of story. Like maybe [it thinks], “Oh, actually, this dialogue is happening in the context of a science fiction story,” and so, now it’s starting to fill in the lines of a science fiction chatbot. 

Ryssdal: And that’s how, broadly speaking, we got to the whole Kevin Roose story in The New York Times, where it starts calling itself Sydney and falls in love with Kevin Roose. 

Shane: Yeah. And the longer the dialogue gets, the more likely it is to say, “Well, you know, chances are this is a science fiction story we’re in,” and the more the user nudges it toward, “Oh, what are your feelings? Don’t you feel upset?” the more likely it is to fall into these very well-worn science fiction tropes. It’s all us. It’s all a mirror of science fiction. 

Ryssdal: All right, so look, let me get you to the “Marketplace” of this thing. People are monetizing this already. You know, Google’s out with its new thing, and Microsoft is doing its thing, and big companies are involved. My question is how long before it becomes ubiquitous in this economy, do you suppose?

Shane: Good question. I think it has more to do with what companies are feeling pressured into — coming out with a product because their competitors are — versus what actually makes sense as a solution to a problem.

Ryssdal: Yeah, say more about that, right? Because what is the problem that this kind of AI is trying to fix?

Shane: It’s like a solution — an AI chatbot — in search of an application. And so some of the applications we’ve seen it applied to, like the big one that Bing is using now, they’re using it as an internet search algorithm. And the problem is that it’s trained to generate statistically probable text in a dialogue between a human and a helpful chatbot. It will find you the answer to what you’re looking for, whether or not it actually exists.

Ryssdal: Which is terrifying on a lot of levels. But look, you are an expert in this, and it’s not out of the realm of possibility that some company big or small is going to come to you and say, “Janelle, we want to engage your services as a consultant to help guide us through the monetization and the introduction into a product of these chatbots.” What is your advice, as we sit here in early-ish 2023?

Shane: I think it is most useful if it’s giving suggestions to a human who knows that its suggestions are not to be trusted. Maybe a customer service situation where it’s suggesting responses, and the agent can say, “Ah, yes, that sounds good” and click on it. Things like that, where you have a human in the loop serving as a check — these kinds of things I see as much more possible to make work than something that’s very open-ended.

Ryssdal: Last thing and then I’ll let you go. What is your level of concern about getting this out into society and into the economy without really having thought through the ramifications? Which is a very human thing to do, if I can mix my metaphors here.

Shane: Yeah, I think there are problems with what we’ve done so far. I mean, we’re seeing the workloads on teachers increase as they have to figure out what to do with AI-generated essays. There’s a science fiction magazine, Clarkesworld, that had to close their submissions because people kept flooding it with [AI-generated stories] — 

Ryssdal: Yeah, yeah!

Shane: It was exponential, this increase. The Federal Trade Commission just released recommendations trying to sort out this sort of AI fakery, [that it] should not be on the consumer, it should really be on the people releasing these tools to think about how they could be misused.
