Do we have an AI hype problem?
Apr 3, 2023

Thousands of experts are sounding alarms about a potential dark future created by artificial intelligence. Computational linguist Emily M. Bender, however, says we should be more concerned about the harm the technology is already causing.

Last week, Elon Musk and more than 1,000 experts in science and technology signed an open letter to labs developing advanced artificial intelligence, asking them to pause the “out-of-control race” to train ever more powerful systems.

The letter warns that these “non-human minds” might eventually outsmart us, risking the “loss of control of our civilization.”

But the way the issue is framed misses the mark, according to Emily M. Bender, a computational linguist at the University of Washington. Bender is skeptical of AI hype, but that’s not to say she doesn’t have concerns about this technology.

She and her co-authors lay out a number of those concerns in their 2021 paper, “On the Dangers of Stochastic Parrots.” A “stochastic parrot” is how she describes large language models like ChatGPT.

Marketplace’s Meghan McCarty Carino spoke with Bender about what she sees as the real dangers in these models, starting with the way they use language itself.

The following is an edited transcript of their conversation.

Emily M. Bender: Text is very convincing. Language is something we associate with other humans, and when we understand language, what we’re doing is imagining the mind behind the language. When the text didn’t start from a human but instead from one of these probabilistic text synthesis machines, we are still going to be prone to imagining a mind that’s there and being fooled into thinking it is understanding, reasoning and having thoughts about the world, when it’s not.

Meghan McCarty Carino: One of the dangers you have not mentioned is the idea that AI could turn evil or put us all out of our jobs and take over the world. What do you make of that kind of rhetoric?

Bender: I think that kind of rhetoric is a large distraction and a sales technique. It’s this idea that if this thing is so powerful and it might turn into this world-ending monster, then the company that built it has made something extra-super-powerful and extra-super-special. It’s also a big distraction from the real harms that are happening now, like when the system puts out noninformation that gets interpreted as fact and then becomes misinformation, including things like repeating racist remarks. Those are the real harms that are happening now and aren’t part of some speculative fiction.

McCarty Carino: Is it fair to say you think that is contributing to a bit of a hype problem around AI?

Bender: I think we have an enormous hype problem around AI, starting with the term AI. The term “artificial intelligence” suggests what it is that people want to build. There is a much better term that was proposed by Stefano Quintarelli — his term is Systematic Approaches to Learning Algorithms and Machine Inferences. That’s a mouthful, but the point is, the acronym is SALAMI. When you use that term, you can tell how ridiculous it is to ask, does this SALAMI have emotions? Is the SALAMI sentient? Does the SALAMI have good morals? So, it really is important to talk about this in terms that help us see it for what it is. And obviously, it’s not salami, but it’s an artifact. It’s a computer program. It’s a set of approaches to using statistics over the distribution of words to come out with more strings of words. It’s not actually intelligent.

McCarty Carino: I was experimenting with one of these new chatbots, and I got some weird answers. I asked it about some of the recent bank failures just to see what it would reproduce, and I noticed it was especially bad at getting dates and sequences right. It was consistently getting the dates of the bank failures totally wrong. When we think about the actual process that’s happening behind the scenes, can that shed some light on errors like these and on how we should regard the text and information these tools generate?

Bender: I think it’s best to think of that text as noninformation. You put in a question that is of interest to you, and this sounds like it’s a case where you really do have knowledge of the subject.

McCarty Carino: Yeah, I was able to check the information against my own reporting.

Bender: So that’s the safer use case. But if you have a genuine information need and you put in the query, then you’re not in a position to check it so well, right? So, you put in this query, and its job is not to understand the query. It’s not going to go into some database and find relevant documents. Rather, given that set of words, it’s going to determine what’s the likely word to come next, and then after that, and then after that, and then after that. Because it’s got such an enormous set of training data and because the algorithm is put together pretty cleverly, it can do that in a way that looks plausible. That makes the noninformation even more dangerous, because it’s going to be plausible noninformation instead of obviously false noninformation.
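
To make the word-by-word process Bender describes concrete, here is a minimal sketch in Python. It is a toy bigram model, not how ChatGPT or any production system is actually built, and the tiny corpus is invented for illustration. Still, the core move, choosing a statistically likely next word and repeating, is the one she is pointing at.

```python
# A toy bigram text generator. Real large language models use neural
# networks trained on vastly more text and context, but the loop below
# (pick a statistically likely next word, append it, repeat) is the
# word-by-word process described above. The corpus is invented.
import random
from collections import defaultdict

corpus = (
    "the bank failed in march the bank was seized by regulators "
    "the bank collapsed after a run on deposits in march"
).split()

# Count which words follow which: the only "knowledge" the model has.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 12) -> str:
    """Repeatedly sample a plausible next word. No lookup, no fact-checking."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
# Possible output: "the bank failed in march the bank collapsed after a run"
# Fluent-sounding, but no date or fact in it was checked against anything.
```

Run it a few times and the output reads like fluent English about bank failures, yet nothing in the program knows or verifies when anything happened: plausible noninformation, in Bender’s terms.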

McCarty Carino: And right now, it’s pretty near impossible to identify text that has been generated by one of these tools, if you come across it accidentally.

Bender: Exactly. The whole thing about it is that it’s meant to mimic what a human would say, in that kind of style. And it’s very good with stylistics. There are no requirements of transparency, nothing that says this has to be watermarked or this has to be tagged as synthetic. And that’s scary to me.

McCarty Carino: When we look at some of the use cases that are being proposed and tried out here with some of these large language models, what concerns you the most?

Bender: One thing that feels the most urgent right now is the pollution of our information ecosystem. We have these models that are up and running, and anyone can go play with them and then take this noninformation and post it as if it were information. Famously, Stack Overflow, which is a website where computer programmers answer questions for each other, banned the use of ChatGPT very quickly after it became available, because the whole value of Stack Overflow is that people ask questions and other people give answers. Noninformation on there is just going to dilute that. If it gets to the point where it is harder to find trustworthy information, it might also become harder to trust the sources that should be trusted. That’s going to have implications for a lot of things in our public life if we can’t share information with each other and know which sources are actually good and reliable.

You can read the full letter calling for a pause in AI development on the Future of Life Institute’s website. And stay tuned: We’ll have a conversation with one of those signatories on an upcoming show.

You can also check out the response to that letter published by Bender and her “Stochastic Parrots” co-authors condemning the “fearmongering” and “AI hype” of the letter’s authors.

The “Stochastic Parrots” paper has become pretty iconic by now. Before it was even published, it was at the center of a controversy that led to the departures of two high-profile members of Google’s AI ethics team, both co-authors of the paper: Timnit Gebru in late 2020 and Margaret Mitchell in early 2021.

Now, the term “stochastic parrot” has become something of a catchphrase for skeptical critiques of generative AI as well as for the countercriticism of those critiques.

