What are the ethical hazards in the effort to commercialize AI?
Mar 15, 2023


Microsoft's Bing chatbot has displayed some strange, inappropriate responses. Could training in ethics help?

Microsoft reportedly laid off its artificial intelligence ethics and society team as part of the 10,000 job cuts announced in January. The company said it still maintains several AI ethics teams and is upping investment in them.

But new generative artificial intelligence tools, like Microsoft’s Bing chatbot, raise multiple ethical concerns, said Arvind Narayanan, a computer science professor at Princeton University.

Marketplace’s Meghan McCarty Carino spoke to Narayanan about the potential pitfalls of widespread generative AI use, including misinformation, malware and inappropriate outputs, as well as what the tech industry could do to put up guardrails against such harms.

The following is an edited transcript of their conversation.

Arvind Narayanan: There are a few recurring concerns that come up. One is that these tools are trained on the labor of people who have written text online or put up images online. That could be artists or photographers. Those images might be copyrighted, and people might not have intended for someone else, like an AI company, to make commercial use of them. And yet AI companies have argued that copyright does not protect the use of online text or images for training AI programs. And once trained, these AI programs are often able to output the very kind of thing that threatens the jobs of some of the people who generated the training data. Another concern is that these bots could produce certain types of harmful outputs. There was that well-known New York Times article by Kevin Roose where the Bing chatbot had a so-called unhinged conversation with him and at one point suggested that he leave his wife. So chatbots might end up saying things that are very emotionally disturbing to users. They might even suggest real-world harm. A third set of harms is misuse. These chatbots are capable of generating not just text but also code. So are people going to use chatbots to generate malware, for instance? Is it going to enable someone to create malware who is not a programmer and who otherwise would not have been able to engage in that kind of cybercriminal activity? And one last concern I’ll mention (this is not an exhaustive list, but these are some of the main ones) is that people often don’t understand the limitations of these chatbots. They can sound very convincing, so it appears that they are giving accurate information. If students start to use these as learning tools, for instance, a lot of the answers could be wrong, but unless you already know the answer, you can’t tell. And so it could really misinform people.

Meghan McCarty Carino: So when we look at all of these examples that proliferated early on of the chatbot becoming aggressive or defiant, or, in the case of a New York Times columnist, suggesting that he leave his wife, what do you see going on there?

Narayanan: They’re trained on all of the conversations on the internet. That includes not just formal, polished text like journalism or Wikipedia or books; it also includes things like Reddit and perhaps 4chan. When that’s the case, the bot essentially learns all of the styles of conversation that exist out there. Specific mitigation methods do exist, but unless they are put into place, any of those quote-unquote “personalities” can come out in any given conversation. And that might strongly depend on how the user interacts with the bot. The bot has seen in its training data that when one person talks in a particular tone, the person responding often tends to have a similar tone, so that’s one reason the bot might display that kind of behavior.

McCarty Carino: Yeah, it seems like a lot of people are surprised when the bot acts exactly the way a bot might act in science fiction, or in line with concerns about artificial intelligence that have been raised in lots of news articles, reporting or studies. But these are all kind of part of the bots’ training, right?

Narayanan: They are part of the bots’ training. And we should see bots by default as being capable of these types of very undesirable behaviors. If we want bots that behave in more appropriate ways, that requires a very painstaking training process, called reinforcement learning from human feedback, and it appears that ChatGPT has had more of that kind of training. The Bing chatbot, at least in its first iteration, and from what we can tell from its public behavior, seems to have had much less of that type of training.

McCarty Carino: So these are both based on the OpenAI large language model, but they kind of have distinct systems. Tell me more about the process of training and kind of putting guardrails in place.

Narayanan: There are three steps, primarily, in training chatbots of this kind. The first step is called pretraining, and what pretraining involves is assembling a massive corpus of text from the internet and training the bot to statistically predict, given a sequence of words, what the next word is likely to be. That seems like a trivial thing, but in the process, chatbots learn a lot about the syntax of language and even some properties of the world. The second step is to turn it from an autocomplete bot into a chatbot. That is called instruction fine-tuning, and it involves training the bot to respond as if the user is giving commands to respond to, as opposed to giving text to autocomplete. The third step is by far the most important one from an ethical perspective. It’s called reinforcement learning from human feedback, and it involves training the bot to recognize when a request might be inappropriate and to politely refuse it, and also to steer its own outputs away from racist rants or disturbing conversations toward something that is more appropriate for a bot.
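To make the pretraining step Narayanan describes concrete, here is a minimal, hypothetical sketch of next-word (here, next-character) prediction in PyTorch. The toy model, data and hyperparameters are illustrative assumptions only; the production systems behind ChatGPT and Bing use transformer architectures trained on web-scale corpora, followed by the fine-tuning and feedback stages he mentions.

# A toy character-level language model trained on the next-token
# prediction objective described above. Everything here is
# illustrative; real chatbots use transformers and far larger corpora.
import torch
import torch.nn as nn

text = "the cat sat on the mat. the dog sat on the rug."
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
data = torch.tensor([stoi[ch] for ch in text])

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        hidden, _ = self.rnn(self.embed(x))
        # Logits over the next character at every position in the sequence
        return self.head(hidden)

model = TinyLM(len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Pretraining objective: given characters 0..t, predict character t+1.
inputs, targets = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
for step in range(200):
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, len(vocab)), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final next-character loss: {loss.item():.3f}")

The instruction fine-tuning and reinforcement learning from human feedback stages build on a model trained this way, but they rely on curated demonstrations and human preference judgments rather than raw internet text.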

McCarty Carino: So just to be clear, when we see an AI chatbot expressing what seems to be emotion or love or desires, these very human types of things, does that mean that the chatbot is sentient?

Narayanan: I don’t believe so. There is a wonderful paper from a couple of years ago called “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” and it makes the point that when these chatbots mimic human language, people who interact with those AI agents end up inferring humanlike, you know, intent or internal states, when that’s in fact not the case.

The newsletter Platformer was the first to report on Microsoft’s cuts to its AI ethics and society team.

And if you somehow missed all the drama that Bing’s chatbot was causing early on, New York Times columnist Kevin Roose wrote about how his long conversation with the chatbot turned weird. It made more than a few disquieting statements, including urging him to leave his wife.

The Verge reported other stories of the chatbot acting like an “emotionally manipulative liar,” including that it insisted the year was 2022 in an exchange that devolved into insults.

As we noted, these reports were from early iterations of the Bing chatbot.

When we tried it out last week with Joanna Stern of The Wall Street Journal, we didn’t get anything quite so spicy, which was actually kind of a problem ’cause we were asking for recipe recommendations.

Microsoft released a statement in response to Marketplace’s request for comment. It reads:

“As we shared with Platformer, the story as written leaves the impression that Microsoft has de-invested broadly in responsible AI, which is not the case. The Ethics and Society team played a key role at the beginning of our responsible AI journey, incubating the culture of responsible innovation that Microsoft’s leadership is committed to. That initial work helped to spur the interdisciplinary way in which we work across research, policy, and engineering across Microsoft. Since 2017, we have worked hard to institutionalize this work and adopt organizational structures and governance processes that we know to be effective in integrating responsible AI considerations into our engineering systems and processes. We have hundreds of people working on these issues across the company, including net new, dedicated responsible AI teams that have since been established and grown significantly during this time, including the Office of Responsible AI, and a responsible AI team known as RAIL that is embedded in the engineering team responsible for our Azure OpenAI Service. Less than ten team members were impacted on the Ethics and Society team and some of the former members now hold key positions within our Office of Responsible AI and the RAIL team.”


The team

Daisy Palacios, Senior Producer
Daniel Shin, Producer
Jesús Alvarado, Associate Producer