Customer service is being automated. Will bots take over those jobs?
Apr 25, 2023

Forrester analyst Christina McAllister advises companies to keep human agents involved in those calls and chats. That’s because generative AI models hallucinate, their responses can’t be controlled and they can jeopardize customer relationships.

Before ChatGPT took the world by storm, wowing users with its prose-writing prowess, most people knew chatbots as those annoying website pop-ups that offered basic and not always useful customer support.

Even before chatbots could pass the Law School Admission Test, customer service was moving toward greater automation, often in an effort to cut costs. Human agents are an expensive and finite resource, which is why callers face those long, Muzak-filled waits and limited hours of service.

So will the current artificial intelligence boom push humans even further out of the customer support game? Marketplace’s Meghan McCarty Carino spoke with Christina McAllister of Forrester, who says, “Not so fast.” McAllister is a senior analyst who works on customer service research and strategy. The following is an edited transcript of their conversation.

Christina McAllister: In recent years, Frontier Airlines, for example, very famously announced that it was cutting out all of its phone support, and we've heard a number of things like that. And it's very tricky to do that when, frankly, the AI capabilities for chatbots today just really aren't advanced enough to have meaningful conversations. ChatGPT and all of these announcements are getting there. I mean, we're getting closer, but we at Forrester don't recommend anyone use those capabilities for consumer-facing use cases because it's just too risky right now.

Meghan McCarty Carino: Tell me more about that. Why would you not recommend using these new tools, which have made such a splash, ChatGPT among them?

McAllister: I mean, I like using ChatGPT. I'm one of the people who likes playing around with it, for sure. And there's a lot of value there. But right now, the value should be in augmenting your internal employee audience, because the risk of these large language models, generative models like ChatGPT and similar, is that they make things up. A lot of us at Forrester were playing around to see which of us ChatGPT knows about. And for many of my colleagues, [it] would just invent their educational background — "They went to school for biophysics" — [and] it's like no, they did not. This is obviously an open area of research, and new things are happening all the time. And I know that there are a lot of companies that are really focused on solving this problem. But, at least as of today, there is no safe option for enterprises, basically, to control what it is that a generative model says if it's based on a large language model. There are some ways you can put a bit of boundaries around it, but it's still a little too early in the development life cycle for the products that an enterprise would buy to really shape this up in a way that would be safe for external audiences. You don't want to end up being the company that is used for the next 10 years as the example of a chatbot gone rogue.
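
To make the idea of putting "boundaries" around a generative model a little more concrete, here is a minimal, hypothetical sketch of one common pattern: the bot replies only with pre-approved answers matched from a vetted knowledge base and hands everything else to a human. The topics, answers and matching rule below are invented for illustration; they are not from Forrester or any specific vendor.

```python
# Illustrative sketch of "putting boundaries around" a generative model:
# only reply with pre-approved answers matched from a vetted knowledge base,
# and escalate everything else to a human agent. The topics, answers and
# matching rule are invented stand-ins, not any real product's API.

APPROVED_ANSWERS = {
    "refund": "Refunds go back to the original payment method within 5-7 business days.",
    "change my flight": "Flights can be changed in the app up to 24 hours before departure.",
}

def answer_or_escalate(customer_question: str) -> str:
    question = customer_question.lower()
    for topic, vetted_answer in APPROVED_ANSWERS.items():
        if topic in question:
            # Respond only with vetted text; the model never improvises customer-facing copy.
            return vetted_answer
    # No approved answer found: hand off to a person instead of guessing.
    return "Let me connect you with an agent who can help with that."

print(answer_or_escalate("What is your refund policy?"))
print(answer_or_escalate("Can I bring my emotional support peacock?"))
```

The point of the pattern is that nothing improvised ever reaches the customer, which is exactly the risk McAllister is describing.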

McCarty Carino: Now, what about emotion-recognition AI? How is that being implemented?

McAllister: This is something that is more common in analytics software. It's used in two ways. One is to categorize customers' emotions in post-contact analytics, so if I wanted to review all the calls that were about billing, I could understand what my customers felt about the issues that they had with billing, for example. That's often called sentiment analysis, but emotion analysis goes a little bit deeper, [where] you step into a bit more of an emotional categorization versus just positive, negative or neutral. But then there's the other side: agent-facing guidance, when the machine detects emotion of some kind, to be able to help the agent navigate that conversation. It's imperfect, though.

McCarty Carino: How does that work? If it detects someone is frustrated or emotional, what happens?

McAllister: The most common example, actually, is a prompt that shows up as a real-time cue, like a tip or a notification of some kind for the agent. And it would say something like "Your customer is frustrated. Show empathy." But, of course, the agent knows that. If the machine knows that, there's a good chance the agent knows that. And the agent is usually pretty neutral because that's their job: to solve customers' problems. So in real time, my clients have basically stated that they don't see a lot of value in detecting and surfacing the emotion of the customer to the agent because it just doesn't necessarily change the course, and a lot of the time it's pretty inaccurate. Most often, it's based on something like tonality, where you're able to say, OK, this person has a more agitated tone of voice, or they're more elevated in their pace of speech or the loudness of their voice and things like that. But some people are just loud speakers. And some people are really animated speakers. And some cultures don't represent emotions in the same way. Some cultures get really quiet when they're mad. [Or the customer may be neurodivergent], where you don't necessarily have the same representation of emotion if you communicate as an autistic person or as someone with a neurological difference. So it's challenging to use those buckets to make meaningful decisions because it is not necessarily as nuanced as it needs to be.
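
As a rough illustration of the kind of tonality heuristic McAllister describes, and why it misfires, here is a toy sketch. The features, baselines and thresholds are invented for this example; real speech-analytics products use much richer acoustic and language models.

```python
# Toy version of the tonality heuristic described above. The feature names,
# baselines and thresholds are invented for illustration only.

def flag_frustration(loudness_db: float, words_per_minute: float,
                     baseline_db: float = 60.0, baseline_wpm: float = 140.0) -> bool:
    """Return True if the caller 'sounds' agitated relative to an assumed baseline."""
    raised_voice = loudness_db > baseline_db + 6             # noticeably louder than baseline
    elevated_pace = words_per_minute > baseline_wpm * 1.25   # speaking noticeably faster
    return raised_voice and elevated_pace

# The failure mode McAllister points out: without a per-caller (or per-culture)
# baseline, a naturally loud, animated speaker trips the flag, while someone
# who goes quiet when angry never does.
print(flag_frustration(loudness_db=72, words_per_minute=190))   # True, possibly wrongly
print(flag_frustration(loudness_db=55, words_per_minute=110))   # False, possibly wrongly
```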

McCarty Carino: Why are human agents still so important to this kind of work?

McAllister: One of the most meaningful drivers of customer experience quality is the engagement between the customer and the agent. It's meaningful because there are three things that customers care about: Did the agent answer all of my questions? The first time I called, did my issue get resolved effectively? Is the agent empowered to resolve issues without escalating to their supervisors? And, again, customers only call in maybe two to three times a year. But their experience in that moment is so impactful from a customer experience perspective and, in turn, from a revenue perspective, because customer experience is aligned with higher revenue. To not invest in the agent is, frankly, a bad business decision, especially for companies that pride themselves on having good customer experience.

More on this

Late last year, after ChatGPT became available but before its more powerful successor, GPT-4, came out, McAllister and several colleagues at Forrester posted an article warning businesses about moving too fast to integrate this tech into their operations because of these tools' tendency to hallucinate or, as they called it in the article, generate "coherent nonsense," something that's probably not great in a customer service context.

Another ethical dilemma came up in a piece published by Harvard Business Review by several technologists at the professional services firm Accenture. They point out that conversational AI can be very persuasive, and without the moral compunctions of a human customer service representative, it could potentially manipulate customers into buying more stuff than they can afford.


The team

Daisy Palacios, Senior Producer
Daniel Shin, Producer
Jesús Alvarado, Associate Producer