Can AI learn to understand human emotions?
Mar 13, 2023

Aniket Bera, computer science professor at Purdue, is training AI models on data from diverse communities, trying to bring emotional intelligence to artificial intelligence.

It’s getting easier and easier to talk to machines, from digital voice assistants like Siri and Alexa to the latest generation of AI chatbots.

Natural language processing technology has made it possible to engage in pretty humanlike conversations with some forms of artificial intelligence. But can a bot ever really “get” us?

Marketplace’s Meghan McCarty Carino spoke with Aniket Bera, an associate professor of computer science at Purdue University. He’s trying to teach emotional intelligence to artificial intelligence because, he says, language is a lot more than just words. The following is an edited transcript of their conversation.

Photo: Purdue University professor Aniket Bera sits on the floor to pet a dog-like robot. (Purdue University/Rebecca McElhoe)

Aniket Bera: There is an inherent emotional learning when we are growing up: we're looking at our parents, we're looking at our friends and family. We're trying to replicate some of that human-level learning from an AI perspective. So we train these AI models on human data: this is happiness, this is sadness, learn from these things. These are subtle aspects. We're trying to teach them, let's say, the way a parent would teach their child.
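To make that idea concrete, here is a minimal, hypothetical sketch of the kind of supervised learning Bera describes: a model is shown examples labeled "this is happiness, this is sadness" and learns to map new inputs to those labels. The toy data, the text-only features and the scikit-learn setup are illustrative assumptions, not Bera's actual pipeline, which draws on richer signals like speech, faces and body language.

```python
# A toy emotion classifier: learn "happiness" vs. "sadness" from labeled
# examples. Data and labels here are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I got the job, this is the best day ever!",
    "We laughed all afternoon at the park.",
    "I miss her so much it hurts.",
    "Nothing feels worth doing anymore.",
]
labels = ["happiness", "happiness", "sadness", "sadness"]

# Bag-of-words features feeding a linear classifier; real systems would
# use far more data and multimodal signals, not just text.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Today was wonderful"]))  # likely ['happiness']
```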

Meghan McCarty Carino: What are some of the real-world applications of emotionally intelligent AI?

Bera: One of the projects we worked on started when COVID hit and there was a big rise in mental health cases throughout the country. We talked to therapists, we discussed this with psychiatrists, and we collaborated with them to build this therapy AI agent, not to replace real therapists but to offer intermediate therapy sessions with a virtual agent. It's sort of like a normal conversation. It's not a detection platform, but it helps you connect with your therapist and stay in the loop between two therapy sessions, which in big cities today can be more than a month apart.

McCarty Carino: Do you have concerns about developments in this space maybe being used in less altruistic ways? I can imagine it could be used in ways where it makes judgments of a person’s emotional state or their tone or extrapolates things that maybe could cause harm to someone.

Bera: Right. And, I mean, you're absolutely right. There are potentially many harmful implications of these kinds of things. So we as a research community have to make sure that our systems understand cultural backgrounds, racial backgrounds and gender backgrounds.

McCarty Carino: Right, because there are a lot of things encoded in how we interpret emotion that are based in how we perceive things like gender and race and all of these social constructs.

Bera: Absolutely. So we collect data from all possible cultures, races, genders and geographic locations. The more diverse the data the AI is trained on, the less racist and less sexist it will become. There are cultural differences, and we try to decouple the problem, treating culture, race and gender as separate problems, while also looking at the specific aspects of emotion that are not culture sensitive.
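As a rough illustration of what "more diverse data" can mean in practice, here is a hedged sketch of one common mitigation: rebalancing a training set so that no single group dominates what the model learns. The pandas code and the "culture" and "label" columns are hypothetical placeholders, not Bera's method.

```python
import pandas as pd

# Hypothetical labeled emotion dataset; the "culture" column stands in for
# whatever demographic metadata the collection process records.
df = pd.DataFrame({
    "text":    ["great day", "so sad", "feeling fine", "rough week",
                "what a win", "bad news"],
    "culture": ["A", "A", "A", "A", "B", "B"],
    "label":   ["happy", "sad", "happy", "sad", "happy", "sad"],
})

# Downsample each group to the size of the smallest one, so the model
# cannot simply learn the majority group's way of expressing emotion.
min_size = df.groupby("culture").size().min()
balanced = df.groupby("culture").sample(n=min_size, random_state=0)

print(balanced["culture"].value_counts())  # both groups now equally sized
```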

Here’s more on Bera’s projects. He points out that talking to AI could be especially helpful for those who are neurodivergent or have social anxiety and might find it challenging to share with a human therapist.

He says understanding emotion could also be important for robots like self-driving cars, not necessarily to better understand the passenger, but to better understand the environment: a road full of unpredictable humans. As drivers, we do this all the time, for instance when we slow down to make eye contact with a pedestrian to figure out which direction they might be headed.

Of course, we touched on the potential harms of emotion-recognition AI. A piece in Wired in December noted that previous studies of this technology show that it still echoes human biases. One study, for example, found that it consistently perceived Black faces as angrier.

But, as Wired pointed out, more and more companies are incorporating this into their products, which our producer Jesús Alvarado smartly noted was going to be a story to watch this year. For example, Zoom is introducing Zoom IQ, which could analyze users’ emotions and engagement during meetings.

Nothing good can come of this.

The team

Daisy Palacios, Senior Producer
Daniel Shin, Producer
Jesús Alvarado, Associate Producer