
Can AI learn to understand human emotions?


It’s getting easier and easier to talk to machines, from digital voice assistants like Siri and Alexa to the latest generation of AI chatbots.

Natural language processing technology has made it possible to engage in pretty humanlike conversations with some forms of artificial intelligence. But can a bot ever really “get” us?

Marketplace’s Meghan McCarty Carino spoke with Aniket Bera, an associate professor of computer science at Purdue University. He’s trying to teach emotional intelligence to artificial intelligence because, he says, language is a lot more than just words. The following is an edited transcript of their conversation.


Aniket Bera: There is an inherent emotion when we are growing up: We’re looking at our parents, we’re looking at our friends and family. So these kinds of human-level learnings, we’re trying to replicate some of that from an AI perspective. So we are trying to train these AI models on human data — like, this is happiness, this is sadness, learn on these things. These are subtle aspects. So we are trying to teach them, let’s say, how a parent would teach their child.

Meghan McCarty Carino: What are some of the real-world applications of emotionally intelligent AI?

Bera: One of the projects we worked on began when COVID started and there was a big rise in mental health cases throughout the country. We talked to therapists, we discussed this with psychiatrists, and we collaborated with them to build this therapy AI agent, not to replace real therapists but to offer intermediate therapy sessions with a virtual agent. It’s sort of like a normal conversation. It’s not a detection platform, but it helps you connect with your therapist and stay in the loop between two therapy sessions, which today in big cities can be upwards of a month apart.

McCarty Carino: Do you have concerns about developments in this space maybe being used in less altruistic ways? I can imagine it could be used in ways where it makes judgments of a person’s emotional state or their tone or extrapolates things that maybe could cause harm to someone.

Bera: Right. I mean, you’re absolutely right, there are potentially many harmful implications of these kinds of things. So we as a research community have to make sure that our systems understand our cultural backgrounds, our racial backgrounds and our gender backgrounds.

McCarty Carino: Right, because there are a lot of things encoded in how we interpret emotion that are based in how we perceive things like gender and race and all of these social constructs.

Bera: Absolutely. So we collect data from all possible cultures, races, genders and geographic locations. The more diverse the data the AI is trained on, the less racist and sexist it will become. There are cultural differences, and we try to decouple the problem, treating culture, race and gender as separate problems, while also looking at the specific aspects of emotion that are not culture sensitive.

Here’s more on Bera’s projects. He points out that talking to AI could be especially helpful for those who are neurodivergent or have social anxiety and might find it challenging to share with a human therapist.

He says understanding emotion could also be important for robots like self-driving cars, not necessarily to better understand the passenger, but to better understand the environment: a road full of unpredictable humans. As drivers we do this all the time, for instance when we slow down to make eye contact with a pedestrian to gauge which direction they might be headed.

Of course, we touched on the potential harms of emotion-recognition AI. A piece in Wired in December noted that previous studies of this technology show that it still echoes human biases. One study, for example, found that it consistently perceived Black faces as angrier.

But, as Wired pointed out, more and more companies are incorporating this into their products, which our producer Jesús Alvarado smartly noted was going to be a story to watch this year. For example, Zoom is introducing Zoom IQ, which could analyze users’ emotions and engagement during meetings.

Nothing good can come of this.
