Polling response rates are dropping. AI chatbots could be the solution.
Sep 19, 2024

Accurate political polling often relies on people answering the phone and giving honest answers. Harvard’s Bruce Schneier is investigating digital personas' potential to fill in the response and reliability gaps.

In this tense election year, polling is top of mind. But collecting polling data is harder than it used to be.

First, it often relies on people actually answering the phone and then speaking frankly to a pollster, both of which are becoming less common.

The result has been data that is less predictive, such as the significant underestimates of support for former President Donald Trump in both the 2016 and 2020 elections. And those repeated misses have made the public much more skeptical.

Polling, it seems, needs an update for the digital age.

Marketplace’s Meghan McCarty Carino spoke to Bruce Schneier, lecturer at the Harvard Kennedy School, who says artificial intelligence can help.  

The following is an edited transcript of their conversation.

Bruce Schneier: Right now, pollsters ask actual human beings questions, get answers and then do a whole lot of math. Maybe the math is related to who a likely voter is, or a demographic that’s not represented needs to be magnified, or we know that people are more racist than they’re willing to admit to a pollster. All these things change what humans say. And that math is basically the same kind of math that is in generative AI. So, one step further is to give an AI a persona. You are a female this age, living in this city, with this job, and you’ve expressed these political views — answer questions as that persona. And then you ask that AI questions, and you get answers. And what our research is trying to figure out is how good are those answers? And our answer is they’re pretty good, not great, could be better. But this is a way that pollsters can augment what they’re doing already. It sounds all scary because it sounds like the robots will answer the questions instead of the people. That’s really not what’s happening.
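The persona step Schneier describes amounts to conditioning a language model on a demographic profile before asking it a poll question. Here is a minimal sketch of that idea; the field names and prompt wording are illustrative assumptions, not the researchers' actual setup, and the resulting string would be sent to whatever chat model a pollster chose.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    # Demographic fields are illustrative; a real study would draw
    # these from an actual survey frame.
    gender: str
    age: int
    city: str
    job: str
    politics: str

def persona_prompt(p: Persona, question: str) -> str:
    """Build a prompt that asks a language model to answer a poll
    question in character as the given persona."""
    return (
        f"You are a {p.age}-year-old {p.gender} living in {p.city}, "
        f"working as a {p.job}, who has expressed {p.politics} political views. "
        f"Answer the following poll question as that person would:\n{question}"
    )

# Example: one synthetic respondent for one question.
prompt = persona_prompt(
    Persona("woman", 34, "Phoenix", "nurse", "moderate"),
    "Do you support expanding public transit?",
)
```

In practice a pollster would generate many such personas, matched to the demographics of the electorate, and aggregate the model's answers the way they aggregate human responses.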

Meghan McCarty Carino: And this sort of practice of utilizing personas and kind of aggregating various demographic characteristics into a fictional persona, this has been used in market research for a long time. This is just sort of AI supercharging it, right?

Schneier: AI enabling it, making it better. You know, what you don’t want is for the AI persona to be the stereotype. I mean, I can give you a persona and we can have the stereotypical answer, but some percentage of that group of people will answer the other way. Stereotypes are not perfect, they’re just pretty good. The question is, can the AI match the curve of answers? And again, we found that it’s OK, pretty good, it can get better. It’s not great, but it’ll improve. These are the sorts of things we’re going to want.

The neat thing about AI and polling is that you can ask an AI a million questions. You can’t ask a person a million questions, they’ll hang up on you. But you can, in theory, ask an AI a million questions. You can show the AI 10,000 possible speeches and ask which one do you like best? There are things you can do with AI-augmented polling that you can’t do without. There’s one place where research showed a problem. So, the question it got wrong was, how do you feel about the Russian invasion of Ukraine? Because the AI produced the traditional conservative anti-Russia opinion, not the new conservative Russia-is-OK opinion. Because the AI was trained on old data, it didn’t know about the political shift. So, there’s a place where it’ll fall down. If the training data doesn’t match reality, the AI is going to get it wrong.

McCarty Carino: So, AI is kind of expecting, or is imitating an intellectual consistency that seems to not exist in real humans.

Schneier: But it exists on most things. Once in a while that fails, so now can you notice it failing? So, we can imagine future systems where you're going to ask real humans and you're going to ask AIs. You're going to look at the two results to see if they're different. If something's happened, you need to ask more real humans. But if they're the same, you know the data the AI was trained on is probably still representative, and you can use the AI to ask more detailed questions and you'll get it right. So, it's going to be this back and forth. And in a sense, this is not much further than where we are today. You know, when there are anomalous poll results, pollsters know something weird is going on and that they have to ask more people.
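The back-and-forth Schneier sketches reduces to a simple check: compare the distribution of answers from a human sample against the distribution from AI personas, and escalate to more human polling when they diverge. A minimal version of that comparison, with an illustrative divergence threshold that is my assumption, not a figure from the research:

```python
def total_variation(p: dict, q: dict) -> float:
    """Half the L1 distance between two answer distributions
    (0.0 = identical, 1.0 = completely disjoint)."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def needs_more_human_polling(human: dict, ai: dict, threshold: float = 0.05) -> bool:
    """Flag when AI-persona answers diverge from the human sample by more
    than a chosen margin (the 0.05 threshold is an illustrative assumption),
    suggesting the model's training data no longer matches public opinion."""
    return total_variation(human, ai) > threshold
```

A shift like the conservative turn on Russia that Schneier mentions would show up here as a large gap between the human and AI distributions, which is the signal to go back to human respondents.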

McCarty Carino: To the extent that we have and can rely on human polling on the issue, if we look at Pew’s polling on AI, it looks like the majority of Americans are at least a little concerned about AI. What do you think the general public’s response would be to using AI in polling in such a high-stakes arena?

Schneier: The response I’ve been getting is universal horror, and I think that stems partly from not understanding just how much math is already in polling. And this is just a tiny step. People’s reactions, I think, are going to change over the next few years. But, yeah, there’d be the thought that, what do you mean you’re not asking people, you’re asking robots? That sounds bad, but I think people don’t understand poll results. In 2016, when the poll results said there was a 60% chance that Clinton would win the election and a 40% chance that Trump would win, that basically meant that if you could rerun the election 10 times, Trump would win four of them. That’s a little less than half. That’s not what people thought. So, some of this is just not understanding the mathematics of polling.
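Schneier's point about a 40% chance can be made concrete with a quick simulation: treat each run as an election where Trump wins with probability 0.4, and count how often that happens over many runs. This is a generic illustration of the probability claim, not code from any actual forecasting model.

```python
import random

def simulate_elections(p_win: float = 0.4, runs: int = 10_000, seed: int = 0) -> float:
    """Simulate many elections where one candidate has a given win
    probability, and return the fraction of simulated wins."""
    rng = random.Random(seed)  # seeded for reproducibility
    wins = sum(rng.random() < p_win for _ in range(runs))
    return wins / runs
```

Over enough runs the observed win fraction settles near 0.4, which is exactly the "four times out of ten" a 40% forecast implies: an event far too likely to treat as impossible.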

McCarty Carino: In your view, how do you envision AI being used and figuring into polling results?

Schneier: I don’t think we know. We’re trying to look at one way it’s possible. I think what’s going to happen is what’s happening today. Pollsters will produce results. Those results will be partially the result of asking actual humans questions, and partially a result of a whole lot of computer mathematics. I think we have a problem in our society that we think these polls are as accurate as the state of the world, and not just an estimate based on some algorithms. But I don’t see much difference. I think pollsters will use these technologies the way they’ve been using other mathematical techniques over the past decades to deal with the inherent problems of polling as a science.

More on this

Earlier this year, my former colleague Lily Jamali spoke with the CEO of Gallup about some of the challenges in polling. He said response rates in the U.S. have become “abysmal.”

For instance, the New York Times Siena College poll, which still relies on live telephone surveys, has a response rate of just 2%.

Challenges in traditional polling have led some to pay greater attention to betting markets, which we discussed on the show last month. According to Axios, August was the busiest month ever on the crypto-based betting platform Polymarket, which is currently not allowed to operate in the U.S.

Nonetheless, the platform saw almost $475 million in betting activity, driven largely by the U.S. election.


The team

Daisy Palacios Senior Producer
Daniel Shin Producer
Jesús Alvarado Associate Producer
Rosie Hughes Assistant Producer