MARY REICHARD, HOST: It’s Wednesday the 18th of November, 2020.
Glad to have you along for today’s edition of The World and Everything in It. Good morning, I’m Mary Reichard.
NICK EICHER, HOST: And I’m Nick Eicher. First up: presidential polling.
In the days leading up to the election, political analysts predicted we would have an early night. At the very least, they said we’d know the winner within a few days of polls closing. But, two weeks later, several key states still haven’t certified their votes. And election uncertainty drags on.
REICHARD: Analysts who predicted a big win for Joe Biden relied on polls to make those predictions. And as the returns slowly came in on Election Night, it became clear the polls underestimated Trump’s support again. So why do pollsters have so much trouble projecting election results in the Trump era?
Joining us now to talk about it is Karlyn Bowman. She studies polling and analyzes public opinion as a senior fellow for the American Enterprise Institute.
Good morning, and welcome back!
KARLYN BOWMAN, GUEST: Good morning. I’m delighted to be with you.
REICHARD: For months leading up to the election, polls predicted Joe Biden would win pretty decisively. To say the least, that didn’t happen! He did win about 4 million more votes overall than President Trump. So, what did the pollsters get right, and what did they get wrong?
BOWMAN: I think it’s premature to make any judgments at this point about what the pollsters got right and wrong. The American Association for Public Opinion Research has convened an expert panel of not only practitioners but also statisticians and methodologists, and they’re going to dig deep to try to figure out what went wrong this time.
But the polls are having an increasingly difficult time, and that isn’t true just in the Trump era—it predates it. Response rates are down very significantly: about two decades ago, the response rate to a well-designed poll was around 36 percent; it’s now below 10 percent. And we keep asking ourselves whether we can create samples that look like America. The pollsters still feel pretty confident about that, but there are a number of things that could have gone wrong in 2020. Some of them are things that went wrong in 2016, and I know the AAPOR committee will be revisiting those.
A couple of things that they learned in 2016 were very important—some of which apply to 2020, some of which do not. To begin, in 2016 there were a lot of late deciders, people who made up their minds in the last two or three days of the election. Usually late deciders are not a very big group, and they break pretty evenly between the candidates. But in 2016 they broke overwhelmingly for Donald Trump, so that contributed to the polls’ error—particularly in the industrial midwestern states. There don’t appear to have been as many undecided voters in 2020 as there were in 2016; all the pre-election polls showed a very small number of voters who hadn’t made up their minds. So, that’s one thing the pollsters will have to look at again.
There are other problems. We’ve heard so much about a shy Trump vote. The idea of voters not responding honestly to pollsters goes back a long way, to Tom Bradley’s race for governor of California, where the polls showed the black candidate winning and in fact he lost. The Doug Wilder race in Virginia showed a similar pattern. But, again, there are reasons in different elections why some people just choose not to respond. So, that is conceivably a problem.
Another problem—and we saw this in the Australian election, where voting is mandatory—is that all of the major pollsters got that election outcome wrong, as did the exit pollsters: a very significant miss. And we’ve seen a lot of those around the globe. But one of the phenomena you saw in the Australian election is something we call herding, where all of the pollsters seem to move in one direction together and there are very few outliers, and that’s what we saw in this campaign overall. So it’s possible that that was another factor. But there are still many unknowns at this point.
REICHARD: Right. The last time we had you on the program, you noted that pollsters did a pretty good job in 2016 of forecasting the national vote but a terrible job predicting state outcomes. Would you say they learned from their mistakes this time around?
BOWMAN: Some of the state polls were wildly off in this election campaign. I think what we’ll be looking at is how they constructed their samples this time—whether or not they addressed some of the problems they had in 2016. For example, we know a lot of the state pollsters in 2016 underestimated the percentage of people in those industrial midwestern states without a college degree. Most pollsters tried to correct for that, but perhaps they didn’t correct enough, or perhaps there were other problems with the polls in 2020. There were certainly some significant misses across the country, but there were also some states where the polls were absolutely spot on. I think of Colorado and several other states where, throughout the campaign, the polls hit it on the nose.
REICHARD: You mentioned the shy Trump voter earlier, and there are reasons for that: some people have privacy concerns, some fear reprisal, that kind of thing. But do you think the idea of a shy Trump voter explains the discrepancy between predictions and the election results? Can you say more about that?
BOWMAN: I doubt the shy Trump voter phenomenon—to the extent that it’s real—would explain all of the misses that we saw in this election campaign. If you look back at the analysis of what went wrong in 2016, the pollsters didn’t see any difference in the way people responded to anonymous online surveys and to live telephone interviews, where they were speaking to a person and might have been more cautious about sharing their own view. So, they didn’t see any significant differences in 2016. Perhaps there were much more significant differences in 2020, but I don’t think it can explain all of the errors. Look at a state like Maine, where every single poll conducted in the course of the 2020 campaign showed Senator Susan Collins losing, and yet she won by a fairly comfortable margin. So, something is seriously wrong in the state polls at a certain level.
REICHARD: What adjustments do you think polling companies ought to make based on what happened this year?
BOWMAN: Well, going back now for eight years, some of the major polling companies have gotten out of the election business. I’m not sure I would urge that, but I think companies that have had problems in the past will have to think very carefully about whether they should go ahead and do election polling. Polls are valuable in many ways, but I’ve never thought their best use was to make policy or to make predictions. So in that sense I would like to see the pollsters move away, to some degree, from the prediction business. Of course, it’s been around for a long time—George Gallup asked the first such question in the 1930s: “If the election were held today, for whom would you vote?”
REICHARD: We look to polls to tell us about lots of things besides elections. Should we be reluctant to believe polls going forward? How can we know they’re accurate?
BOWMAN: Well, I think if you look at a lot of them that ask different kinds of questions about policy issues or about personal preferences, you can learn a lot. They’re a blunt tool, a crude instrument. But that said, if you look at a lot of polls—never just a single poll—I think you can learn a lot about what makes a complex country tick.
REICHARD: Here’s a philosophy question for you: humans seem to want to predict the future, to know what’s going to happen. That desire drives our emphasis on polling. But when the polls are wrong, it causes so much angst. Do you think what happened this year is going to change that?
BOWMAN: It’s really hard to know. I guess people pay casual attention to these pre-election polls. I just have never been certain how much attention they’re paying. I think the pollsters now have a black eye for the second time. And in that sense perhaps people will pay a little less attention to the polls going forward. But if you use them properly and see them just as one of many tools to understand this very complex country, I think they can serve us well.
REICHARD: Karlyn Bowman is a senior fellow at the American Enterprise Institute where she studies polling and public opinion. Thanks so much for joining us today!
BOWMAN: Thank you very much. I enjoyed it. Thank you.