This time, we were prepared for President-elect Trump to win. That alone illustrates that polls did a better job showing the state of the race than in either of his two previous runs. This year’s polls showed a close race, and there was a sense in the closing weeks of the campaign that Trump’s odds were improving ever so slightly, even as most polling averages and forecasts still showed a toss-up race.
Compare that to 2016 and 2020, when Trump victories seemed like fairly remote possibilities. In 2016, his win was shocking; in 2020, the narrowness of President Biden’s win was astonishing. This year, there are a lot of feelings about Trump’s win, but that sense of “it couldn’t happen” was notably absent. We all knew it could.
That’s not to say the polls were perfect; they were not and never will be. Most of this year’s polls will be within their margins of error of the actual result, which is a great outcome for pollsters. At the same time, though, many polls still undershot Trump’s support—by less than in 2016 and 2020—in a continuation of the now-familiar pattern. This trend will be a focus of plenty of analysis, including a review by the American Association for Public Opinion Research that is already underway.
The more complicated discussion about polls is what we saw in the crosstabs. We assess a poll’s accuracy by its topline numbers among all likely voters. Yet narratives about what’s happening in an election are very often driven by subgroups within the polls: Black voters, Hispanic voters, young voters, women, and so on. In this election cycle, as always, some of these subgroup poll narratives turned out to be correct, and some turned out to be wildly wrong because of something in the polls’ samples or methods.
A huge caveat before going deeper on the subgroups: Not all of the votes have been counted, and the Exit Poll and AP VoteCast (the two surveys used to assess actual voter behavior after the election) will not be final until the count is complete and the surveys can be weighted to the full results. Additionally, when looking at crosstabs in both of these sources, we have to remember that they are themselves surveys with error of their own, and that error is larger for smaller subgroups. For these reasons, I am citing only general trends in what follows, not specific numbers.
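To make that caveat concrete, here is a rough back-of-the-envelope sketch of why crosstabs are noisier than toplines. It is my own illustration, not the column’s: the sample sizes are hypothetical, and it assumes simple random sampling, which real polls do not use.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion under
    simple random sampling (an idealized assumption)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll of 1,000 likely voters: roughly +/- 3.1 points.
print(f"Topline (n=1,000): +/-{100 * margin_of_error(1000):.1f} pts")

# A hypothetical subgroup of 100 respondents within that same poll:
# roughly +/- 9.8 points, about three times the topline error.
print(f"Subgroup (n=100):  +/-{100 * margin_of_error(100):.1f} pts")
```

In practice, weighting and other design effects make the effective error larger still; the point is only the relative scale, which is why a subgroup shift has to be large before it clears the noise.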
Among the correct patterns that some polls picked up on were Latinos shifting toward Republicans and, to a lesser extent, Black men shifting in that direction. Some polls showed young voters were less Democratic-leaning than usual and seniors were more Democratic-leaning than usual. This age depolarization seems to have been real, with the youngest voters divided by gender—young men pulled the age group to the right.
Yet two of the biggest trends in some polling did not pan out. The gender gap overall was not much wider than it has been in recent years, despite pre-election polls that showed a historically wide divide. And white women did not go anywhere. They remained mostly unchanged, as did college-educated women, contrary to some polls showing these groups shifting left.
That some subgroup patterns in pre-election polls were right and some were wrong poses a significant challenge: How can we improve our methods and analysis to figure out which is which, rather than chasing narratives that mislead people? It’s not that some polls got every subgroup right and others got every subgroup wrong; most had a mix of correct and incorrect trends within the same set of results.
To be honest, I’m less worried about underestimating Trump by 2 or 3 points than I am about these subgroup patterns and their power to mislead about who is moving and who is persuadable. In the media, a faulty subgroup finding can sustain an ongoing news narrative built on an incorrect estimate, damaging both the media’s and polling’s credibility. In a campaign, it can send finite resources in the wrong direction and cost critical votes.
Some polling critics advised the public to ignore all of the subgroups in a poll (don’t go “crosstab diving!”). But this is unreasonable: We have the data, and some of it is clearly indicating real trends.
Nor should we believe every number that a poll provides. We need a way to know the difference.
I don’t have a satisfactory answer for how to solve this at the moment, and if I do develop an answer, it will require more than an 800-word column. But the subgroups are where I’m focusing my questions. After all, if we get the subgroups right, we will get the topline right.
Contributing editor Natalie Jackson is a vice president at GQR Research.