
Forecasts and betting markets won’t give you a better prediction than polls

You really will just have to wait for election returns to know who’s going to win.

Sept. 17, 2024, 3:06 p.m.

Last week I told you why polls aren’t going to tell us who will win the presidential race, no matter how many numbers we get. But polls are not supposed to be predictive. Polls are a snapshot, and nothing more. They can be useful for evaluating the current environment, even if they don’t predict the future.

Forecasts and betting markets, however, do purport to predict an outcome, or at least offer a probability of an outcome. They aren’t as plentiful as polls, but they have become ubiquitous in coverage of major elections over the last dozen or so years.

These won’t tell you what is going to happen, either.

Betting markets are fairly simple to dismiss. Bettors base their moves on polls and forecasts, plus a hefty dose of vibes. These markets typically reflect what polls and data-based forecasts are saying, but they might move a bit more frequently based on a significant event, like a debate. On Polymarket, the odds of Vice President Kamala Harris winning the presidential race increased from 46 percent to 50 percent in the week since her debate with former President Trump, while Trump’s odds dropped from 52 percent to 48 percent. But there is nothing sophisticated about the calculations; it’s simply what buyers and sellers are doing at the moment. Why anyone would use these for anything more than fun is beyond me.

Forecasts based on polls and election “fundamentals,” like what FiveThirtyEight, The Economist, and Nate Silver produce, are more intriguing from an empirical standpoint. The statistical machinations are genuinely challenging and interesting to work on: combining national and state-level polls, economic factors, incumbency factors, and vote history, and then spinning it all up into a state-by-state prediction that can be used to simulate presidential elections. We’re talking thousands and thousands of lines of code.
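To make that concrete, here is a rough sketch of the core of such a forecast, not any outlet’s actual model: each simulated election draws a shared national polling miss and a separate miss for each battleground state, then counts electoral votes. The state margins, error sizes, and safe-state totals below are invented for illustration.

```python
import random

# Hypothetical battleground polling margins (Harris minus Trump, in points)
# and electoral votes. These numbers are illustrative, not real averages.
battlegrounds = {
    "PA": (0.5, 19), "MI": (1.0, 15), "WI": (0.8, 10), "GA": (-0.7, 16),
    "AZ": (-0.5, 11), "NV": (0.3, 6), "NC": (-0.6, 16),
}
SAFE_HARRIS_EV = 226  # assumed non-battleground electoral votes (illustrative)
SAFE_TRUMP_EV = 219

def simulate_once(national_sd=2.0, state_sd=3.5):
    """One simulated election: a shared national polling miss plus an
    independent miss in each state, then an Electoral College tally."""
    national_miss = random.gauss(0, national_sd)
    harris_ev = SAFE_HARRIS_EV
    for margin, ev in battlegrounds.values():
        if margin + national_miss + random.gauss(0, state_sd) > 0:
            harris_ev += ev
    return harris_ev

def harris_win_probability(n_sims=10_000):
    return sum(simulate_once() >= 270 for _ in range(n_sims)) / n_sims

print(f"Simulated Harris win probability: {harris_win_probability():.2f}")
```

Real forecasts layer economic fundamentals, pollster adjustments, and correlated state errors on top of this skeleton, which is where those thousands of lines of code come in.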

Sounds pretty empirical, right? It is, in the sense that you build a big model that takes in a ton of data and puts out a ton of data. But every decision about what goes into the model is subjective: The modeler decides what polls are used, whether they are adjusted for the quality or past accuracy of the pollster, how much any individual poll is able to move the trend, which economic indicators to use, which political factors are important, and how all those are coded in. Make a different decision at any step, and you change the model’s predictions.
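Here is a toy example of how much those choices matter, using three invented polls of the same state: weight them by sample size, by recency, or by a pollster-quality rating, and you get three different “poll averages” before the model even runs.

```python
# Three hypothetical polls of one state: (Harris margin in points, sample
# size, days old, modeler-assigned pollster quality 0-1). All invented.
polls = [(+2.0, 800, 2, 0.9), (-1.0, 1200, 6, 0.6), (+0.5, 600, 10, 0.8)]

def weighted_margin(weight_fn):
    weights = [weight_fn(p) for p in polls]
    return sum(w * p[0] for w, p in zip(weights, polls)) / sum(weights)

# Three defensible weighting schemes, three different averages:
by_size    = weighted_margin(lambda p: p[1])            # trust big samples
by_recency = weighted_margin(lambda p: 1 / (1 + p[2]))  # trust fresh polls
by_quality = weighted_margin(lambda p: p[3])            # trust rated pollsters

print(f"{by_size:+.2f}, {by_recency:+.2f}, {by_quality:+.2f}")  # ~+0.27, +1.00, +0.70
```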

These forecast models are sold to the public as the probability of an outcome—all of those decisions boiled down to a single number. As an example, at the time I am writing this, the FiveThirtyEight forecast gives Harris a 60 percent chance of winning the election. Their analysts correctly call the race a toss-up, but you don’t have to look very far to find people proclaiming this as a huge positive for Harris, because, as research shows, people are really bad at interpreting probabilities. For what it’s worth, Silver, who founded FiveThirtyEight, gives Harris just a 43.5 percent chance of prevailing in the Electoral College. That’s still basically a toss-up.

Most of the 2024 forecasts currently hover in the toss-up range, largely because that’s where the polls say we are: Harris has a national lead of a few points, but the battleground states are closely divided. The forecasts aren’t really telling us much about what will happen in November. Plus, the way most of these models work is that their architects put more emphasis on polls and less on fundamentals as the election gets closer. If the polls stay close, the forecasts won’t get any clearer.

Even if the forecasts do change over the next few weeks to indicate a clear advantage for Harris or Trump, we won’t necessarily know anything about the election’s outcome.

Take the case of 2016: The most generous model gave Trump roughly a 30 percent chance of winning, while others had his chances as low as 1 or 2 percent (I wrote one of these models). According to the models, Hillary Clinton would be the winner. Of course, when Trump won, they all turned out to be “wrong.” By probability standards, though, the modelers’ data indicated that Trump winning was indeed a low-probability event. But was it a 1 or 2 percent chance, or a 30 percent chance? We have no idea—there is no way to assess forecast models by that metric.
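This is the calibration problem. To judge a stated probability, you would need to bucket many forecasts by the probability they gave and check how often the event actually occurred, something like the sketch below (my illustration, not anyone’s actual evaluation code). Presidential elections come once every four years, so the buckets never fill up.

```python
from collections import defaultdict

def calibration_table(forecasts):
    """Group (stated_probability, outcome) pairs into buckets and compare
    each bucket's stated probability with the observed frequency. This is
    only informative when each bucket holds many repeated events."""
    buckets = defaultdict(list)
    for prob, happened in forecasts:
        buckets[round(prob, 1)].append(happened)
    return {b: (sum(v) / len(v), len(v)) for b, v in sorted(buckets.items())}

# One Trump-wins forecast at "30 percent" and one observed outcome: the
# bucket's observed frequency is 1.0 from a single data point, which tells
# us nothing about whether 30 percent (or 2 percent) was the right number.
print(calibration_table([(0.30, 1)]))  # -> {0.3: (1.0, 1)}
```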

We can calculate probabilities all day long, but we have no idea how accurate they are. Polls carry sampling margins of error, plus additional error that the margin doesn’t capture. Models have error of their own. The judgment calls made by those constructing the models introduce still more. We don’t know how big all of that cumulative error is.

I know we all want something to quiet our nerves and tell us what will happen in an incredibly consequential contest, but no person or model can do that.
