With the field for the 2016 nomination nearly filled out, political pundits and prognosticators of all stripes are making their assessments about who is a viable candidate and who is not, which "lane" a given candidate is competing in and whom they might be blocking, and how the overall campaign is likely to play out. One prominent forecaster, Sean Trende of RealClearPolitics.com, takes to Politico Magazine this morning to urge a little caution about how much stock to put in all of these predictions:
Election after election around the world has proven that electoral predictions aren’t always to be trusted. In March, analysts failed to foresee Israeli Prime Minister Benjamin Netanyahu’s victory. In May, most people thought the Tories were doomed in England. Last weekend, pollsters got the Greek referendum result badly wrong. For political forecasters like me, the big worry is that America is next.
The sudden emergence of Nate Silver of FiveThirtyEight.com in 2008 helped to bring scores of previously obscure electoral forecasters to the forefront of American culture, and enabled dozens of writers with similar interests to make careers out of what looked like a hobby. But the amount of faith the public now puts in us is misplaced. Electoral modelers have a nerdy little secret: We aren’t oracles. Draw back the curtain, and you’ll see that we are only as good as the polls we rely on and the models we invent. And there are real problems with both.
That’s why the “data journalism” movement contains the seeds of its own destruction. The danger lies in data journalists’ tendency to belittle skeptics and other analysts who get it wrong. Worse is the distinct tendency to downplay how much uncertainty there is around our forecasts. This is a shame, because sooner or later—probably sooner—the models are going to miss in an American presidential election and data journalism as a whole is going to suffer...
Trende provides a behind-the-scenes view of what election analysts like him do with the polling data that comes in:
For most election analysts, the raw material for a prediction comes in the form of polling data. In theory, polls represent random samples employing uniform methodologies that are lightly weighted. In reality, pollsters use a variety of sampling methods, and then heavily weight the data before (and sometimes after) pushing it through varying voter screens. Much of this is considered proprietary, so we don’t really know what is going on, but suffice it to say that pollsters aren’t just presenting “pristine” random samples.
Even worse, pollsters seem to be increasingly engaging in something called poll herding: a tendency to either re-weight an outlying poll to fall in line with other pollsters or to fail to publish outlying polls altogether. In 2014 alone we saw evidence that PPP, Rasmussen Reports, Gravis Marketing and Hampton University all refused to release polls; forecasters suspect that there are many more instances like this (at least two of these polls were released by accident), but it is unknowable just how many.
This matters, because if a race shifts, or if the herd is wrong, pollsters will be unable to pick up on the movement—there is a collective “you first” tendency when the data suggest pollsters should break out of the herd. Moreover, for technical reasons, models that are denied access to outlying results will tend to understate the uncertainty of their predictions. The result, then, can be the types of massive misses that we saw in the recent elections in the United Kingdom and Israel...
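Trende's claim that suppressing outlying polls understates uncertainty is easy to see with a quick simulation. The sketch below is purely illustrative and not drawn from Trende's analysis: the 50 percent "true" support, the 800-person samples, and the two-point herding threshold are all made-up parameters. It generates a batch of honest polls, then models herding by "publishing" only polls that land close to the running average, and compares the spread of the two sets:

```python
import random
import statistics

random.seed(42)  # reproducible illustration

TRUE_SUPPORT = 0.50    # hypothetical candidate's true vote share (assumption)
N_RESPONDENTS = 800    # respondents per poll (assumption)
N_POLLS = 1000         # number of simulated polls (assumption)

def run_poll():
    """One poll: a simple random sample of N_RESPONDENTS voters."""
    hits = sum(random.random() < TRUE_SUPPORT for _ in range(N_RESPONDENTS))
    return hits / N_RESPONDENTS

polls = [run_poll() for _ in range(N_POLLS)]

# "Herded" publication: suppress any poll that falls more than
# two points from the average of the polls released so far.
published = [polls[0]]
for p in polls[1:]:
    if abs(p - statistics.mean(published)) <= 0.02:
        published.append(p)

spread_all = statistics.stdev(polls)       # true sampling spread
spread_pub = statistics.stdev(published)   # spread a model actually sees

print(f"all polls:       n={len(polls)},  spread={spread_all:.4f}")
print(f"published polls: n={len(published)}, spread={spread_pub:.4f}")
```

In this toy setup every poll is unbiased, yet the published set shows a visibly tighter spread than the full set, so a model fit only to the published polls would conclude the race is more certain than the underlying sampling error warrants.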
The point, of course, isn't that election analysts should be ignored, but it is worth keeping in mind that the experts, as Trende notes, aren't oracles.