
The US election result came as an absolute shock to many, but it was the pollsters who took the biggest hit. The major poll-based forecasts, the statistical models, the prediction markets, even the superforecaster crowd all got it wrong. They estimated high probabilities of a Clinton victory, even if some were more careful than others in noting that the race would be very tight.

Our prediction survey, however, was spot on, thanks to the method we developed at Oraclum Intelligence Systems, a start-up that grew out of our academic work. We predicted a Trump victory, and we called all the major swing states in his favour: Pennsylvania (which not a single pollster gave to him), Florida, North Carolina, and Ohio. We gave Virginia, Nevada, Colorado, and New Mexico to Clinton, along with the usual red and blue states to each candidate. We missed only three: New Hampshire, Michigan, and Wisconsin (although for Wisconsin we did not have enough survey respondents to make our own prediction, so we had to use the average of polls instead).

The only misses directly resulting from our method were Michigan, where it gave Clinton a 0.5-point lead, and New Hampshire, where it gave Trump a one-point lead. We called every other state correctly, however close it was. In Florida, for example, we estimated 49.9 per cent for Trump vs. 47.3 per cent for Clinton; the final result was 49.1 to 47.7. In Pennsylvania we had 48.2 for Trump vs. 46.7 for Clinton (it was 48.8 to 47.6 in the end). In North Carolina our method gave 51 per cent to Trump vs. 43.5 per cent to Clinton (Clinton got a bit more, 46.7, but Trump was spot on at 50.5).

Our model even gave Clinton a better chance of winning the overall popular vote than the Electoral College, which also proved correct. Overall, on average across the states, we were right to within a single percentage point. Our full analysis is here. It was a risky prediction, particularly in the US, where the major forecasters and pollsters had always been so good at making correct calls. But we were convinced that the method was sound, even though it offered, at first glance, very surprising results.

Why did we get it so right when other more established pollsters got it so wrong?

We used a different type of survey called a prediction survey. The established poll-based forecasters usually pick up the “low-hanging fruit” of existing polling data and run it through an elaborate model. We, on the other hand, needed to get actual people to come to our site and take the time to make a prediction for their state. So instead of picking up raw data and twisting it as far as it would go, we had to build our own. Because we were doing this with limited resources, our sample size was rather small (N=445).

However, even with a small sample the method works. Why? Our survey asks respondents not only who they intend to vote for, but also who they think will win, by what margin, and who they believe other people think will win. It is essentially the “citizen forecaster” concept, adjusted with a question on groupthink. The idea is to incorporate the wider influences, including peer groups, that shape an individual’s choice on voting day. This is also why the method is well suited to being conducted via social networks.
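
To make the mechanics concrete, here is a minimal sketch in Python of how responses to such a prediction survey might be aggregated into a state-level forecast. The survey questions mirror those described above, but the data and the weighting scheme (giving extra weight to respondents whose own forecast agrees with the consensus they perceive) are purely illustrative assumptions, not Oraclum’s actual formula.

```python
from collections import defaultdict

# Hypothetical aggregation sketch for a prediction survey. Every field
# name, data point, and weight below is illustrative only; the actual
# Oraclum aggregation formula is not given in the text.

# Each record: (state, own_vote, predicted_winner, predicted_margin_pct,
# perceived_consensus) -- mirroring the survey questions described above.
RESPONSES = [
    ("FL", "Clinton", "Trump",   2.0, "Trump"),
    ("FL", "Trump",   "Trump",   3.5, "Trump"),
    ("FL", "Clinton", "Clinton", 1.0, "Clinton"),
    ("PA", "Trump",   "Trump",   1.5, "Clinton"),
    ("PA", "Clinton", "Trump",   2.0, "Trump"),
]

def forecast_state(rows):
    """Turn respondents' forecasts (not their vote intentions) into a
    weighted net margin per candidate. A respondent whose own forecast
    matches the consensus they perceive around them gets extra weight --
    one simple way to fold the groupthink question into the estimate."""
    margins = defaultdict(float)
    total_weight = 0.0
    for _, _, winner, margin, consensus in rows:
        weight = 1.5 if winner == consensus else 1.0  # assumed weights
        margins[winner] += weight * margin
        total_weight += weight
    return {cand: round(m / total_weight, 2) for cand, m in margins.items()}

# Group responses by state and print the per-state forecast.
by_state = defaultdict(list)
for row in RESPONSES:
    by_state[row[0]].append(row)

for state, rows in sorted(by_state.items()):
    print(state, forecast_state(rows))
```

The point of the sketch is that the estimate comes from what respondents forecast about their state, not from who they are, which is why a sample of this kind does not need to be demographically representative in the way a conventional poll does.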

Our model, in other words, did not require a representative sample to make a good prediction. And this is the biggest problem pollsters are currently struggling with: how to build a more representative sample. Our method goes beyond representativeness, self-selection problems, and random sampling, and focuses simply on finding out how people assess their local conditions and sentiments. And people are pretty good at this.

As a final sense check, we tested the same method on the Brexit referendum, where it produced equally striking results. We tested six models, three of which showed Leave and three Remain. At the time we were not concerned with calling the result; we just wanted to see which method performed best. The one that gave 51.3 per cent for Leave is the same one that predicted the victory for Donald Trump. We intend to test it further. This is, therefore, just the beginning.
