Statistics and Election Forecasting

With election day past us now, I have a moment to reflect upon how uncannily accurate Nate Silver and crew's predictions of the election were.  I became quite a FiveThirtyEight.com junkie as the election approached, and I think that the stunning success they demonstrated in predicting all sorts of elections yesterday holds lessons for the way we do social science more generally.

The predictions at FiveThirtyEight.com start with the basic premise that all polls are wrong, but that taken in aggregate they provide a great deal of very useful information.  Basically, they aggregated information from a large number of polls and weighted the contributions of the different polls based on their reliability scores.  These reliability scores are based on three things: (1) the pollster's accuracy in predicting recent election outcomes, (2) the poll's sample size, and (3) the recentness of the poll.  Pollsters who have done well in the past typically do well in the present.  Polls of many potential voters are more precise than polls of a small number of voters.  Recent polls are more salient than polls taken a while ago.  The site provides a very detailed account of how it calculates its reliability scores, particularly in terms of pollster accuracy.

The weighted polling data were then further adjusted for trends in the polls.  For their projections, they then took the adjusted polling data and ran regressions on a variety of social and demographic variables for the different polled populations.  Using these regressions, they were able to calculate a snapshot in time of how each state would likely vote if the election were held on that day.  These snapshots were then projected forward to the November election.  Finally, they ran simulations over their projections (10,000 at a time) to understand how the various forms of uncertainty were likely to affect the outcomes.
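FiveThirtyEight doesn't publish its code, and the description above is my own paraphrase, but the weight-then-simulate logic is easy to sketch.  Here is a toy version in Python; the particular weighting formulas, the recency half-life, and all of the poll numbers are invented for illustration and are not taken from the site:

```python
import math
import random

# Hypothetical poll records: "margin" is the leading candidate's lead in points,
# "pollster_error" is that pollster's average miss in recent elections.
polls = [
    {"pollster_error": 1.5, "sample_size": 1200, "days_old": 2,  "margin": 6.0},
    {"pollster_error": 2.5, "sample_size": 600,  "days_old": 10, "margin": 4.0},
    {"pollster_error": 3.0, "sample_size": 800,  "days_old": 21, "margin": 8.0},
]

def poll_weight(poll, half_life_days=14.0, baseline_n=600):
    """Combine the three reliability ingredients into one weight:
    past pollster accuracy (lower historical error -> more weight),
    sample size (larger -> more weight), and recency (newer -> more weight)."""
    accuracy = 1.0 / poll["pollster_error"]
    size = math.sqrt(poll["sample_size"] / baseline_n)
    recency = 0.5 ** (poll["days_old"] / half_life_days)
    return accuracy * size * recency

def weighted_average_margin(polls):
    """Aggregate the polls into a single weighted-average margin."""
    weights = [poll_weight(p) for p in polls]
    return sum(w * p["margin"] for w, p in zip(weights, polls)) / sum(weights)

def simulate_win_probability(mean_margin, margin_sd=3.0, n_sims=10_000):
    """Monte Carlo step: draw the election-day margin from a normal distribution
    around the aggregated estimate and count how often it stays positive."""
    wins = sum(1 for _ in range(n_sims) if random.gauss(mean_margin, margin_sd) > 0)
    return wins / n_sims

margin = weighted_average_margin(polls)
print(f"aggregated margin: {margin:+.1f} points")
print(f"simulated win probability: {simulate_win_probability(margin):.2%}")
```

The point of the toy is just the structure: each reliability ingredient multiplies into a single weight, the weights combine the polls into one estimate, and the simulation step turns that estimate plus its uncertainty into a probability of winning.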

FiveThirtyEight.com projected that Obama would win with 348.6 electoral votes.  The current count is (provisionally) 364.  Pretty darn good, given the manifold uncertainties.  What is even more stunning is a comparison of the projected/realized election maps.  Here is the actual electoral map (as of today, 5 November at 21:00 PST):

Electoral Map on 6 November 2008

Here is their final projection:

Final fivethirtyeight.com Projection

Hard to imagine it being much righter...

In the social sciences, and especially in anthropology, we gather crappy data all the time. There is generally much more that we can do with these data than is usually done.  I find that the FiveThirtyEight.com methodology has a lot to offer social science.  In particular, I really like the focus on prediction of observable quantities. Too often, we get caught up in the cult of the p-value and focus on things that are ultimately unknowable and unmeasurable (the "true" value of a population parameter, for example).  Predicting measurables, and then adjusting the weight one gives particular predictions based on their past performance, seems like a very reasonable tool for other types of social (and natural) science applications. 

I need to think more about specific anthropological applications, but I am intrigued at least by the idea that one could use the clearly biased results of some assay of behavior to nonetheless make accurate predictions of some outcome of interest. In the case of elections, the assay is polling.  In the anthropological case, it might be the report of an informant in one's ethnographic investigation.  We know that informants (like pollsters, or ethnographers for that matter) may have a particular agenda. But if we could compare the predictions based on an ethnographic interview with a measurable outcome, adjust the weight given to each informant's predictions based on their past predictive performance, and then aggregate the predictions of many informants, we might have a powerful, scientific approach to some ethnographic questions, one that acknowledges the inherent bias and subjectivity of the subject matter but nonetheless makes meaningful scientific predictions. 
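To make that slightly more concrete, here is a minimal sketch of the kind of bookkeeping I have in mind, with entirely invented informants and numbers; the "prediction" could be any measurable quantity (households planting a given crop, attendance at a ritual, and so on):

```python
# Hypothetical example: each informant has predicted several observable outcomes
# in the past; weight each one by how closely those predictions matched what was
# later measured, then pool their current predictions into one estimate.
informants = {
    "informant_A": {"past_errors": [0.5, 1.0, 0.8], "current_prediction": 12.0},
    "informant_B": {"past_errors": [2.5, 3.0, 2.0], "current_prediction": 20.0},
    "informant_C": {"past_errors": [1.2, 0.9, 1.5], "current_prediction": 15.0},
}

def reliability_weight(past_errors):
    """Inverse mean absolute error: informants whose earlier reports tracked
    the measured outcomes more closely get more say in the pooled estimate."""
    return 1.0 / (sum(past_errors) / len(past_errors))

def pooled_prediction(informants):
    """Weighted average of the current predictions across all informants."""
    weights = {name: reliability_weight(d["past_errors"]) for name, d in informants.items()}
    total = sum(weights.values())
    return sum(weights[name] * d["current_prediction"] for name, d in informants.items()) / total

print(f"pooled prediction: {pooled_prediction(informants):.1f}")
```

The informant with the smallest historical errors dominates the pooled estimate, which is exactly the behavior we want: bias and agenda are not wished away, they are simply down-weighted in proportion to how badly they have misled us before.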

I'm just thinking out loud here.  Clearly, I need to add some more specifics to have this make sense. Perhaps I will take up this thread again in the future.  For now, I just want to pass along kudos once more to FiveThirtyEight.com for a job very well done.