Poll of Polls

Beginning with the Campaign of 2004, and to an even greater extent in the Campaign of 2008, news organizations, think tanks, and academic institutions conducted numerous, almost daily public opinion polls. Many news organizations chose to summarize this plethora of polls by simply averaging them, in some cases regardless of differences in sampling techniques and margins of error. News organizations such as CNN went so far as to claim that their averaging process removed all sampling error (which it did not). All polling contains error; poll results may vary because of systematic sampling bias or random sampling error. In both 2004 and 2008, polling firms varied greatly in how they measured “likely voters,” which also led to differences in predictions and created interpretation problems when polls based on different likely-voter strategies were pooled together to calculate an average result.

Aggregation is most helpful when the polling questions are identical or nearly identical (as is the case with a presidential campaign tracking poll) and when the strategy for measuring likely voters is similar. In such cases, aggregation should tend to lower measurement error, because multiple sample means (rather than a single sample mean) are being used to estimate the probable location of the true population mean. This is the theory underlying FiveThirtyEight, a prediction-based blog run by Nate Silver, and Pollster, hosted by the Huffington Post. These polls of polls aggregate all available polls taken during a given time period and assign weights based on factors such as sample size and polling technology (e.g., whether a poll is landline-only, whether it is automated or uses a live interviewer, whether it is conducted online). In the Campaign of 2012, Silver’s FiveThirtyEight blog predicted that Democratic incumbent Barack Obama would win by a small margin over Republican nominee Mitt Romney, because that is what the preponderance of the polls, in states adding up to a sufficient number of electoral votes, predicted. The only poll that predicted otherwise was Gallup, which predicted a win by Romney. This made the Gallup poll an outlier relative to the other polls. For Gallup to be correct, one would have to assume that Gallup knew something about the electorate that every other poll did not; that is, that every other poll made a fundamental error, and that Gallup had somehow avoided committing this error. In this scenario, it makes more sense to conclude that Gallup is an outlier because Gallup is the poll that is in error. Statistically, it is far likelier that one poll is in error than that all polls except one are in error.
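
The statistical intuition behind such averaging can be illustrated with a small, hypothetical calculation. The sketch below is not FiveThirtyEight’s or Pollster’s actual model; the poll names, percentages, and sample sizes are invented, and weighting by sample size alone is a deliberate simplification of the weighting factors described above. It shows why pooling several polls that ask the same question of the same population yields a smaller margin of error than any single poll.

```python
# Hypothetical illustration of poll aggregation (not FiveThirtyEight's or
# Pollster's actual method). Each poll reports a candidate's share and a
# sample size; weights here are proportional to sample size only.
import math

polls = [
    # (poll name, candidate's share in %, sample size) -- invented numbers
    ("Poll A", 51.0, 1000),
    ("Poll B", 49.5, 800),
    ("Poll C", 50.5, 1200),
    ("Poll D", 52.0, 600),
]

total_n = sum(n for _, _, n in polls)

# Sample-size-weighted average of the reported shares.
weighted_avg = sum(share * n for _, share, n in polls) / total_n

# Approximate 95% margins of error under simple random sampling: a poll's
# margin of error shrinks roughly with 1/sqrt(n), so pooling the samples
# gives a smaller margin than any single poll -- the rationale for
# averaging polls that ask the same question in the same way.
single_moe = 1.96 * math.sqrt(0.5 * 0.5 / polls[0][2]) * 100
pooled_moe = 1.96 * math.sqrt(0.5 * 0.5 / total_n) * 100

print(f"Weighted average: {weighted_avg:.1f}%")
print(f"MOE of one 1,000-person poll: +/-{single_moe:.1f} points")
print(f"MOE of pooled sample (n={total_n}): +/-{pooled_moe:.1f} points")
```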

See also Poll-Driven Campaign

Additional Resources

FiveThirtyEight. http://fivethirtyeight.com/.

Pollster, Huffington Post. http://elections.huffingtonpost.com/pollster.