With Athens behind us, a new sport has burst into the limelight: poll-watching. If you thought Olympic commentators could be predictable, stand by for four weeks of, “How should we interpret the latest poll figures, Minister?” and “Do you feel you’re the underdog in this race?” and the hoary reply, “At the end of the day, there’s only one poll that matters”. But do the polls lie? And if so, how often?
In an Australian Journal of Political Science article following the last federal election, Justin Wolfers and I noted that the two grandes dames of election polling, Morgan and Newspoll, had similar success rates in forecasting the election winner. In its election-eve polls, Morgan got it wrong in three of the past six elections (1990, 1993, 2001), while Newspoll did only marginally better, incorrectly calling two of the six (1993, 1998). Relative newcomer AC Nielsen correctly forecast the 2001 election, but has yet to build a long track record. Indeed, we found that in 2001, election betting markets, run by the Northern Territory bookmaker Centrebet, were a better guide than the pollsters (as in horse-racing, when there’s money on the line, bookies have a strong incentive to get the odds right).
It is hardly surprising that pollsters don’t do a perfect job of predicting elections. One problem is that voting patterns are never stable: my research shows that, on average, about 10 per cent of us change our vote from one election to the next. But a bigger issue is that since a typical poll samples only 1,000 to 2,000 voters, we cannot be confident that the poll result is an accurate reflection of the whole electorate.
What is the right margin of error to employ? The most common approach is to report a margin such that in 19 polls out of 20, the gap between the true figure and the poll estimate will be smaller than that margin. If a poll samples 1,000 people, its sampling error is about 3 per cent either way. With a sample of 2,000, the sampling error falls to plus or minus 2.2 per cent. Sample sizes for recent polls have been 1,100 for Newspoll, 1,400 for AC Nielsen and 1,900 for Roy Morgan.
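For readers who want to check the arithmetic, these figures follow from the standard 95 per cent confidence interval for a proportion near 50 per cent. Here is a minimal sketch, assuming simple random sampling and an evenly split vote; the function name and layout are mine, not anything the pollsters publish:

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """95 per cent margin of error for a polled proportion.

    Assumes simple random sampling; z = 1.96 gives the
    '19 polls out of 20' coverage described above.
    """
    return z * sqrt(p * (1 - p) / n)

# Round-number samples mentioned in the text:
# n=1000 gives about +/-3.1%, n=2000 about +/-2.2%.
print(f"n=1000: ±{margin_of_error(1000):.1%}")
print(f"n=2000: ±{margin_of_error(2000):.1%}")

# Sample sizes quoted in the article for the recent polls.
for pollster, n in [("Newspoll", 1100), ("AC Nielsen", 1400), ("Roy Morgan", 1900)]:
    print(f"{pollster}: n={n}, margin of error = ±{margin_of_error(n):.1%}")
```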
But although the sampling error is sometimes noted in small print at the foot of an article, it rarely makes its way into the text. By contrast, the best US papers take a much more careful approach, explicitly using the statistical margin of error in discussing the results. This better informs the reader, and can be done without needless jargon. For example, the New York Times last week said of the US Presidential contest: “the Times poll and several others released on Thursday showed the race to be deadlocked, with neither candidate holding a lead beyond the margin of sampling error.”
Taking into account sampling error, what do the polls tell us about the Australian race? In their latest polls, AC Nielsen and Roy Morgan have Labor with a lead that exceeds the margin of error. However, according to a Newspoll released yesterday, the gap between the two parties is smaller than the sampling error.
Another factor to remember is that the sampling error when comparing two polls is larger still, since each poll carries its own margin of error, and the two errors compound (the margin on the change is about 1.4 times the margin on a single poll). For example, while the usual sampling error for a single AC Nielsen poll is plus or minus 2.6 per cent, the margin of error on a movement from one AC Nielsen poll to the next is plus or minus 3.6 per cent.
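The factor of 1.4 is the square root of two: when two independent polls are compared, their sampling variances add, so the margin on the difference is the single-poll margin times the square root of two. A quick sketch under that independence assumption (it lands at roughly 3.7 per cent; the 3.6 per cent above reflects slightly different rounding):

```python
from math import sqrt

# Margin of error for the *change* between two independent polls:
# variances add, so the combined margin is sqrt(2) times the
# single-poll margin.
single_poll = 0.026  # AC Nielsen single-poll margin, as above
poll_to_poll = sqrt(2) * single_poll
print(f"poll-to-poll margin: ±{poll_to_poll:.1%}")  # about ±3.7%
```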
The bottom line? Changes in the polls from one week to the next are even more error-prone than the polls themselves. So statements like “since the last poll, Labor’s vote share is up 2 per cent” should be taken with a pinch of salt.
Accurate reporting of the polls may make for less reading of the tea leaves by the nation’s amateur psephologists. But if this clears more space for journalism about the parties’ vision for the future, that’s no bad thing.