Do polls lie?

By Andrew Leigh - posted Friday, 10 September 2004


With Athens behind us, a new sport has burst into the limelight: poll-watching. If you thought Olympic commentators could be predictable, stand by for four weeks of, “How should we interpret the latest poll figures, Minister?” and “Do you feel you’re the underdog in this race?” and the hoary reply, “At the end of the day, there’s only one poll that matters”. But do the polls lie? And if so, how often?

In an Australian Journal of Political Science article following the last federal election, Justin Wolfers and I noted that the two grand dames of election polling - Morgan and Newspoll - had similar success rates in forecasting the election winner. In its election-eve polls, Morgan got it wrong in three of the past six elections (1990, 1993, 2001), while Newspoll did only marginally better, incorrectly calling two of the six (1993, 1998). As for relative newcomer AC Nielsen, it correctly forecast the 2001 election but is yet to demonstrate a long track record. Indeed, we found that in 2001, election betting markets, run by the Northern Territory bookmaker Centrebet, were a better guide than the pollsters (as in horse-racing, when there’s money on the line, bookies have a strong incentive to get the odds right).

It is hardly surprising that pollsters don’t do a perfect job of predicting elections. One problem is that voting patterns are never stable: my research shows that, on average, about 10 per cent of us change our vote from one election to the next. But a bigger issue is that since a typical poll samples only 1,000 to 2,000 voters, we can’t be confident that the poll result is an accurate reflection of the whole electorate.


What is the right margin of error to employ? The most common approach is to use a margin of error such that in 19 polls out of 20, the gap between the real figure and the poll estimate will be smaller than the sampling error. If the poll samples 1,000 people, its sampling error will be 3 per cent either way. With a sample of 2,000, the sampling error falls to plus or minus 2.2 per cent. The sample sizes of recent polls have been 1,100 for Newspoll, 1,400 for AC Nielsen and 1,900 for Roy Morgan.
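These figures come from the standard formula for the margin of error on a sample proportion: about 1.96 times the square root of p(1 - p)/n at the conventional 95 per cent confidence level, which is largest when p equals 0.5. Here is a minimal sketch in Python that reproduces the numbers above (the function name and the worst-case p = 0.5 are illustrative assumptions, not something from the article):

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # 95 per cent margin of error for a sample proportion:
        # z * sqrt(p * (1 - p) / n), which is largest at p = 0.5.
        return z * math.sqrt(p * (1 - p) / n)

    # Sample sizes quoted in the article
    for name, n in [("Newspoll", 1100), ("AC Nielsen", 1400), ("Roy Morgan", 1900)]:
        print(f"{name}: n = {n}, +/- {100 * margin_of_error(n):.1f} per cent")

    # The round-number cases in the text: roughly 3 and 2.2 per cent
    print(f"n = 1000: +/- {100 * margin_of_error(1000):.1f} per cent")
    print(f"n = 2000: +/- {100 * margin_of_error(2000):.1f} per cent")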

But although the sampling error is sometimes noted in small print at the foot of an article, it rarely makes its way into the text. By contrast, the best US papers take a much more careful approach, explicitly using the statistical margin of error in discussing the results. This better informs the reader, and can be done without needless jargon. For example, the New York Times last week said of the US Presidential contest: “the Times poll and several others released on Thursday showed the race to be deadlocked, with neither candidate holding a lead beyond the margin of sampling error.”

Taking into account sampling error, what do the polls tell us about the Australian race? In their latest polls, AC Nielsen and Roy Morgan have Labor with a lead that exceeds the margin of error. However, according to a Newspoll released yesterday, the gap between the two parties is smaller than the sampling error.

Another factor to remember is that the sampling error when comparing two polls is larger still, since both polls have their own margins of error. For example, while the usual sampling error for a single AC Nielsen poll is plus or minus 2.6 per cent, the standard error of a movement from one AC Nielsen poll to the next is plus or minus 3.6 per cent.
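This follows from treating the two polls as independent samples: their errors combine by adding variances, so the margin of error on the change is the square root of two times the single-poll margin, which lands close to the article’s plus or minus 3.6 per cent figure (the small difference is rounding). A rough continuation of the sketch above, again as an illustration rather than the author’s own calculation:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        return z * math.sqrt(p * (1 - p) / n)

    single = margin_of_error(1400)    # one AC Nielsen-sized poll, about 2.6 per cent
    change = math.sqrt(2) * single    # movement between two such polls, about 3.7 per cent
    print(f"single poll: +/- {100 * single:.1f} per cent")
    print(f"poll-to-poll change: +/- {100 * change:.1f} per cent")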

The bottom line? Changes in polls from one week to another are even more error-prone than the polls themselves. So statements like, “since the last poll, Labor’s vote share is up 2 per cent”, should be taken with a pinch of salt.

Accurate reporting of the polls may make for less reading of the tea leaves by the nation’s amateur psephologists. But if this clears more space for journalism about the parties’ vision for the future, that’s no bad thing.

First published in the Sydney Morning Herald September 1, 2004.




About the Author

Andrew Leigh is the member for Fraser (ACT). Prior to his election in 2010, he was a professor in the Research School of Economics at the Australian National University, and has previously worked as associate to Justice Michael Kirby of the High Court of Australia, a lawyer for Clifford Chance (London), and a researcher for the Progressive Policy Institute (Washington DC). He holds a PhD from Harvard University and has published three books and over 50 journal articles. His books include Disconnected (2010), Battlers and Billionaires (2013) and The Economics of Just About Everything (2014).

Related Links
AC Nielsen
Newspoll
Roy Morgan