
Lies, damned lies and opinion polls: some essential background

By Sarah Miskin - posted Wednesday, 23 June 2004


Today, the public is polled from many different angles on a wide range of issues, and the results are highlighted in newspaper, magazine and television reports. Polling methods vary from questioning randomly selected respondents in telephone interviews to tallying self-selected respondents who call in or click a response button on a web page. Poll results are widely regarded as an accurate gauge of the public’s mood, and apparent shifts in policy after results are published have led to accusations that today’s politicians are opinion poll-driven rather than policy-driven.

Some of the most keenly watched polls, especially in the months before an election, are those on party support, leadership and political issues. Concern about the impact of such polls on voting behaviour has led some countries to ban their publication immediately before elections, even though the evidence that polls influence vote choice is slim. Between elections, opinion polls are used to assess party leadership and policy proposals. Parties may remove leaders who are unpopular in the polls, even if those leaders are popular with their party colleagues.

However, despite the emphasis that the media and, arguably, politicians place on poll results, an important question is whether opinion polls, in fact, tell us anything useful.


“Off the top of the head” replies

An American academic who specialises in polling, James Fishkin, criticises “ordinary” polls on the grounds that they measure only “off the top of the head” responses to questions to which respondents have given little thought. He claims the only useful poll is a “deliberative poll”, in which respondents are taken aside (often for a day or a weekend), exposed to the complexity of an issue, and then asked for their considered opinions. Deliberative polls may answer criticisms such as those eloquently summarised by a New York Times commentator who wrote of being polled on the 2003 war in Iraq:

Please don’t call and ask me about this war. Don’t ask if I strongly approve or partly approve or strongly disapprove … [especially when I feel] gung-ho at breakfast time, heartsick by lunch hour, angry at supper, all played out by bedtime and disembodied in the middle of the night when I wake up to check the cable news scrolls.

Reporting essential details

The Australian Press Council has guidelines outlining the details that should be published in opinion poll reports. These include: the identity of the poll sponsor (if any) and the name of the polling organisation, the question wording, the sample size and method, the population from which the sample was drawn, and which of the results are based on only part of the sample (for example, male respondents). The council also suggests that reports include how and where the interviews were held as well as the date of the interviews.

It notes that space reasons may restrict the number of details that are published, but it argues that, where a poll has a “marked political content”, “more information is needed”. It adds: “The public needs to be able to judge properly the value of the poll being reported.”

Polling methods and pitfalls

Some of the details listed above are essential to understanding a poll, especially whether its results are useful. The margin of error (or sampling error) is an oft-overlooked part of polling that can have significant effects on the utility of results, especially those that are within a few percentage points of one another. [Note the difference between ‘per cent’ and ‘percentage point’. An increase from 40 per cent to 50 per cent, for example, is not an increase of 10 per cent (10 per cent of 40 is four, which would take the initial figure to 44 per cent); it is an increase of 10 percentage points. This is a common reporting error.] Generally, if respondents are selected at random and are sufficiently numerous, then their answers will deviate only slightly from those that would have been given if every eligible voter had been polled. The margin of error is the maximum likely difference between the poll result and that of the voting population at large.
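To make the arithmetic concrete, the sketch below (an illustration added here, not part of the original Research Note) applies the standard simple-random-sampling formula for the margin of error at a 95 per cent confidence level, and shows the per cent versus percentage point distinction. The sample size of 1,100 is assumed purely for illustration.

```python
# Illustrative sketch only: margin of error for a simple random sample at
# 95 per cent confidence, MOE = 1.96 * sqrt(p * (1 - p) / n).
import math

def margin_of_error(p, n, z=1.96):
    """Return the margin of error, in percentage points, for a proportion p
    estimated from a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# A poll of roughly 1,100 randomly selected voters showing 50 per cent support
# has a margin of error of about plus or minus 3 percentage points.
print(round(margin_of_error(0.5, 1100), 1))   # ~3.0

# 'Percentage points' versus 'per cent': moving from 40 to 50 per cent support
# is a rise of 10 percentage points, but a rise of 25 per cent in relative terms.
rise_in_points = 50 - 40                      # 10 percentage points
rise_in_per_cent = (50 - 40) / 40 * 100       # 25 per cent
```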

Australia’s major polls are of randomly selected samples large enough to have a relatively low margin of error — about plus or minus (±) 3 percentage points or less — and a high confidence level. As one columnist summarised:


...surveys of that size have about a 2.5 per cent [percentage point] error margin with a 95 per cent confidence level. That means 19 times out of 20 their result is within 2.5 per cent [percentage points] of the correct figure for the Australian voting population. So when ACNielsen finds 51 per cent Coalition support, it means it is between 48.5 per cent and 53.5 per cent. Probably.

It is this possible variation in the results (the spread of 48.5 to 53.5) that highlights the need for caution when interpreting poll results. For example, where a poll with a margin of error of ±3 percentage points puts support for a party at 51 per cent, it is not accurate to claim that “more than 50 per cent of voters” support that party, because the true level of support could lie anywhere from 48 per cent (minus 3 percentage points) to 54 per cent (plus 3 points).
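As a rough illustration of how to read such a result (again, an assumed sketch rather than anything from the article itself), the snippet below converts a headline figure and its margin of error into the range of plausible values:

```python
# Illustrative sketch only: turning a headline poll figure and its margin of
# error into the range of results it is actually consistent with.
def support_range(result, moe):
    """Plausible range for true support, given a result and a margin of error
    (both expressed in percentage points)."""
    return result - moe, result + moe

# A 51 per cent result with a +/- 3 point margin of error is consistent with
# anything from 48 to 54 per cent, so "more than 50 per cent" cannot be claimed.
print(support_range(51.0, 3.0))   # (48.0, 54.0)

# The ACNielsen example quoted above: 51 per cent with a +/- 2.5 point margin.
print(support_range(51.0, 2.5))   # (48.5, 53.5)
```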

Unfortunately, only rarely do the media highlight this limitation. As one observer notes: “Editors don’t let the statistics get in the way of a good headline”.


Article edited by Ian Miller.

This is an edited version of a Parliamentary Library Research Note.  Views expressed in this Research Note are those of the author and do not necessarily reflect those of the Information and Research Services and are not to be attributed to the Parliamentary Library. Research Notes provide concise analytical briefings on issues of interest to Senators and Members. As such they may not canvass all of the key issues. The full text can be found here.




About the Author

Sarah Miskin is a researcher in the Politics and Public Administration Group at the Australian Parliamentary Library’s Information and Research Services.

Related Links
Parliamentary Library