
Lies, damned lies and opinion polls: some essential background

By Sarah Miskin - posted Wednesday, 23 June 2004


Today, the public is polled from many different angles on a wide range of issues. Results are highlighted in newspaper, magazine and television reports. Polling methods vary from questioning randomly selected respondents in telephone interviews to tallying the numbers of self-selected respondents who call in, or click a response button on a web page. Poll results are widely regarded as an accurate gauge of the public’s mood. Apparent alterations to policy after results are published have led to accusations that today’s politicians are opinion poll-driven rather than policy-driven.

Some of the most keenly watched polls, especially in the months before an election, are those on party support, leadership and political issues. In fact, concern about the impact of such polls on voting behaviour has led some countries to ban the publication of polls immediately before elections, even though the evidence that polls sway vote choice is slim. Between elections, opinion polls are used to assess party leadership and policy proposals. Parties may remove leaders who are unpopular in the polls, even if the leaders are popular with their party colleagues.

However, despite the emphasis that the media and, arguably, politicians place on poll results, an important question is whether opinion polls, in fact, tell us anything useful.


“Off the top of the head” replies

An American academic who specialises in polling, James Fishkin, criticises “ordinary” polls on the grounds that they measure only “off the top of the head” responses to questions to which respondents have given little thought. He claims the only useful poll is a “deliberative poll”, in which respondents are taken aside (often for a day or a weekend), exposed to the complexity of an issue, and then asked for their considered opinions. Deliberative polls may answer criticisms such as those eloquently summarised by a New York Times commentator who wrote of being polled on the 2003 war in Iraq:

Please don’t call and ask me about this war. Don’t ask if I strongly approve or partly approve or strongly disapprove … [especially when I feel] gung-ho at breakfast time, heartsick by lunch hour, angry at supper, all played out by bedtime and disembodied in the middle of the night when I wake up to check the cable news scrolls.

Reporting essential details

The Australian Press Council has guidelines outlining the details that should be published in opinion poll reports. These include: the identity of the poll sponsor (if any) and the name of the polling organisation, the question wording, the sample size and method, the population from which the sample was drawn, and which of the results are based on only part of the sample (for example, male respondents). The council also suggests that reports include how and where the interviews were held as well as the date of the interviews.

It notes that space reasons may restrict the number of details that are published, but it argues that, where a poll has a “marked political content”, “more information is needed”. It adds: “The public needs to be able to judge properly the value of the poll being reported.”

Polling methods and pitfalls

Some of the details listed above are essential to understanding a poll, especially whether its results are useful. The margin of error (or sampling error) is an oft-overlooked part of polling that can have significant effects on the utility of results, especially those that are within a few percentage points of one another. [Note the difference between ‘per cent’ and ‘percentage point’. An increase from 40 per cent to 50 per cent, for example, is not an increase of 10 per cent (10 per cent of 40 is four, which would take the initial figure to 44 per cent); it is an increase of 10 percentage points. This is a common reporting error.] Generally, if respondents are selected at random and are sufficiently numerous, then their answers will deviate only slightly from those that would have been given if every eligible voter had been polled. The margin of error is the maximum likely difference between the poll result and that of the voting population at large.
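
The arithmetic behind the figures that follow is the standard normal approximation for a sampled proportion. Here is a minimal sketch in Python, assuming simple random sampling and a 95 per cent confidence level; the sample sizes are illustrative rather than those of any particular poll:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """Half-width of a 95 per cent confidence interval for a
        proportion, using the normal approximation; p = 0.5 is the
        worst case and is the convention for quoted poll margins."""
        return 100 * z * math.sqrt(p * (1 - p) / n)  # in percentage points

    print(f"{margin_of_error(1000):.1f}")  # about 3.1 points
    print(f"{margin_of_error(1500):.1f}")  # about 2.5 points

A sample of about 1,000 respondents yields the "about plus or minus 3 percentage points" mentioned below, and a sample of about 1,500 yields a 2.5-point margin.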

Australia’s major polls are of randomly selected samples large enough to have a relatively low margin of error — about plus or minus (±) 3 percentage points or less — and a high confidence level. As one columnist summarised:


...surveys of that size have about a 2.5 per cent [percentage point] error margin with a 95 per cent confidence level. That means 19 times out of 20 their result is within 2.5 per cent [percentage points] of the correct figure for the Australian voting population. So when ACNielsen finds 51 per cent Coalition support, it means it is between 48.5 per cent and 53.5 per cent. Probably.

It is the possible variation in the results — the spread of 48.5 to 53.5 — that highlights the need for caution when interpreting poll results. For example, where a result is given as 51 per cent support for a party in a poll with a ± 3 percentage points margin of error, it is not accurate to claim that “more than 50 per cent of voters” support that particular party because the actual “support” result ranges from 48 per cent support (that is, minus 3 percentage points) to 54 per cent support (plus 3 points).
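
Turning a headline figure and a margin of error into the interval, and testing the "more than 50 per cent" claim, is mechanical. Continuing the sketch above with the numbers from this example:

    result, moe = 51.0, 3.0  # headline result and margin of error, in points
    low, high = result - moe, result + moe
    print(f"Support lies somewhere between {low}% and {high}%. Probably.")
    print(low > 50)  # False: the interval straddles 50, so the claim fails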

Unfortunately, only rarely do the media highlight this limitation. As one observer notes: “Editors don’t let the statistics get in the way of a good headline”.

Explaining disparate results

Several factors may contribute to different poll results. It may be that the pollsters use different calculations to "weight" their samples to reflect the population. Or, one polling organisation may exclude from its calculations the responses of those who do not answer or who say they "don’t know" while another may allocate them according to the respondent’s political "leaning".

Both approaches may create differences between poll results and election results. Discarding the "uncommitted" responses and recomputing the percentages based on definite answers assumes that the undecided will cast their votes as the more committed voters do. Urging "uncommitted" respondents to select a party, or assigning these responses on the basis of party identification or "leaning", assumes that undecided voters will "come home" and vote for that party at an election.
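
To see how much the treatment of uncommitted respondents can matter, here is a minimal sketch in which every number is invented for illustration: 1,000 interviews, 170 "don't knows", and an assumed 60/40 split towards Labor among those with a "leaning":

    coalition, labor, undecided = 430, 400, 170  # invented raw responses
    n = coalition + labor + undecided            # 1,000 interviews in total

    # Approach 1: discard the "don't knows" and recompute on definite answers.
    definite = coalition + labor
    print(f"Discarded: Coalition {coalition / definite:.1%}, "
          f"Labor {labor / definite:.1%}")

    # Approach 2: allocate the "don't knows" according to stated leaning,
    # here assumed to favour Labor 60/40.
    coalition_adj = coalition + 0.4 * undecided
    labor_adj = labor + 0.6 * undecided
    print(f"Allocated: Coalition {coalition_adj / n:.1%}, "
          f"Labor {labor_adj / n:.1%}")

The same raw interviews put the Coalition ahead (51.8 to 48.2) under the first method and Labor ahead (50.2 to 49.8) under the second, which is one reason two polls taken at the same time can tell different stories.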

Party identification

A problem with urging those who "don’t know" to nominate a party is the assumption that all voters identify with a party strongly enough to vote for it at an election. However, election specialist Professor Ian McAllister has shown that, although most voters still identify with a party, more now have no party attachment or are less attached to their party than previously. McAllister’s figures show that the number of voters who do not identify with a major party has increased from 5 per cent of respondents in 1987 to 15 per cent of respondents in 2001. In addition, the strength of party identification has declined substantially: in 1979, 34 per cent of respondents had "very strong" identification with their party; in 2001, only 18 per cent had such a strong attachment.

Thus, it cannot be assumed that ‘don’t know’ respondents will vote for the party they lean towards or the party for which they voted previously. Assigning them on this basis could distort poll results vis-à-vis election results.

Some additional pitfalls

Other factors that may affect poll results can be discussed in the context of the 2001 pre-election polls. A week before the election, Newspoll had the Coalition at 45 per cent and Labor at 39.5 per cent while Morgan had the Coalition at 38.5 per cent and Labor at 43.5 per cent. Newspoll’s figures were close to the election outcome (Coalition 42.7 per cent; Labor 37.8 per cent). This is not to argue for one pollster over the other.  In fact, a recent academic comparison found that, over the longer term, election betting was a better predictor of election results than opinion polls.

How then can we account for the disparity between polls on the same issue? Ultimately, it is impossible to explain with certainty, although several factors may contribute, including those discussed above.

Other factors, such as different timing of interviews or different question wording, do not help here because both polls asked the same thing at the same time. The polls use different methods to gather data, which may have some effect — Newspoll uses telephone interviews while Morgan uses face-to-face interviews. Those in the polling industry disagree as to how the different techniques affect results, and academics, too, are undecided on this issue.

Morgan’s executive chairman, Gary Morgan, claims that the electorate changed its mind in the last week, and notes that re-interviews after the election showed that 20 per cent had changed their minds in the last days of the campaign. Morgan highlights the Tampa crisis and the September 11 terrorist attacks in the United States as turning points in the fortunes of the Coalition government.  

Election analyst Antony Green had predicted that asylum seekers and security would decide the 2001 election, despite polls showing that health, education and the economy were the top issues in voters’ minds. This suggests that the important poll results were not those on the importance of issues per se, but those on the party preferred to handle the issues at the forefront of the campaign — immigration and defence. On these issues, the Coalition scored higher than Labor.

Thus, the prominence of an issue at the time, as well as the perceived party differential on that issue, may have more effect on how people cast their votes on polling day than the general ranking of issues.

The nation’s pulse

None of the factors mentioned above offers a definitive explanation of the different poll results before the 2001 election. For example, Morgan’s claim that voters changed their minds in the last week does not account for Newspoll’s accuracy a week before the election. In fact, Murray Goot notes that the Newspoll and ACNielsen polls in the last week showed little sign of movement, and concludes that Morgan’s explanation “is not plausible and is not supported by other polls”.

Perhaps the most important point about opinion polls is that polling is not an exact science. In the words of American humorist E. B. White:

The so-called science of poll-taking is not a science at all but a mere necromancy. People are unpredictable by nature, and although you can take a nation’s pulse, you can’t be sure that the nation hasn’t just run up a flight of stairs.


Article edited by Ian Miller.

This is an edited version of a Parliamentary Library Research Note.  Views expressed in this Research Note are those of the author and do not necessarily reflect those of the Information and Research Services and are not to be attributed to the Parliamentary Library. Research Notes provide concise analytical briefings on issues of interest to Senators and Members. As such they may not canvass all of the key issues. The full text can be found here.




About the Author

Sarah Miskin is a researcher in the Politics and Public Administration Group at the Australian Parliamentary Library’s Information and Research Services.

Related Links
Parliamentary Library