Saturday, September 24, 2016

Field Guide To Opinion Pollsters: 45th Parliament Edition

Just before the 2013 election I posted a Field Guide to Opinion Pollsters, which has become one of the more enduringly accessed pieces on this site.  However, over time parts of its content have become dated or specific to that election, and with more and more pollsters emerging as others disappear, the thing has got too long.  I've decided therefore from now that I will post a new edition shortly into the life of each parliament, editing it through that parliament as the need arises.  Pollsters not expected to be active in the life of the current parliament will be removed, but the old edition text will remain on the previous page.

There are a lot of polls about in Australia these days.  But how do they all work, which ones have runs on the board and which ones can you trust the most? This article describes what is known about each pollster and its strengths and weaknesses and includes coverage of general polling issues.

The gold standard of success for an opinion pollster is getting its election-time polls as close to right as possible.  However, some pollsters are little-tested against actual elections, and getting a specific election right is a combination of skill and luck.  In elections where there is a swing on the last day or two of the campaign, a pollster that is actually not polling correctly may have its errors cancelled out by the swing, and hence record a lucky hit.  There is more to being a good pollster than just getting it right at election time - a good pollster should also provide useful data between elections and do so using well-designed questions that are easy to interpret.  And a pollster should also present their data in a way that makes sense and isn't misleading or confusing.

Some Common Themes

There are some general issues that affect a number of pollsters that I should go through before I move onto individual pollsters.  If you just want to look up a given pollster, scroll down, and then you can scroll back to this bit if you see something you want to look up; it might be here.

House Effect

The issue variously called lean, house effect, skew or bias refers to the tendency of a pollster to produce results that are better for one major party or other (or for specific minor parties) than what is likely to be the real situation.  The term "bias" is a poor one for this issue because it carries connotations of the pollster themselves liking one party more than the other or intending to assist one side, but there is no evidence that this is actually true of any major pollster in Australia.  The extent to which the house effects for each pollster are stable, or change in response to slight methods changes or political circumstances, is often a subject of debate.

Bouncing

The issue often referred to as bouncing, but more technically as overdispersal or underdispersal, refers to how much a poll tends to move about from sample to sample even if voting intention isn't changing very much.  A given poll has a maximum margin of error based on its sample size, meaning that in theory 95% of the poll's results (once adjusted for the pollster's house effect) will be within that margin of error of the true value, but most of them will be much closer to the true value than that.  As the sample size increases, the maximum margin of error decreases, but the decrease isn't proportional.  For instance, for a 50-50 result from a sample size of 1000, the margin of error is +/- 3.1%, but for a sample of 5000 it is about +/- 1.4%, meaning that national polls with sample sizes above a few thousand are usually not worth the effort of producing them.  In practice, some polls tend to vary from sample to sample by much more than would be randomly expected, and these polls are bouncy or overdispersed.  Some polls are very static (sometimes even when voting intention actually changes sharply), and these are underdispersed.
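The margin-of-error figures above follow from the standard formula for a 95% confidence interval on a proportion; here is a quick Python sketch (nothing pollster-specific is assumed):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Maximum margin of error (95% level, in percentage points) for a
    proportion p estimated from a simple random sample of size n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# For a 50-50 result, multiplying the sample size by five does not
# divide the error by five:
print(round(margin_of_error(0.5, 1000), 1))  # -> 3.1
print(round(margin_of_error(0.5, 5000), 1))  # -> 1.4
```

The error shrinks with the square root of the sample size, which is why very large national samples give diminishing returns.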

In theory underdispersal is nice, because a pollster wants to accurately reflect the national mood rather than releasing polls that are wrong by several points.  No one wants to issue a rogue poll that everyone then ignores.  But a poll that is underdispersed may in some cases be so because it is slow to pick up major shifts when they occur, or indeed doesn't pick them up fully at all.  There is also the problem that there is no way to make a poll under-disperse when using truly random sampling from the entire Australian population, so if a pollster's results are very steady the question must be asked: how are they doing it?  Is it really a pure and random poll, or is the pollster allowing data from other pollsters to influence the way they fine-tune assumptions that create the final outcome?  (The latter practice is known as herding.)  Other possibilities include that underdispersed pollsters are using tracking from their own poll or other modelling assumptions to chop rough edges off their results, or surveying the same respondents too often.

Mobile and Landline Phone Polls vs Online Polling

No major Australian pollster only polls landlines.

In the lead-up to the 2013 federal election it was widely argued that the rising proportion of mobile-phone-only households (which contain mostly young voters) meant that landline-only polling skewed to the Coalition.  Yet at that election there was no such skew, and not much difference in performance between landline-only phone polling and polls that called mobiles. The most accurate final poll at that election polled landlines only.  Partly this was because unrepresentativeness in landline-only polling can be overcome by scaling (see below) and partly this was because the political attributes of landline and non-landline households seem to not be as different as might be expected.  See Christian Kerr's report of Newspoll surveying.

The 2013 election, at least, supported the view that purely online-panel pollsters have bigger problems to contend with than landline-only pollsters (and again the Newspoll study above is relevant).  Online panel polling, whatever its recruitment method, may have biases that cannot be removed, because online respondents are people who like filling out surveys (often in return for rewards) and are comfortable with technology.  Not everyone is like that, and it is a difficult trait to predict from demographic attributes alone, and one that may skew measured political opinion.

In the 2013-6 parliament, landline-only polling entirely disappeared from the federal scene.  All major pollsters now either call mobiles as well, or have included some other kind of surveying (such as online panel surveying) as part of their sampling mix.

Scaling

Getting a truly random sample of the Australian national population is difficult.  Some types of voters are simply much easier to contact than others.  One option is to keep contacting potential respondents until you get exactly the right demographic mix.  However this can introduce time delays and increase the costs of polling if you are using phone polling.  Another option is to "scale" the responses you have by applying corrections based on which group you have less of in your poll than others.  For instance, suppose that voters in age group A are 10% of the voting population but only 5% of your sample, while voters in age group B are 25% of the voting population but 30% of your sample.  A simple scaling method would then be to double the value of each response from group A and multiply the value of each response from group B by 25/30.   In practice, scaling is much more complicated and a given response might be scaled on many different criteria at once, some of which might increase its weighting and others of which might decrease it.
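The simple scaling method above can be sketched in a few lines of Python; the age groups and shares are the made-up ones from the example, not real data:

```python
# Illustrative post-stratification weighting using the invented age
# groups from the example above.
population_share = {"A": 0.10, "B": 0.25}   # share of the voting population
sample_share     = {"A": 0.05, "B": 0.30}   # share of the raw sample

# Each group's weight is its population share divided by its sample share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

print(weights["A"])            # -> 2.0 (each group A response counts double)
print(round(weights["B"], 3))  # -> 0.833 (i.e. 25/30)
```

Real pollsters weight on several criteria at once (age, gender, location and so on), so a single response can be scaled up on one criterion and down on another.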

Scaling effectively increases the margin of error and bounciness of a poll, because any sampling error in a small group that is scaled up could be magnified in the overall total.  There is also a risk that if a demographic group is hard to poll, then the voters who can be polled within that group might not be a fair sample, and that any error caused by that might then be magnified.  For instance, young voters are hard to reach using landline polling, excepting those living with their parents.  But are young voters who live with their parents representative of all young voters?

Some areas of Australia are simply very difficult to poll accurately by any method.  The Northern Territory is one of them.  Inner city electorates are also hard to poll because of high rates of enrolment churn and non-enrolment in the electorate.

Internal and External

Many prominent pollsters conduct both "public polls" and "commissioned polls".  A public poll is a poll either conducted by the pollster themselves without anyone paying for it, or commissioned by a major media source, for which full details of results are usually publicly released.  Although there is the potential in theory for a media source's bias towards a party to result in it hiring a pollster to present results in a good light for that party, there is really no evidence that this happens in Australia.

Commissioned or internal polls are polls paid for by a political party or by a group with an interest in an issue (such as a lobby group or company).  Commissioned polls usually ask standard voting intention questions, but it is the choice of the client whether to release results, and it is common for internal polling to be only selectively released (an increasing problem with robo-polling reducing polling costs).  Often the full details of commissioned polls are not released.

Some companies produce excellent public polling while also accepting commissioned polls in which the questions are asked in a way more likely to get the result the client wants.  Often the client wants a poll that shows strong support for their cause so that they can then get more publicity for their cause and attempt to convince politicians that it is popular.

Just because a pollster does good public polling does not mean their commissioned polls should always be trusted.  As a general rule no commissioned poll reported in media should be taken all that seriously, whatever the pollster, without the full release of the verbatim wording of all the questions in the order asked, and an extensive breakdown of results.  Even with these things, the wording of the questions often turns out to be suspect. Even if there is nothing wrong with the commissioned poll at all, there is still the possibility of selective release of good polls while not releasing the bad ones.  Furthermore, the accuracy of internal polling is prone to morale bias: some parties could be more likely to hire companies that tend to tell them what they want to hear, even when it actually isn't true.

Upfront Exclusion

This term refers to the proportion of voters who are eliminated from results because they either cannot specify a preference, refuse to answer the question, or fail to complete the survey interview.  For most pollsters this proportion is slight to moderate (a few per cent to sometimes around 10%).  In theory if undecided voters had a tendency to shift to a particular party, this could make polls very inaccurate, but there is not much evidence that this issue has bitten in recent elections.  Generally, the higher the upfront exclusion rate, the more chance that those voters who do reply are not representative, but this seems to become a serious problem only with polls that upfront-exclude over 10%.

The Green Vote

Most pollsters have a recent track record of usually or always overestimating the Green vote compared to actual election results, especially when the Green vote is fairly high.  An especially stark example was the 2014 Victorian state election, in which all 17 polls published in the two months before the election had the party's vote too high, by up to eight points.  Part of the reason for this is that the Green vote is actually very soft; there may be other reasons.  Small and new pollsters, and pollsters with high undecided rates, are especially prone to this problem.   Polling of "others" and "independents" is often also inaccurate.  Smaller parties tend to be under-polled if they are not specifically named, while the category "independents" tends to over-perform in polling compared to election results.  Voters may offer "independent" as an ambit wish for a good high-profile independent candidate, but they won't vote for one if one isn't on the ballot.

Preferred Prime Minister

Preferred/Better Prime Minister or Premier polling questions are a bugbear of Australian poll commentary, which would probably be more informed if such questions did not exist.

Given that Australian politics is so presidential and that the personal approval/disapproval ratings of the Prime Minister are a driving indicator of change in the 2PP vote, it might be expected that a question about who should be Prime Minister would yield good information.  It frequently doesn't.  For whatever reason (and it seems to have something to do with the don't-know option), the preferred leader scores of most major pollsters flatter the incumbent.  For instance, in Newspoll, if the two parties are tied on the 2PP vote, and their leaders are tied on personal ratings, then the Prime Minister will typically lead the Opposition Leader by 16 points as Preferred Prime Minister.  This skewing leads to the media talking about fairly big PPM leads for incumbent PMs as evidence of dominance when they are not, or small PPM leads or deficits as evidence that the government still has an ace up its sleeve when in fact they are evidence of trouble.  See Why Preferred Prime Minister/Premier Scores are Rubbish.

The only pollsters that seem to avoid this are ReachTEL (see below) and Morgan SMS.

2PP Preferencing: Last Election vs Respondent Allocated

Most pollsters publish two-party preferred results that are based on the assumption that voters who do not vote for the major parties will distribute their preferences in the same way as at the last election.  Many pollsters who do this try to calculate the preference flow for the Greens (and often a few other named parties) separately from other parties, but some (eg Ipsos) use "overall preference flow" which assumes that the average flow from all non-major-party voters will stay the same (even if the proportion of them who vote for the Greens changes.)
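As an illustration of the last-election method, here is a minimal Python sketch; all primary votes and preference flows below are invented for the example, not drawn from any real poll or election:

```python
# Hypothetical primary votes from a poll (percentages).
primaries = {"Coalition": 42.0, "Labor": 36.0, "Greens": 10.0, "Others": 12.0}

# Hypothetical share of each minor grouping's preferences flowing to
# Labor, as measured at the previous election.
flow_to_labor = {"Greens": 0.82, "Others": 0.50}

# Labor's 2PP = Labor primary plus its share of each minor flow.
labor_2pp = primaries["Labor"] + sum(
    primaries[party] * flow for party, flow in flow_to_labor.items()
)

print(round(labor_2pp, 1))        # -> 50.2 (Labor)
print(round(100 - labor_2pp, 1))  # -> 49.8 (Coalition)
```

An "overall preference flow" method collapses the separate Greens and Others flows into a single average rate, which goes wrong if the mix of minor party support changes between elections.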

Some pollsters, however, use respondent-allocated preferences, ie they ask the respondent how they will distribute their preferences.  One problem with this is that many voters will actually follow what their party's how-to-vote card says rather than decide for themselves. In any case this method has a history of strong Labor skew and is generally less accurate.

The 2016 federal election reinforced the superiority of last-election preferences, following some recent cases (2013 federal, 2014 Victorian) where the truth was somewhere between the two.  In the 2015 Queensland election, last-election preferences proved very inaccurate and it's likely respondent-allocated preferences would have been more predictive for that election, and will be so for some other such elections with very large swings.  In the 2015 NSW state election the most conservative estimates of respondent-allocated preferences were accurate.  It seems that voter choice about preferencing makes more difference in optional-preferential voting (which now exists only in NSW and the NT) than compulsory, because voters can choose to exhaust their vote.

For a detailed technical discussion at federal level see Wonk Central: The Track Record Of Last-Election Preferences.

Single Polls vs Aggregates

No matter how good a pollster is, no single poll series will consistently give a perfect and reliable picture of voting intention.  Aggregating polling from multiple polls to get a picture of the whole range of voting intention is usually more reliable than assuming any one poll or poll series is accurate.  If you just have one poll saying 52-48, you do not know for sure the leading party is in front.  If you have five with an average of 51.5-48.5, all taken at the same time and without significant house effects, you have a much better idea that the leading party really is in front.

Many people make the mistake of saying that if all the polls are within their margin of error of 50-50 then the race is as good as a tie.  Generally, this isn't true.
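One way to see why: five simultaneous polls of 1000 behave roughly like a single pooled sample of 5000 (ignoring house effects), and the pooled margin of error is small enough to separate 51.5 from 50.  A Python sketch using the standard formula:

```python
import math

def moe(p, n, z=1.96):
    """Maximum margin of error (95% level, percentage points)."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# One poll of 1000 saying 52-48: the error margin straddles 50-50.
single = moe(0.5, 1000)
# Five simultaneous polls of 1000 each averaging 51.5-48.5 pool to
# roughly one sample of 5000:
pooled = moe(0.5, 5000)

print(round(single, 1))  # -> 3.1: 52 - 3.1 < 50, so a tie can't be ruled out
print(round(pooled, 1))  # -> 1.4: 51.5 - 1.4 > 50, so the lead is probably real
```

This is only a sketch: real aggregation also has to handle house effects, differing sample sizes and poll ages, which is where aggregators' assumptions diverge.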

Different poll aggregates will give slightly different values at any given time because of the nature of the different assumptions made by those running them.  Such issues as what weight to place on given polls based on their past record, how quickly to assume a poll has ceased to be relevant and what the hell to do about Essential are not easy and different modellers will try different assumptions, and then modify them when elections provide more data.

A list of active polling aggregators is given at the base of this article.

Poll Fraud

Poll fraud occurs when a pollster simply makes up numbers, which means it can produce a "poll" without needing to spend time or money surveying anyone.  Poll fraud can be detected by various methods, including results that fail non-randomness tests in their fine detail.  Poll fraud is a problem at times in the USA.  No poll fraud in public polling has been detected in Australia to my knowledge.

The pollsters

Newspoll

Newspoll, house pollster for The Australian, is Australia's best-known polling brand and the one that most seems to influence political debate, election betting market moves, and public comment about party standing.   

Between election campaigns it normally polls fortnightly, but sometimes the schedule is adjusted to respond to current events, to coincide with a new parliamentary week, or to avoid long weekends.  Also the contracted schedule is actually not quite fortnightly, so sometimes there is a three-week break for no obvious reason.  The day of release (either Monday or Tuesday, with first figures becoming known about 10 pm the previous night) is also varied, mainly for the first reason.

Until July 2015, Newspoll was a telephone pollster that dialled randomly selected numbers and only called landlines.  In July 2015 the Newspoll brand was transferred away from the company previously running it (which was dissolved, with some key staff moving to start Omnipoll).  Now, Newspoll is operated by Galaxy (see below) and is a hybrid pollster using a combination of online panel polling (a la Essential) and robopolling (a la ReachTEL).  The robopolling is of landlines only, but the online polling will reach respondents who do not have landlines.

The Newspoll brand has a long history, going back to late 1985, and has asked questions in a very stable form, making it an excellent poll for past data comparisons, although how much these are affected by the mid-2015 methods change remains to be seen.  The brand has a predictive record at state and federal elections that is second to none, despite a fairly bad final 2PP figure in 2004 (as a result of a shortlived and incorrect 2PP estimation method).  The new Newspoll has performed very well at its first electoral test, including a stunningly accurate final poll.  However, far too much attention is still paid to poll-to-poll moves in Newspoll without considering the pattern from other polls.  One behavioural change following the switch to Galaxy is that Newspoll seems to have become less bouncy.

An often-discussed aspect of the old Newspoll was its upfront exclusion rates and I wrote a detailed article about that here.  Newspoll also attracts a massive volume of online conspiracy theories, most of them left-wing and virtually all of them groundless and daft.  Reading a full #Newspoll Twitter feed on a given poll night may cause permanent brain damage, and at least 90% of tweets that mention "Newspoll" and "Murdoch" together are rubbish.

A recent source of silly Newspoll conspiracy theories has been the pollster's habit of hibernating for several weeks over summer.  Historically Newspoll has always taken at least four weeks off between polls over the Christmas and New Year periods, usually at least five and in some cases eight or more.  Also, Newspoll is more likely to take long breaks shortly after an election.  In 2011, Newspoll did not emerge until the first weekend in February.  In 2008, it polled once in late January (its first poll since the election of the Rudd Government) and then took another four weeks off.

Galaxy

Galaxy Research has been conducting federal polling since the 2004 federal election.  Galaxy's federal polling was formerly conducted mainly by random telephone surveying but its polls now use a mix of phone polling (including of mobile phones) and online panel polling.  Galaxy appears sporadically between elections and is the house pollster for a string of News Limited tabloids.  It is polling less frequently in its own name following its large deal to run the Newspoll brand.

Galaxy has a formidable predictive record and is an uncannily steady (underdispersed) poll.  Earlier in its career it appeared to produce slightly Coalition-leaning results between elections, but the lean would go away during the campaign.  There is a sharp contrast with Galaxy's specific issue/attribute questions, which (presumably at the behest of sponsoring media) frequently use murky and provocatively subjective language and are often difficult to make accurate sense of.

Galaxy sometimes uses other polling methods.  For instance it has been using automated phone polling (robopolling) in seat polls. At the 2016 election, these polls were notably underdispersed - they were not only much less variable than the actual results, but less variable than they would have been expected to be even if there was no difference between seats.

Galaxy was in my view the best pollster of the 2013 federal election campaign and lead-up, and the Galaxy/Newspoll stable shared this honour again for 2016.

ReachTEL

ReachTEL is the most commonly encountered "robopoll" and is now regularly used by Channel Seven and various newspapers.  It is by far the most commonly commissioned poll.  A robopoll conducts automatic phone calls to randomly selected landline and mobile phones, and respondents pick answers according to options stated by a recorded voice. Robopolls are cheap to run and can be conducted very quickly, but have the disadvantage that more voters will hang up on them immediately.  Therefore they require a lot of scaling, which in theory increases the chance of errors.  

ReachTEL has now established itself as a reliable and accurate national and state-level pollster, being among the top few pollsters at all elections in the last few years, including being the best statewide pollster at the 2015 NSW state election and the most accurate pollster of primary votes in Queensland in 2015.  Its electorate-level public federal polling has sometimes been skewed to the Coalition.

ReachTEL forces answers to some questions, often disallowing an undecided option and requiring the respondent to choose one option or the other.  This results in preferred Prime Minister figures that are often closer to the national two-party vote than those of other pollsters.  It also produces much worse ratings for the government on issues questions.  The suggestion is that there are many people who have slightly negative views of the government but will let it off with a neutral rating unless forced.  Forcing can cause voters to hang up but the company advises me that the percentage of hangups after the first question is very small.

ReachTEL leadership performance ratings use a middle option of "satisfactory" which seems to capture some mildly positive sentiment.  For this reason ReachTEL ratings when expressed in the form Good vs Poor seem harsher than those of other pollsters.  Lately the gap between ReachTEL and other ratings in this regard seems to be closing.

Cases of ReachTEL calling voters who do not live in the electorate being surveyed are often reported.  It appears this is a product of a small error rate but a very large number of total calls.

Fairfax Ipsos

Ipsos is a global polling brand with a good reputation.  Fairfax Ipsos started in late 2014 and is a live phone poll that samples both landlines and mobiles and operates on a similar scale and frequency to the former Fairfax Nielsen polls. Initially Ipsos appeared to lean somewhat to the Coalition but this seems to have abated.  The poll's biggest issue is that it persistently has the Green vote much too high.  It is also somewhat bouncier than other national polls, largely because of its smaller sample sizes.  At elections so far it has tended to perform well on the 2PP vote but not so well on the primaries. Prior to the start of the Fairfax polling, Ipsos conducted some other polls under the name Ipsos i-view.

Essential Report

Essential Report is a weekly online poll and the house pollster for the Crikey website's subscriber newsletter.  Essential's respondents are selected from a panel of around 100,000 voters, and about 1000 are polled each week, by sending out an email offering small shopping credit rewards for participation.  Unusually, Essential publishes rolling results that are the sum of each week's poll and the last week's poll.  The purpose of this strategy is to reduce bouncing and the impact of brief kneejerk reactions on the poll.
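Essential's rolling figure can be sketched as a simple two-week average; this assumes equal weekly sample sizes (an approximation) and the numbers below are invented:

```python
# Single-week 2PP readings (hypothetical, percentages).
weekly = [50.0, 53.0, 50.0, 52.0]

# Each published figure pools the current week with the previous week.
rolling = [(prev + curr) / 2 for prev, curr in zip(weekly, weekly[1:])]

print(rolling)  # -> [51.5, 51.5, 51.0]: the bumpy weekly numbers are smoothed
```

The trade-off is that each published number lags: a genuine shift in week two only shows up at half strength until the following week's release.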

In its very early days Essential was a very bouncy and Labor-skewed poll that was pretty much useless, but it made changes at some stage in 2010 and delivered a good result at that year's election.  However, the poll still seems to have some problems.  It too is underdispersed (see Essential: Not Bouncy Enough), but in a way that seems to cause it to become "stuck" and to respond slowly and incompletely to big changes in voting intention, as compared to other pollsters.  Quite why this is so is not entirely clear - it could be to do with the composition of the panel or with repeat sampling issues within it (against which some precautions are taken).  Essential also sometimes displays a very different trend pattern to other pollsters.  Its performance in the 2013 election leadup was idiosyncratic.  At the 2016 election it produced an impressive final-week poll but doubts remain about its tracking behaviour.

Essential asks a very wide range of useful attribute and issue based questions that often help to drill down into the reasons why voters have specific attitudes, that in turn underlie their votes.  These are sometimes marred by high don't-know rates, which are an inescapable problem with online polling formats.

Roy Morgan

Roy Morgan Research is a polling house that traces its lineage back to Morgan Gallup polls conducted from the early 1940s.  The very experienced pollster was formerly the house pollster for The Bulletin magazine (which no longer exists), and suffered badly when it predicted a Labor win in 2001.  Now unattached to any specific media, Morgan is not as much discussed as other pollsters, but the lack of media attachment is not the only reason for that.  Morgan's polling is confusing and unreliable, often not sufficiently documented, and its reputation among poll-watchers has declined in recent years.

Various forms of Morgan polls are seen including the following:

* SMS only mobile phone polling (mainly used for state polls, also for some issues polling)
* Telephone polls (mainly used for leadership polling)
* Multi-mode polls (most recently a mixture of face-to-face surveying and SMS polling)

Other combinations of multi-mode polling have been seen in the past, and at one stage Morgan used to issue a lot of pure face-to-face polls, which skewed heavily to Labor.  Morgan also usually uses respondent-allocated preferencing, which can also create skew to the ALP.  The pollster has recently displayed severe skew to the Greens in its primary votes, and some of its local panels may be unrepresentative.  The small sample size of its state polls of the smaller states is another problem - Tasmanian samples are sometimes reported in the media, but with a sample size of around 300, why bother?

Morgan's multi-mode polls that include a face-to-face component have often skewed to Labor, but skewed to the Coalition for a while after Malcolm Turnbull first became Prime Minister.

Morgan polls seem to be very reactive to "news cycle" events.  SMS sampling (apparently drawn from a panel rather than random selection from the whole population) is probably too prone to "motivated response", with responses from voters who have strong views about the issues of the day being overrepresented in the results. My view is that SMS is a suspect polling method.

In the leadup to the 2016 election, Morgan issued many seat-by-seat results, frequently based on tiny sample sizes and often accompanied by unsound interpretation.  The pollster also stopped releasing national polling in the last month of the campaign, making it impossible to benchmark its performance for the future, and its future polling intentions are unclear.  Finally, in recent state elections Morgan's SMS polling has been absurdly volatile.

As a general rule, Morgan polls should be treated with a lot of caution at all times.

Lonergan Research

Lonergan is another robopollster that has fairly recently moved into public polling (and has also done a few internal polls for the Greens and other left-leaning entities).

Lonergan had a poor 2013 campaign with its seat polls showing a massive Coalition skew and a commissioned mobile-phone-only poll proving very inaccurate (perhaps because its sample size was too small).  Its final 2016 federal poll, however, was quite accurate despite being taken nearly two months before the election.  Its NSW state election polls showed skew to the Coalition and results from its commissioned seat polls have been mixed.

Lonergan initially attracted criticism for scaling results to voter reports of how they voted at previous elections. Some voters may not report their voting behaviour at previous elections accurately, and may over-report voting for the winner, as a result of which polling becomes skewed towards the other side.  I am not aware of the poll still employing this method.

JWS Research

JWS Research is another relatively recent robopollster.  It conducted a massive marginal-seat poll at the 2010 federal election with indifferent predictive results on a seat basis (but an excellent overall 2PP estimate) and a similar exercise at the 2010 Victorian state election with excellent results.  In the 2010-3 term it was notable for a string of aggregated marginal seat mega-polls, including some jointly commissioned by AFR and a Liberal Party linked strategic advice/lobbying firm called ECG Advisory Solutions.  These polls were often blighted by the release of clearly unsound seat projections based on them, but that is not the fault of the data.  JWS also conducted many local-level seat polls at the 2013 campaign.  Electorate-level polls released by JWS during the 2013 campaign showed a strong general lean to the Coalition.  It is likely that the series of aggregated marginal polls experienced the same issue.

In the 2013-6 cycle JWS kept a lower profile, but it releases very thorough and useful issues polling every four months in an omnibus called True Issues.

AMR Australia
See previous edition.

MediaReach

MediaReach is another IVR pollster (robopollster) that is reported as being owned by a firm with several years' experience in the field.  It has done, for example, state polling of WA and the NT and an electorate poll of Mackellar.  At both the Northern Territory election and the election for the seat of Solomon, MediaReach overestimated the large swing against the CLP by about five points.

Metapoll

Metapoll is a new online pollster sometimes published in the Guardian.  It is also the author of a deluxe polling aggregate that initially included its own unpublished data, though this was later removed except in the area of preferencing.

Research Now

Research Now is an online panel pollster similar to Essential.  It has produced a fair amount of mostly commissioned issues polling but does not seem to have published any voting intentions polling prior to elections, so its accuracy in Australia is unknown.

Community Engagement

Community Engagement produced one national commissioned poll and some commissioned seat polls at the 2016 election.  Documentation is so inadequate that it is not even clear what kind of pollster it is.  Early results were not accurate.


EMRS

EMRS is a Tasmanian pollster that has surveyed state and federal voting intention in Tasmania since the late 1990s, and sometimes does commissioned voting polls inside and outside the state.  It is best known for quarterly state voting intention polling.  It is a phone pollster calling landlines, formerly on a random basis but I believe it has since become a panel pollster.

EMRS is sometimes a difficult pollster to make sense of because its undecided rates are much higher than for other pollsters, and this applies even after the initial prodding of unsure voters to say which party they are leaning to.  (Antony Green's partial defence of the company's high undecided rates here was refuted here).  At past state elections the pollster has tended to overestimate the Green vote and underestimate the Labor vote by a few points each time because of this.  A Labor majority was more or less written off (except by psephologists) in the leadup to the 2006 state poll on this basis, but it was the eventual result.

Nielsen (No longer active)

See previous edition.


Others will be added here as I come across them or on request.

Online or TV News "Polls": They're Useless!

Ah, but what about those polls on newspaper websites or Yahoo that give you the option of voting for a stance on some hot-button issue?  What about those TV news polls that ask you to call a number for yes or a different number for no?

The short answer is that these are not real polls.  They are opt-ins and they are not scientifically valid as evidence of voter intentions.  For one thing, as regularly noted in the fine print, they only reflect the views of those who choose to participate.  If a media source tends to be read more by right-wing voters, then its opt-in polls will tend to be voted on more by right-wing voters.

Secondly, opt-ins suffer from "motivated response".  People who care deeply about an issue will vote on them, but people who really don't have a strong view (but might answer a question put in a real poll that they've agreed to spend time on) will probably not bother.

Thirdly, opt-ins are prone to co-ordinated stacking.  Activist networks will send messages by email or social media telling people there is a media poll they can vote in, and this will often lead to votes being cast from way outside the area to which the poll relates.  Opt-ins are easily flooded by this method, producing very skewed results.

Finally, opt-ins are often prone to deliberate multiple voting by single voters, either by people with strong views on an issue who want to manipulate the outcome or by people who want to ruin them just because the results are taken far too seriously.  There are ways to try to stop it, but some of them work better than others. (See in this regard the brilliant work of Ubermotive and also see the guide to how to stop it here.)

It is especially unfortunate that the ABC's Lateline employs "polls" of this kind.  They should know better.

I hope this guide is useful; feedback is very welcome.

Poll Quality Reviews

The following pieces on this site have compared the performance of different polls at a specific election:

2016 Federal Election: Best And Worst Pollsters
New South Wales 2015
Queensland 2015
Victoria 2014
2013 Federal Election: Best And Worst Pollsters 

Polling aggregators

* My own, in the sidebar of this site (methods post here).  This is a relatively quick model, aggregating 2PP results using published 2PPs and primaries, and designed for fast updating as new polls come out.  It includes adjustments for accuracy and house effect.

* Bludgertrack.  This is the best known aggregator.  It incorporates state-level polling data to predict seat tallies and recorded an extremely accurate seat and 2PP projection at the 2013 federal election.  It derives its 2PP figures from adjusted primary figures rather than aggregating released 2PPs.

* Andrew Catsaras formerly did the Poll of Polls segment on ABC's Insiders and now and then posts his aggregate, which provides a monthly rounded 2PP figure and now also primary estimates.

Several other aggregators operated during the 2016 election cycle and links and comments on them will be added if they resurface.  New aggregators may also be added.

Friday, September 23, 2016

Federal 2PP Aggregate Methods (45th Parliament)

This is the methods post for the 45th Parliament edition of the aggregate (sometimes nicknamed "Aggedor") that I post in the sidebar of this site, and which will form the basis for Poll Roundup posts and, later, my attempts to forecast the next election.

The current version is essentially the same as the version running at the end of the 44th parliament, with some minor changes to the weightings.  One substantial methods change was made half-way through the 44th parliament, which was to switch from just using the 2PP figure supplied by most pollsters, to using a hybrid of the supplied figure and a figure calculated from their primaries.

A simpler version ran before the 2013 election and fluked getting the 2PP exactly right as a result of a preference shift cancelling out a possible late swing to the Coalition.  The 2013-6 version had a final error of 0.4 points, almost half of which resulted from slight shifts in preference flow patterns.

The aggregate is mostly a weighted average of two-party preferred polling derived from all recent polls of sufficient standard.  The 2PP figure assigned to each poll is multiplied by various weightings based on the poll's recency, accuracy and other issues, and the sum of the multiplied poll scores is divided by the sum of the weightings.
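As a very rough sketch (in Python, with names of my own invention, not any code actually used for this site), the weighted-average calculation described above amounts to:

```python
def aggregate(polls):
    """Weighted average of 2PP readings.

    Each poll is a (two_pp, weight) pair, where the weight is the product
    of the recency, quality and other factors described in the sections
    below.
    """
    total_weight = sum(weight for _, weight in polls)
    return sum(two_pp * weight for two_pp, weight in polls) / total_weight

# Two hypothetical polls: 51.0 at weight 5 (recent), 50.0 at weight 3 (older)
print(aggregate([(51.0, 5), (50.0, 3)]))  # → 50.625
```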

The aggregate is designed to be transparently checkable in theory and to use basic mathematics only.  However, it is not entirely codified in advance.  Decisions will be made on issues of pollster weighting and house effects, and possibly other matters, and will be logged at the bottom of this page when made.

Pollsters Included:

Polls by the following pollsters are included when available: Galaxy, Newspoll, ReachTEL, Ipsos, Essential, Morgan.  Other pollsters may be added subject to their completion of three polls of national 2PP voting intention by the same methods, and adequate methods documentation, and previously active polls may be reincluded.  Even if a poll is by an included pollster, the poll is not included if it is an "internal poll" (defined as a poll commissioned by a political party, union, business or lobby group).  Polls that are known to include aggregations of other polls are rejected.

2PP calculations:

For every poll, a 2PP is calculated from the primaries, using last-election preferences.  For instance,

Coalition 2PP = Coalition Primary + .181*Green Primary + .508*Others Primary - 0.13

or

Coalition 2PP = Coalition Primary + .181*Green Primary + .398*NXT Primary + .505*One Nation Primary + .529*Others Primary - 0.13

(The formula used depends on whether the poll publishes breakdowns for NXT and One Nation or not.  The -0.13 is an adjustment for the impact of three-cornered Liberal/National contests).
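For illustration only, the two formulas above can be expressed as follows (Python; the function name and interface are my own invention, and the coefficients are simply those quoted above):

```python
def coalition_2pp(coalition, green, others, nxt=None, one_nation=None):
    """Estimate the Coalition last-election 2PP from primaries (in points).

    The detailed formula is used when the poll publishes NXT and One Nation
    breakdowns; otherwise those votes are assumed to sit within Others.
    The -0.13 adjusts for three-cornered Liberal/National contests.
    """
    if nxt is not None and one_nation is not None:
        return (coalition + 0.181 * green + 0.398 * nxt
                + 0.505 * one_nation + 0.529 * others - 0.13)
    return coalition + 0.181 * green + 0.508 * others - 0.13

# Hypothetical poll with no NXT/One Nation breakdown:
# Coalition 42, Green 10, Others 13
print(round(coalition_2pp(42, 10, 13), 1))  # → 50.3
```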

If the poll does not release a last-election 2PP then the above figure alone is used, rounded to one decimal place.  This also applies when the pollster is using figures from the election before last (eg polls using 2013 preferences in July-August 2016), and I reserve the right to apply it if a pollster's 2PP calculations are persistently suspicious (in which case I will log it on this page).

If the poll releases a last-election 2PP but the primaries have a greater level of detail, then the above figure alone (rounded to one decimal place) is used, provided that it rounds to the same 2PP.  If it doesn't then the closest figure that so rounds is used.

For other polls the 2PP that is used is the average of:

1. The last-election 2PP released by the pollster
2. The calculated 2PP based on the released primaries (formula above).

This is rounded to one decimal place, with the constraint that the maximum allowed difference between the number thus found and the pollster's released 2PP (prior to applying any house effect correction) is 0.2 for Morgan, and 0.4 for all remaining pollsters.  So if a Newspoll released 2PP is 51, the aggregated 2PP will be in the range 50.6-51.4.

Ipsos uses "batched" last-election 2PPs which assume preferences from all minor parties will flow the same way overall, even if the ratio of Greens to other minor parties changes.  For these I use a weighting of 40% for the last-election 2PP and 60% for the calculated 2PP.
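Putting the blending rules together, a sketch of the figure aggregated for a poll that releases both a last-election 2PP and primaries might look like this (illustrative Python only, not the method's actual implementation; I have assumed for simplicity that the cap also applies to Ipsos):

```python
def blended_2pp(released, calculated, pollster):
    """Blend a pollster's released last-election 2PP with the 2PP
    calculated from its primaries, subject to the caps described above."""
    if pollster == "Ipsos":
        # Batched preferences: 40% released, 60% calculated
        blended = 0.4 * released + 0.6 * calculated
    else:
        blended = (released + calculated) / 2
    cap = 0.2 if pollster == "Morgan" else 0.4
    # Constrain to within the cap of the released figure
    blended = max(released - cap, min(released + cap, blended))
    return round(blended, 1)

# A Newspoll released at 51 whose primaries imply 51.9 is capped at 51.4:
print(blended_2pp(51, 51.9, "Newspoll"))  # → 51.4
```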

Time weighting:

The aggregate is designed to be fairly cautious in adjusting to apparent changes, in case these are actually caused by sample noise. The base time weighting is 5 for a poll released this week, 3 for last week, 2 for the week before last, 1 for the week before that.   At present a new polling week is considered to start from Saturday with the reset being applied from midnight Friday.  Intermediate weightings of 4 for a poll bridging this and last week, 2.5 for a poll bridging one and two weeks ago, 1.5 for 2-3 weeks ago, 0.5 for 3-4 weeks ago, are used in the case of polls that use rolling data for two weeks (a la Essential) or that derive their data entirely from the middle or end of the previous week (eg ReachTEL often polls on Thursday.)

So for instance in the typical week the new Essential carries a time weighting of 4 when it is released, but this is reset to 2.5 when the next polling week starts.  A ReachTEL released on Thursday or Friday based on polling on Wednesday or Thursday is weighted at 5 but this is reset to 4 when the next polling week starts.

During election campaigns, in the last three weeks a weighting of 8 is applied for polls based entirely on data sampled (not released) in the last four days.  In the last week a weighting of 10 is applied for polls based entirely on data sampled (not released) in the last two days.  Only the most recent poll by any pollster can qualify for these weightings.
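The base and bridging weights above might be tabulated like this (a sketch with invented names; the campaign-period boosts of 8 and 10 are omitted for simplicity):

```python
BASE_WEIGHTS = {0: 5, 1: 3, 2: 2, 3: 1}          # whole polling weeks ago
BRIDGE_WEIGHTS = {0: 4, 1: 2.5, 2: 1.5, 3: 0.5}  # polls bridging two weeks

def time_weight(weeks_ago, bridging=False):
    """Base time weighting by polling weeks since release.

    Bridging polls (rolling fortnight data, or fieldwork from the middle
    or end of the previous week) take the intermediate values.  Polls
    older than four weeks drop out entirely.
    """
    table = BRIDGE_WEIGHTS if bridging else BASE_WEIGHTS
    return table.get(weeks_ago, 0)

# A rolling-fortnight Essential weights 4 on release, then 2.5 when the
# next polling week starts:
print(time_weight(0, bridging=True), time_weight(1, bridging=True))  # → 4 2.5
```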

Number of polls included:

Only the two most recent polls by any pollster are included.  In the case of Essential, the most recent non-overlapping polls are counted.  This normally means the polls included are the most recent and the one from two weeks before.

Quality weighting:

Pollsters are weighted based on my view of their performance - not just in final polls but through campaign cycles and between campaigns.  I reward polls for consistently using the same methods with good results at multiple elections, and for being the best pollster at the most recent election.  Polls can be marked down for irregular tracking behaviour, for consistently having the vote of a particular party too high, for changing methods or not conducting polls very close to an election, and so on.  Based on experience in the previous cycle, I have increased the spread of ratings for this one.

The maximum weighting for a poll that has appeared seven or fewer times is 0.8.

The current quality weightings are:

1.35 Galaxy, Newspoll
1.2 ReachTEL
0.9 Ipsos
0.75 Essential
0.4 Morgan

If only the 2PP is released for a poll and not the primaries then the quality weighting is halved.

For my full review of pollster performance at the 2016 election see here and for the 2013 election see here.

Respondent vs Last-Election Preferences:

Most recent elections have shown that last-election preferences are the most reliable method, while respondent preferences skew to Labor and create needlessly volatile poll readings.  The 2016 election was no exception.  Therefore this aggregate ignores respondent preferences.

House Effect Correction:

A house effect correction is applied to poll types by a specific pollster when those polls are considered to be displaying a house effect in favour of a particular party.  This can either be a systematic long-term feature of that poll series or can reflect a pollster developing a sustained medium-term difference from the tracking shown by other polls.  The minimum house effect correction is 1 point, so if a pollster is suspected of having a house effect smaller than that, it is treated under the Global House Effect Correction (see below).

The current house effects applied are:

Morgan multi-mode (SMS / face to face): +1 added to Coalition

Global House Effect Correction:

A global house effect correction of up to a point is applied to the aggregate if it is considered that the pollsters included in it (after the adjustments above) tend on average to lean to one side or the other.  This is partly determined by considering the performance of polls at a range of election campaigns, with federal elections considered most important but state elections also taken into account.  Short-term changes in house effect in specific polls are also considered, as is the frequency of the polls included: if polls that lean slightly to one party are more common or more heavily weighted than those leaning slightly to the other, then this is corrected for.

The current global house effect correction is +0.4 to Coalition because both Newspoll and Essential appeared to be generally Labor-leaning in the previous term, although not in their final polls.  This will be reduced or removed when more polls are active.

Sample size:

No scaled adjustment is made for sample size at present (since most reliable national polls use broadly similar sample sizes which they increase at election time), except that a poll with a sample size of less than 950 automatically has its value in the aggregate halved.

Unusual polls:

From time to time a poll is available that includes data from around half the Australian voter base.  Examples are a Galaxy poll of only female voters, and a ReachTEL poll of only NSW and Victorian voters. In these cases a national figure is estimated based on known differences between the surveyed and non-surveyed components, and the poll is aggregated at half its normal value.  However such polls by pollsters without well-established national 2PP tracking (eg JWS Research bulk polls of dozens of "marginal seats") are not included.  Single-state polls of federal voting intention are also not included.

New poll types:

A new poll type by an established pollster may be either included immediately, or excluded until three polls by that type have been conducted.  The new poll type is always excluded until three polls by that type are conducted if it appears to have a substantial house effect.

Example tracking:

The following is an example of tracking for the current parliament, four weeks after the condition for using the aggregate was first met:

Labor has led throughout this period (and by 2016 preferences on every poll released since the election), although by my assessment this lead is relatively narrow.  The aggregate more heavily weights Newspoll than Essential and assumes both usually lean slightly to Labor based on form in the previous parliament, although neither did so in their final poll.

Here is the final output of the smoothed aggregate for the previous parliament:

Seat Projections and Interpretation:

Formal seat projections will be added in the leadup to the next election.

For the time being, while the Mackerras pendulum has Labor requiring a 50.7% 2PP based on uniform swing to win more seats than the Coalition at the next election, it really shouldn't be that hard.  Labor has a massive advantage in personal votes arising from its many seat wins at the 2016 election.  This may well be blunted by redistributions (or if any of the new MPs don't recontest), but for now I estimate Labor needs a 2PP of only 50.1% for a 50% chance of winning more seats than the Coalition (ignoring any possible crossbench changes).  It is similar to the situation at the 2001 election, at which a 1.93% swing in its favour gave the Coalition not the eight seat gains implied by the pendulum, but only two.


Smoothing:

A smoothed version of the aggregate is used as a basis for graphs.  The smoothed version for a given week is calculated by the following formula:

(current reading*2 + previous reading + next reading)/4

or where the next reading has not happened yet:

(current reading*2.5 + previous reading*1.5)/4
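As a small worked sketch of the two formulas above (Python; the names are mine):

```python
def smoothed(previous, current, nxt=None):
    """Smoothed weekly reading per the two formulas above."""
    if nxt is None:  # the next reading has not happened yet
        return (current * 2.5 + previous * 1.5) / 4
    return (current * 2 + previous + nxt) / 4

print(smoothed(50.0, 51.0, 50.5))  # → 50.625
print(smoothed(50.0, 52.0))        # → 51.25
```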

Other Aggregators:

Several aggregators operated during the 2016 election cycle, some of which have since been discontinued.  Links to good-quality aggregators, doubtless including BludgerTrack, will be added here as they emerge from hiding.  Dodgy ones may be linked as well if they are high-profile.

Conditions for operation:

The aggregate is suspended (frozen at its existing value) at any time at which the number of individual polls included in it falls below three.  When the number falls below five a cautionary note is added.

Mandatory reset events (all existing polls discarded and the aggregate restarts after three new polls) are (i) a federal election for the full House of Representatives is held (ii) there is a change of Prime Minister.  Resets may be applied in other cases but this will only occur, if at all, with great reluctance!

Log of adjustment and methods changes:

All changes made to the pollster-specific and global house effects, and any other changes made, will be listed here.

Thursday, September 22, 2016

Federal Election 2016: Best And Worst Pollsters

It's been a long time coming but the recent finalisation of the 2016 House of Reps election results means it's time to present my in-depth review of how the various pollsters did.  At the 2013 election there was a widespread belief that the polls might be totally wrong, but it turned out they were accurate.  Since that election there has been a massive turnover in Australian polling methods and companies (such that only two pollsters went to this election doing the same thing as last time) and there were more reasons for concern, but the miracle has continued.  Australian national opinion polls have again proved highly accurate.  However, the picture in individual seat polling was not such a pretty story.

As usual I will present my awards in three categories.  This article is quite numbery of course, and is rated 3/5 on the Wonk Factor scale.

Tuesday, September 20, 2016

Proposed Anti-Discrimination Changes And "Don't Mess With Marriage"

The Tasmanian Government has introduced the Anti-Discrimination Amendment Bill 2016.  This differs little from a draft version that was circulated for public comment, but a notable change is that the proposed addition of a reasonableness clause has been dropped.

This is the second consecutive government to try to amend the fabric of free speech in the state.  The previous Labor/Green government introduced particularly extreme changes which were fortunately thrown out by the Legislative Council.  The latest proposals are nowhere near as concerning but they still raise some serious issues about the fair and clear application of the law to a range of differing beliefs.

There are two main backgrounds to the proposed changes.  The first is the complaint by Martine Delaney against the Catholic Church over the circulation of a booklet entitled Don't Mess With Marriage, a modestly worded but in places highly insulting defence of supposed church creed against same-sex marriage.  The complaint attracted high-profile attention and at times was frothed about in the opinion sections of the Australian on a more or less daily basis.  In fact, all that happened was the complaint was sent to conciliation as quite clearly required by the law, and we never found out whether the booklet actually breached the law because the complaint was dropped after the conciliation stage.

Sunday, September 18, 2016

2016 House Of Reps Figures Finalised

I'm not sure exactly when this happened, but sometime in the last two weeks while I have been away overseas, the AEC has released the final detailed set of federal election results.  Although all House of Representatives seat results were already known, the release of final information on two-party preference flows and preference distributions is very useful for many things, including assessing the performance of polls.  Final results are here.  This article is a general roundup of various details, and soon I will be using the figures to conduct an in-depth review of polling accuracy at the election, and after that to start a new polling aggregate model.  There is quite a backlog of other articles that I want to write, so I hope I can get through as many of these as possible in the next three weeks before another round of fieldwork.

The final two-party preferred result was 50.36% to the Coalition and 49.64% to the ALP.  This represented a clear success for the last-election method of preference prediction, which would have predicted a 2PP of 50.53% for the Coalition based on the primaries actually cast.  The respondent-preferences method (on average across polls using it) expected a shift in preferencing large enough to shift the 2PP result by at least 0.6 points (ie Labor would have won the 2PP).  This continues the superior track record of last-election preferences, and I will continue to treat respondent preferences with caution.

Thursday, September 1, 2016

EMRS: Liberals Rebound, But Still Shaky

EMRS August 2016: Liberal 41 Labor 31 Green 15 "Independent" 11 Others 2
Interpretation: Liberal 42.5 Labor 33.5 Green 12 Others 12
Seat projection on this poll: Liberals would be slightly favoured to just retain majority (approx 13-10-2, with 12-10-3 next most likely)
No seats are projected to fourth parties/independents as no prominent fourth parties/independents are yet known to be running

Aggregate of all recent polling 12-10-3 (no majority)

Another EMRS poll of state voting intentions is out.  Also see the useful trend tracker.  After dipping to its lowest level for many years in the July poll, the Hodgman Liberal government has rebounded to a position similar to that in the May poll.  This is consistent with the July poll result having been contaminated by the Liberals' woeful performance in Tasmania at the federal election, and not too much should be read into the four-point shift in this poll for that reason.

Saturday, August 27, 2016

NT Election 2016: How Badly Will The CLP Be Beaten?

Final result 18 ALP 2 CLP 5 IND.  Result will be declared and winners seated subject to any challenges.

Because of fieldwork I have not had much time to follow the Northern Territory election, which is being counted tonight.  I'll use this thread to post comments on any aspect of the count of interest either while the count is going or in the following days.  The ABC has some very comprehensive coverage (and will have more on News 24) and there is also coverage and discussion at The Tally Room and Poll Bludger.