Sunday, August 11, 2013

A Field Guide To Australian Opinion Pollsters

As of September 2016 this is an archived old edition that is no longer being updated.  For the 45th Parliament (2016-) edition go here.


Introduction

There are a lot of polls about in Australia these days.  But how do they all work, which ones have runs on the board and which ones can you trust the most? 

With an increased number of pollsters now polling in Australia, it's been suggested that an article describing what is known of each pollster and its strengths and weaknesses might be useful to many people.  So I thought I'd put one up, and edit it over time as the need arises.  (Now edited post-2013 election.  Last major edit 30 Nov 2013. Last minor edits 21 June 2016.)

The gold standard for success for an opinion pollster is that its polls at election time get the result as close to right as possible.  However, many of the new pollsters are little-tested against actual elections, and getting a specific election right is a combination of skill and luck.  In elections where there is a swing on the last day or two of the campaign, a pollster that is actually not polling correctly may have its errors cancelled out by the swing, and hence record a lucky hit.  There is more to being a good pollster than just getting it right at election time - a good pollster should also provide useful data between elections and do so using well-designed questions that are easy to interpret.  And a pollster should also present their data in a way that makes sense and isn't misleading or confusing.



Some Common Themes

There are some general issues that affect a number of pollsters that I should go through before I move onto individual pollsters.  If you just want to look up a given pollster, scroll down, and then you can scroll back to this bit if you see something you want to look up; it might be here.

House Effect

The issue variously called lean, house effect, skew or bias refers to the tendency of a pollster to produce results that are better for one major party or other (or for specific minor parties) than what is likely to be the real situation.  The term "bias" is a poor one for this issue because it carries connotations of the pollster themselves liking one party more than the other or intending to assist one side, but there is no evidence that this is actually true of any major pollster in Australia.  The extent to which the house effects for each pollster are stable, or change in response to slight methods changes or political circumstances, is often a subject of debate.

Bouncing

The issue often referred to as bouncing, but more technically as overdispersal or underdispersal, refers to how much a poll tends to move about from sample to sample even if voting intention isn't changing very much.  A given poll has a maximum margin of error based on its sample size, meaning that in theory 95% of the poll's results (once adjusted for the pollster's house effect) will be within that margin of error of the true value, and most of them will be much closer to the true value than that.  As the sample size increases, the maximum margin of error decreases, but the decrease isn't proportional.  For instance, for a 50-50 result from a sample size of 1000, the margin of error is +/- 3.1%, but for a sample of 5000 it is about +/- 1.4%, which is why national polls with sample sizes above a few thousand are usually not worth the effort of producing.  In practice, some polls tend to vary from sample to sample by much more than would be randomly expected; these polls are bouncy, or overdispersed.  Other polls are very static (sometimes even when voting intention actually changes sharply); these are underdispersed.
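The margin-of-error figures above follow from the standard formula for a proportion; here is a minimal sketch in Python (the 1.96 multiplier corresponds to the conventional 95% confidence level):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Diminishing returns as sample size grows, for a 50-50 result:
for n in (1000, 2000, 5000):
    print(n, round(100 * margin_of_error(0.5, n), 1))  # 3.1, 2.2 and 1.4 points
```

The error shrinks with the square root of the sample size, which is why quintupling the sample only roughly halves the margin of error.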

In theory underdispersal is nice, because a pollster wants to accurately reflect the national mood rather than releasing polls that are wrong by several points.  No one wants to issue a rogue poll that everyone then ignores.  But a poll that is underdispersed may in some cases be so because it is slow to pick up major shifts when they occur, or indeed doesn't pick them up fully at all.  There is also the problem that there is no way to make a poll under-disperse when using truly random sampling from the entire Australian population, so if a pollster's results are very steady the question must be asked: how are they doing it?  Is it really a pure and random poll, or is the pollster allowing data from other pollsters to influence the way they fine-tune assumptions that create the final outcome?  (The latter practice is known as herding.)  Other possibilities include that underdispersed pollsters are using tracking from their own poll to chop rough edges off their results, or surveying the same respondents too often.

Mobile and Landline Phone Polls vs Online Polling

A frequent hot potato in polling debates is the theory that the rising proportion of mobile-phone-only households (which contain mostly young voters) produces a skew in landline-only polling results, in the Coalition's favour.  The 2013 federal election debunked this theory (in the Australian context) for the time being, as there was not much difference in performance between the landline-only phone polls and those that called mobiles.  This was partly because unrepresentativeness in landline-only polling can be overcome by scaling (see below), and partly because the political attributes of landline and non-landline households seem to be less different than might be expected.  See Christian Kerr's report of Newspoll surveying.

The 2013 election, at least, supported the view that online-panel pollsters have bigger problems to contend with in this area than landline-only pollsters (and again the Newspoll study above is relevant).  Online panel polling, irrespective of recruitment method, may have biases that cannot be removed, because online respondents are people who like filling out surveys and are comfortable with technology.  Not everyone is like that; this trait is difficult to predict from demographic attributes alone, and it may influence political opinion.  Hence, even with the use of scaling, it may be difficult for online-only pollsters to avoid a skew to the left in their results.  Some online-panel pollsters performed very well in the 2012 US Presidential election, but this may have been because there was a late move in voting intention to Obama.

Following the 2015 election, Newspoll experienced difficulties with rising Greens and Others votes in landline polling.  There has been a general move away from purely landline polling and no currently active national pollster still uses 100% landline polling.

Scaling

Getting a truly random sample of the Australian national population is difficult.  Some types of voters are simply much easier to contact than others.  One option is to keep contacting potential respondents until you get exactly the right demographic mix.  However this can introduce time delays and increase the costs of polling if you are using phone polling.  Another option is to "scale" the responses you have by applying corrections based on which group you have less of in your poll than others.  For instance, suppose that voters in age group A are 10% of the voting population but only 5% of your sample, while voters in age group B are 25% of the voting population but 30% of your sample.  A simple scaling method would then be to double the value of each response from group A and multiply the value of each response from group B by 25/30.   In practice, scaling is much more complicated and a given response might be scaled on many different criteria at once, some of which might increase its weighting and others of which might decrease it.
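The worked example above comes down to a simple weight for each group: population share divided by sample share.  A sketch using the hypothetical groups and percentages from the text:

```python
# Hypothetical shares from the example: group A is under-sampled, B over-sampled
population_share = {"A": 0.10, "B": 0.25}
sample_share = {"A": 0.05, "B": 0.30}

# Each response is weighted by (population share) / (sample share)
weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # A's responses count double; B's count 25/30 as much
```

Real pollsters weight on several criteria simultaneously (a method often called raking), but each individual weight has this basic form.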

Scaling effectively increases the margin of error and bounciness of a poll, because any sampling error in a small group that is scaled up could be magnified in the overall total.  There is also a risk that if a demographic group is hard to poll, then the voters who can be polled within that group might not be a representative sample, and that any error caused by that might then be magnified.  For instance, young voters are hard to reach using landline polling, excepting those living with their parents.  But are young voters who live with their parents politically representative of all young voters? 
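One common way to quantify how much weighting inflates the margin of error is Kish's effective sample size.  The weights below are hypothetical, chosen purely to illustrate the calculation:

```python
# 1000 respondents: 50 up-weighted, 300 down-weighted, 650 unweighted (hypothetical)
weights = [2.0] * 50 + [25 / 30] * 300 + [1.0] * 650

# Kish's formula: n_eff = (sum of weights)^2 / (sum of squared weights)
n_eff = sum(weights) ** 2 / sum(w * w for w in weights)
print(round(n_eff))  # fewer "effective" respondents than the raw 1000
```

The more uneven the weights, the smaller the effective sample size, and hence the larger the real margin of error relative to the nominal one.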

Some areas of Australia are simply very difficult to poll accurately by any method.  The Northern Territory is one of them.

Internal and external

Many prominent pollsters conduct both "public polls" and "commissioned polls".  A public poll is a poll either conducted by the pollster themselves without anyone paying for it, or commissioned by a major media source, for which full details of results are usually publicly released.  Although in theory a media source's partisan leanings could lead it to hire a pollster that presents results in a good light for its preferred party, there is really no evidence that this happens in Australia.

Commissioned or internal polls are polls paid for by a political party or by a group with an interest in an issue (such as a lobby group or company).  Commissioned polls may ask standard voting intention questions, but it is the choice of the client whether to release results, and it is common for internal polling to be only selectively released (an increasing problem with robo-polling reducing polling costs).  Usually the full details of commissioned polls are not released.

Some companies produce excellent public polling while also being willing to engage in commissioned polls in which the questions are asked in a way more likely to get the result the client wants.  Often the client wants a poll that shows strong support for their cause so that they can then get more publicity for their cause and attempt to convince politicians that it is popular.

This article is about public polling only, except where stated otherwise.  Just because a pollster does good public polling does not mean their commissioned polls should be trusted.  As a general rule no commissioned poll reported in media should be taken all that seriously, whatever the pollster, without the full release of the verbatim wording of the questions and an extensive breakdown of results.  Even with these things, the wording of the questions often turns out to be suspect, and even if there is nothing wrong with the commissioned poll at all, there is still the possibility of selective release of good polls while not releasing the bad ones.  Furthermore, even well-done internal polling may be less accurate than public polling, because of morale bias: some parties are more likely to hire companies that say they are doing well (even when they're not) than companies that accurately say when they are doing badly.

Upfront Exclusion

This term refers to the proportion of voters who are eliminated from results because they either cannot specify a preference, refuse to answer the question, or fail to complete the survey interview.  For most pollsters this proportion is slight to moderate (a few per cent, sometimes approaching 10%).  In theory if undecided voters had a tendency to shift to a particular party, this could make polls very inaccurate, but there is not much evidence that this issue has bitten in past elections.  Generally, the higher the upfront exclusion rate, the more chance that those voters who do reply are not representative, but this seems to become a serious problem only with polls that upfront-exclude over 10%.

The Green Vote

Most pollsters have a recent track record of usually or always overestimating the Green vote compared to actual election results, especially when the Green vote is fairly high.  An especially stark example was the 2014 Victorian state election, in which all 17 polls published in the two months before the election had the party's vote too high, one by as much as eight points.  Part of the reason for this is that the Green vote is actually very soft; there may be other reasons.  Small and new pollsters, and pollsters with high undecided rates, are especially prone to this problem.  Newspoll at one stage used to have the reverse problem, but changed the way it asks its questions.  Polling of "others" and "independents" is often also inaccurate.  Specific fourth parties like Palmer United tend to be under-polled if they are not specifically named, while the category "independents" tends to over-perform in polling compared to election results.  Voters may offer "independent" as an ambit wish for a good high-profile independent candidate, but they won't vote for one if one isn't on the ballot.

Preferred Prime Minister

Preferred/Better Prime Minister polling questions are a bugbear of Australian poll commentary, which would probably be more informed if such questions did not exist.  Given that Australian politics is so presidential and that the personal approval/disapproval ratings of the Prime Minister are a driving indicator of change in the 2PP vote, it might be expected that a question about who should be Prime Minister would yield good information.  It frequently doesn't.  For whatever reason (and it seems to have something to do with the don't-know option), the Preferred/Better Prime Minister scores of most major pollsters flatter the incumbent.  For instance, in Newspoll, if the two parties are tied on the 2PP vote, and their leaders are tied on personal ratings, then the Prime Minister will typically lead the Opposition Leader by 16 points as Preferred Prime Minister.  This skewing leads to the media talking about fairly big PPM leads for incumbent PMs as evidence of dominance when they are not, or small PPM leads or deficits as evidence that the government still has an ace up its sleeve when in fact they are evidence of trouble.  See Why Preferred Prime Minister/Premier Scores are Rubbish.

The only pollsters that seem to avoid this are ReachTEL (see below) and Morgan SMS. 

2PP Preferencing: Last Election vs Respondent Allocated

Most pollsters publish two-party preferred results that are based on the assumption that voters who do not vote for the major parties will distribute their preferences in the same way as at the last election.  Many pollsters who do this try to calculate the preference flow for the Greens separately from other parties, but some use "overall preference flow", which assumes that the average flow from all non-major-party voters will stay the same (even if the proportion of them who vote for the Greens changes).
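The last-election method is just a weighted sum of minor-party primaries.  A sketch with made-up primary votes and flow rates (the flows below are hypothetical, not any pollster's actual figures):

```python
# Illustrative primary votes (%); preference flow rates are hypothetical
primaries = {"Coalition": 44.0, "Labor": 36.0, "Greens": 11.0, "Others": 9.0}
flow_to_labor = {"Greens": 0.83, "Others": 0.47}  # share preferencing Labor

labor_2pp = primaries["Labor"] + sum(
    primaries[party] * flow for party, flow in flow_to_labor.items()
)
print(round(labor_2pp, 1), round(100 - labor_2pp, 1))  # 49.4 / 50.6
```

"Overall preference flow" would instead apply a single blended rate to the combined Greens-plus-Others vote, which goes wrong if the mix between them shifts.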

Some pollsters, however, use respondent-allocated preferences, ie they ask the respondent how they will distribute their preferences.  One problem with this is that many voters will actually follow what their party's how-to-vote card says rather than decide for themselves; in any case this method has a history of strong Labor skew and is generally less accurate.

However in the 2013 federal and 2014 Victorian state elections, the truth was somewhere between the two, because the flow of preferences to Labor strengthened.  In the 2015 Queensland election, last-election preferences proved very inaccurate and it's likely respondent-allocated preferences would have been more predictive for that election, and will be for some other such elections with very large swings.  In the 2015 NSW state election the most conservative estimates of respondent-allocated preferences were accurate.  It seems that voter choice about preferencing makes more difference in the optional-preference states, because voters can choose to exhaust their vote.

For a detailed technical discussion at federal level see Wonk Central: The Track Record Of Last-Election Preferences.

Single Polls vs Aggregates

No matter how good a pollster is, no single poll series will consistently give a perfect and reliable picture of voting intention.  Aggregating results from multiple polls is usually more reliable than assuming any one poll or poll series is accurate.  If you have one poll saying 52-48, you do not know for sure that the leading party is in front.  If you have five averaging 51.5-48.5, all taken at the same time and without significant house effects, you have a much better idea that the leading party really is in front.
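The benefit of averaging can be sketched with the same margin-of-error formula: five simultaneous, house-effect-free polls of 1000 behave roughly like one sample of 5000.

```python
import math

def moe(n, p=0.5, z=1.96):
    """95% margin of error for proportion p from a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

single = 100 * moe(1000)      # one poll of 1000: about +/- 3.1 points
pooled = 100 * moe(5 * 1000)  # average of five such polls: about +/- 1.4
print(round(single, 1), round(pooled, 1))
```

This is an idealised case; in practice house effects and differing field dates mean real aggregates need more careful modelling than a straight average.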

Many people make the mistake of saying that if all the polls are within their margin of error of 50-50 then the race is as good as a tie.  Frequently, this isn't true.

Different poll aggregates will give slightly different values at any given time because of the complexity or simplicity of the different assumptions made by those running them.  Such issues as what weight to place on given polls based on their past record, how quickly to assume a poll has ceased to be relevant and what the hell to do about Essential Research are not easy and different modellers will try different assumptions, and then modify them when elections provide more data.

A list of active polling aggregators is given at the base of this article.

Poll Fraud

Poll fraud occurs when a pollster simply makes up numbers, which means it can produce a "poll" without needing to spend time or money surveying anyone.  Poll fraud can be detected by various methods, including results that fail non-randomness tests in their fine detail.  Poll fraud is a problem at times in the USA.  No poll fraud in public polling has been detected in Australia to my knowledge.

The pollsters

Newspoll

Newspoll, house pollster for The Australian, is Australia's best-known polling brand and the one that most seems to influence political debate, election betting market moves, and public comment about party standing.   

Between election campaigns it normally polls fortnightly, but sometimes the schedule is adjusted to respond to current events, to coincide with a new parliamentary week, or to avoid long weekends.  Also the contracted schedule is actually a little short of fortnightly, so sometimes there is a three-week break for no obvious reason.  The day of release (either Monday or Tuesday, with first figures becoming known about 10 pm the previous night) is also varied, mainly for the first reason.

Until July 2015, Newspoll was a telephone pollster that dialled randomly selected numbers and only called landlines.  In July 2015 the Newspoll brand was transferred away from the company previously running it (which was dissolved, with some key staff moving to start Omnipoll).  Now, Newspoll is operated by Galaxy (see below) and is a hybrid pollster using a combination of online panel polling (a la Essential) and robopolling (a la ReachTEL).  The robopolling is of landlines only, but the online polling will reach respondents who do not have landlines.

Newspoll has a long history, going back to late 1985, and has asked questions in a very stable form, making it an excellent poll for past data comparisons, although how much these are affected by the mid-2015 methods change remains to be seen.  The old Newspoll had a predictive record at state and federal elections that was second to none, despite a fairly bad final 2PP figure in 2004 (as a result of a shortlived and incorrect 2PP estimation method).  However, far too much attention was often (and likely will still be) paid to poll-to-poll moves in Newspoll without considering the pattern from other polls.  There was also a history of the poll becoming "bouncy", with a fairly high number of results that were well out of line with the rest in some direction or other.

An often-discussed aspect of the old Newspoll was its upfront exclusion rates and I wrote a detailed article about that here.  Newspoll also attracts a massive volume of online conspiracy theories, most of them left-wing and virtually* all of them groundless and daft.  Reading a full #Newspoll Twitter feed on a given poll night may cause permanent brain damage, and at least 90% of tweets that mention "Newspoll" and "Murdoch" together are rubbish.

A recent source of silly Newspoll conspiracy theories has been the pollster's habit of hibernating for several weeks over summer.  Historically Newspoll has always taken at least four weeks off between polls over the Christmas and New Year periods, usually at least five and in some cases eight or more.  Also, Newspoll is more likely to take long breaks shortly after an election.  In 2011, Newspoll did not emerge until the first weekend in February.  In 2008, it polled once in late January (its first poll since the election of the Rudd Government) and then took another four weeks off.

The old Newspoll was extremely accurate in its final 2013 election poll, but in late 2014 developed a tendency to overestimate third-party (combined Greens and Others) results.

(*In 2015 I was reminded that in 2009 Newspoll released a poll on attitudes to asylum seekers, for which it had voting intention figures but declined to release them, apparently to avoid competing with Nielsen which had a poll out that week. This attracted much criticism as the previous Newspoll had been a rogue, and became known in certain circles, slightly inaccurately, as the "missing Newspoll". There are, however, several other false "missing Newspoll" claims that have been made from time to time.)

Galaxy

Galaxy Research has been conducting federal polling since the 2004 federal election.  Galaxy's federal polling was formerly conducted by random telephone surveying (primarily) but is now a mix of such surveying (including of mobile phones) and online panel polling.  Galaxy appears sporadically between elections and is the house pollster for a string of News Limited tabloids.  It seems to be polling less frequently following its large deal to run the Newspoll brand.

Galaxy has a strong predictive record in its relatively short life and is an uncannily steady (underdispersed) poll, with no evident weaknesses.  Earlier in its career it appeared to produce slightly Coalition-leaning results between elections, but the lean would go away during the campaign.  There is a strong case that Galaxy now deserves to be regarded as the most reliable pollster of basic voting intention in the country, though the uncanny steadiness of its polls often seems too good to be true! Also, there is a sharp contrast with Galaxy's specific issue/attribute questions, which (presumably at the behest of sponsoring media) frequently use murky and provocatively subjective language and are often difficult to make accurate sense of.

Galaxy sometimes uses other polling methods.  For instance it has been using automated phone polling (robopolling) in seat polls.

Galaxy was in my view the best pollster of the 2013 federal election campaign and lead-up.

ReachTEL

ReachTEL is the most commonly encountered "robopoll" and is now regularly used by Channel Seven and various newspapers.  A robopoll conducts automatic phone calls to randomly selected landline and mobile phones, and respondents pick answers according to options stated by a recorded voice. Robopolls are cheap to run and can be conducted very quickly, but have the disadvantage that a lot of voters will hang up on them immediately.  Therefore they require a lot of scaling, which increases the chance of errors.  In the US, robopolls are noted for leaning more to the Republicans than other polls (but not enormously) and are thought to be especially prone to herding.  Probably, a large part of the quality (or otherwise) of any given robopoll lies in its scaling.

ReachTEL polling at the 2013 federal election produced national results broadly similar to other pollsters', but its state- and local-level polling was mixed: sometimes it matched other polls closely, and sometimes it favoured the Coalition by up to 5 points.  For instance, in Tasmanian seats this method underestimated the Labor primary vote by several points.  ReachTEL was among the better pollsters at the 2014 Victorian state election, the best pollster of primary voting intention at the 2015 Queensland state election, and the best statewide pollster full stop at the 2015 NSW state election.

ReachTEL forces answers to some questions, often disallowing an undecided option and requiring the respondent to choose one option or the other.  This results in preferred Prime Minister figures that are often closer to the national two-party vote than those of other pollsters.  It also produces much worse ratings for the government on issues questions.  The suggestion is that there are many people who have slightly negative views of the government but will let it off with a neutral rating unless forced.  Forcing can cause voters to hang up but the company advises me that the percentage of hangups after the first question is very small.

ReachTEL leadership performance ratings use a middle option of "satisfactory" which seems to capture some mildly positive sentiment.  For this reason ReachTEL ratings when expressed in the form Good vs Poor seem harsher than those of other pollsters.  At one stage I found that adding half the "satisfactory" rating to the Good/Very Good score reliably produced readings comparable to the Newspoll scale, but lately the gap between ReachTEL and other ratings seems to be closing.
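The rule of thumb described above amounts to a one-line conversion.  The ratings below are made-up numbers, purely to show the arithmetic:

```python
# Hypothetical ReachTEL-style leader ratings (%)
good_or_very_good, satisfactory, poor_or_very_poor = 30.0, 35.0, 35.0

# The author's one-time heuristic: count half the middle option as positive
newspoll_like_satisfied = good_or_very_good + 0.5 * satisfactory
print(newspoll_like_satisfied)  # 47.5, versus a raw "good" reading of 30
```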


Fairfax Ipsos

Fairfax Ipsos is a new (as of late 2014) phone poll that samples both landlines and mobiles and operates on a similar scale and frequency to the former Fairfax Nielsen polls.  It will take a while for its properties to become apparent but the firm's international record is good. At the 2014 Victorian state election the pollster got the Coalition vote exactly right but overestimated the Green vote and underestimated the Labor vote.  Federal polls and New South Wales polls have suggested the poll initially skewed to the Coalition, but this seems to have reduced with more experience.  However the poll continues to record much higher Green votes than most other pollsters.

Essential Report

Essential Report is a weekly online poll and the house pollster for the Crikey website's subscriber newsletter.  Essential's respondents are selected from a panel of around 100,000 voters, and about 1000 are polled each week, by sending out an email offering small shopping credit rewards for participation.  Unusually, Essential publishes rolling results that are the sum of each week's poll and the last week's poll.  The purpose of this strategy is to reduce bouncing and the impact of brief kneejerk reactions on the poll.

In its early days Essential was a very bouncy and Labor-skewed poll that was pretty much useless, but it made changes at some stage in 2010 and delivered a good result at that year's election.  However, the poll still seems to have some problems.  It too is underdispersed (see Essential: Not Bouncy Enough), but in a way that seems to cause it to become "stuck" and to respond slowly and incompletely to big changes in voting intention, as compared to other pollsters.  Quite why this is so is not entirely clear - it could be to do with the composition of the panel or with repeat sampling issues within it (against which some precautions are taken).  Essential also sometimes displays a very different trend pattern to other pollsters.  Its performance in the 2013 election leadup was idiosyncratic.

Essential asks a very wide range of useful attribute and issue based questions that often help to drill down into the reasons why voters have specific attitudes, which in turn underlie their votes.  However these are sometimes marred by high don't-know rates, which are a problem with online polling formats.

Morgan (Multi-Mode)

Roy Morgan Research is a polling house that traces its lineage back to Morgan Gallup polls conducted from the early 1940s.  The very experienced pollster was formerly the house pollster for The Bulletin magazine (which no longer exists), and suffered badly when it predicted a Labor win in 2001.  Now unattached to any specific media, Morgan is not as much discussed as other pollsters, but the lack of media attachment is not the only reason for that.  The pollster has long persisted with "face to face" polling despite overwhelming evidence that it usually had a major pro-Labor house effect, and does itself no favours by often making overhyped comments about the meanings of its specific results.  Furthermore all Morgan federal polls employ respondent-allocated preferences in which the voter, if selecting a minor party, is asked which party they would preference.  This tends to also produce a lean to Labor.  (Last-election preferences are also usually published.)

Morgan's Multi-Mode method initially consisted of a combination of SMS (mobile phone text), face-to-face, and internet sampling.  The weighting of the three methods in Morgan's large weekly samples was not disclosed.  This form of multi-mode displayed a skew to Labor in the early days of the Rudd return (even when using last-election preferences), but not during the final months of the Gillard government or during the election campaign proper.

Late in the 2013 campaign, Morgan issued two multi-modes that did not include face-to-face polling: one included SMS, online and phone polling, while the other was described only as "Telephone and Internet".  Following the 2013 campaign, Morgan switched to yet another new form of multi-mode that uses only SMS and face-to-face polling.  This polling method clearly skewed slightly to Labor, even when using last-election preferences, while Tony Abbott was Prime Minister.  Following Abbott's replacement with Malcolm Turnbull, it has so far skewed to the Coalition.

Given that no one polling method is perfect, it makes sense to combine a range of them and Morgan broke ground for Australia in this sense.  However, it would be useful to have more detail on the proportional mix of the ingredients and whether this changes.

Morgan Multi-Mode is also bouncy for its sample size, but given that its sample size is large, most observers won't be bothered by this.  It is also not clear how representative its sampling is in small states.

Morgan polls seem to be very reactive to "news cycle" events.  It is possible that their SMS sampling (apparently drawn from a panel rather than random selection from the whole population) is too prone to "motivated response", with responses from voters who have strong views about the issues of the day being overrepresented in the results. My view is that SMS is a suspect polling method.

In the leadup to the 2016 election, Morgan has issued a number of seat-by-seat results, frequently based on ridiculously small sample sizes and often accompanied by unsound conclusions.  The pollster has also displayed severe skew to the Greens and should be treated with extreme caution at all times.

Morgan (Phone)

Morgan also conducts occasional traditional landline phone polling.  Historically, the Morgan phone series used to be very accurate and free of house effects, although the small sample size and lack of full voting intentions data in recent Morgan phone polls makes it hard to judge their current standing.  These days Morgan phone polls are mainly used to poll leadership and issues questions.  The leadership polls tend to have rather small sample sizes, and also tend to produce extreme results compared with those of other pollsters, especially where unpopular leaders are concerned.  They often involve "smorgasbord" polling in which several possible leaders for each party are listed.

Morgan (SMS)

Morgan does state polling and also most of its issues polling by pure SMS (mobile phone text) polling.  Early federal results (including at the 2010 election) for this method were encouraging.  However some state poll results have had implausibly volatile primary figures or Green votes that were much too high, suggesting that this method needs work.  Morgan SMS polls were easily the worst published polling for the 2014 Victorian and 2015 NSW state elections, but performed better in the 2015 Queensland election.  I also have some concerns about this method as applied to issue polls (see near the bottom of this article).  Also, see comments about seat-polling in the multi-mode section.

Ipsos i-view

Ipsos i-view is an online panel poll run by the same pollster as Fairfax Ipsos; it has released a number of polling results, mostly on issue questions.  Results released in April-May 2014, including three federal 2PP readings, were promising, but no further voting intention results have been seen since.

JWS Research

JWS Research is another relatively recent robopollster.  It conducted a massive marginal-seat poll at the last federal election with indifferent predictive results on a seat basis (but an excellent overall 2PP estimate) and a similar exercise at the 2010 Victorian state election with excellent results.  Lately it has been most notable for a string of aggregated marginal seat mega-polls, including some jointly commissioned by the AFR and a Liberal-Party-linked strategic advice/lobbying firm called ECG Advisory Solutions.  These polls have often been blighted by the release of clearly unsound seat projections based on them, but that is not the fault of the data.  JWS also conducted many local-level seat polls during the 2013 campaign.  In general, similar comments apply to JWS as apply to ReachTEL.  Electorate-level polls released by JWS during the 2013 campaign showed a strong general lean to the Coalition, and it is likely that the series of aggregated marginal polls experienced the same issue.

JWS releases thorough issues polling every four months in an omnibus called True Issues.

Lonergan Research

Lonergan is another robopollster that has only recently moved into public polling (and has also done a few internal polls for the Greens and other left-leaning entities).  Although some Lonergan national polls have produced similar results to other pollsters', the few Lonergan seat polls in the 2013 campaign showed a very strong Coalition skew.  Lonergan scales its results to voters' reports of how they voted at previous elections, a method designed to make the poll more accurate.  This attracted some criticism because voters may not recall their past voting behaviour accurately, and tend to over-report voting for the winner; correcting against inflated winner recall then skews the poll towards the other side.
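As a rough illustration, past-vote weighting of the kind described above can be sketched as follows.  This is a hypothetical toy example, not Lonergan's actual implementation; all party shares and respondent counts are invented.

```python
# Hypothetical sketch of past-vote weighting: respondents are reweighted so
# that their recalled vote at the previous election matches the published
# result.  All figures below are invented for illustration.

# Published previous-election primary shares (invented).
actual_result = {"ALP": 0.38, "COA": 0.46, "GRN": 0.10, "OTH": 0.06}

# Recalled previous vote among 1000 respondents; the winner (COA here) is
# over-recalled, a common failure of voter memory.
recalled = {"ALP": 350, "COA": 500, "GRN": 90, "OTH": 60}
n = sum(recalled.values())

# Weight for each past-vote group = actual share / recalled share.
weights = {party: actual_result[party] / (recalled[party] / n)
           for party in recalled}

# Respondents who recall voting for the over-recalled winner count for less
# than one person each; if the recall itself is faulty, this down-weighting
# shifts the published poll towards the other side.
print({party: round(w, 2) for party, w in weights.items()})
```

The danger the critics point to is visible here: the correction assumes recall is honest and accurate, so any systematic over-recall of the winner is "fixed" by pushing the poll the other way.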

Lonergan produced a commissioned mobile-phone-only poll late in the 2013 campaign, which was very inaccurate; it is not clear whether the methods or the much-too-small sample size was to blame. (I suspect both.)  Lonergan produced two commissioned seat polls for the Victorian state election, one of which was fairly accurate and the other not, and two statewide polls for the NSW state election, both of which leaned to the Coalition.

AMR Australia

AMR Australia is a relatively rare player that conducts surveys by online selection in a similar way to Essential.  Its results generally seem to display a slight Labor lean but its final poll result at the 2013 federal election was very accurate.  AMR should not be confused with UMR, which is the ALP's internal pollster.

MediaReach

MediaReach is an IVR pollster (robopollster) that is reported as being owned by a firm with several years' experience in the field.  It has done, for example, state polling of WA and the NT and an electorate poll of Mackellar.  I am not aware of any elections it has been tested at under this name.

Metapoll

Metapoll is a new online pollster whose results are published in the Guardian.  It also maintains a deluxe polling aggregate that initially included its own unpublished data, though this was later removed except in the area of preferencing.

Research Now

Research Now is an online panel pollster similar to Essential.  It has produced a fair amount of mostly commissioned issues polling but does not seem to have published any voting intentions polling prior to elections, so its accuracy in Australia is unknown.

EMRS

EMRS is a Tasmanian pollster that has surveyed state and federal voting intention in Tasmania since the late 1990s, and sometimes does commissioned voting polls inside and outside the state.  It is best known for quarterly state voting intention polling.  It is a phone pollster calling landlines.

EMRS is sometimes a difficult pollster to make sense of because its undecided rates are much higher than for other pollsters, and this applies even after the initial prodding of unsure voters to say which party they are leaning to.  (Antony Green's partial defence of the company's high undecided rates here was refuted here).  At past state elections the pollster has tended to overestimate the Green vote and underestimate the Labor vote by a few points each time because of this.  A Labor majority was more or less written off (except by psephologists) in the leadup to the 2006 state poll on this basis, but it was the eventual result.

It is unclear (as of 2015) whether EMRS is still a fully random sample pollster.

Nielsen (Inactive)

AC Nielsen was the house pollster for the Fairfax press, but ceased its Australian polling in July 2014.  Like Newspoll, Nielsen was a telephone pollster that to my knowledge only called landlines.  Between election campaigns it polled monthly.  Unlike Newspoll, Nielsen released state breakdowns for every poll (the specific breakdowns in each poll were very variable because of small sample size, but over multiple polls they could be useful).  Nielsen's reputation suffered badly at the 2007 election when it conducted surveying for its final poll too early and hence missed a late swing back to the incumbent Howard government.  Also, polling only monthly (except close to elections) made it very vulnerable not just to missing brief ups and downs, but also to having the story of opinion over a few months wrong if a couple of polls were out of kilter.

Nielsen had an even longer history than Newspoll.  Although it took its archive of polls off its website, I have retrieved results from 1996 onwards via the Wayback Machine.  Nielsen's leader approval ratings were noticeably more benign than Newspoll's, so that a leader could appear quite unpopular in Newspoll while recording a mixed result in Nielsen. 

All in all, a Nielsen poll reading was a lot like having an extra Newspoll.  There were differences in lean between the two from time to time, but overall there wasn't a huge difference between them.

Others

Others will be added here as I come across them or on request.

Online or TV News "Polls": They're Useless!

Ah, but what about those polls on newspaper websites or Yahoo that give you the option of voting for a stance on some hot-button issue?  What about those TV news polls that ask you to call a number for yes or a different number for no?

The short answer is that these are not real polls.  They are opt-ins and they are not scientifically valid as evidence of voter intentions.  For one thing, as regularly noted in the fine print, they only reflect the views of those who choose to participate.  If a media source tends to be read more by right-wing voters, then its opt-in polls will tend to be voted on more by right-wing voters.

Secondly, opt-ins suffer from "motivated response".  People who care deeply about an issue will vote on them, but people who really don't have a strong view (but might answer a question put in a real poll that they've agreed to spend time on) will probably not bother.

Thirdly, opt-ins are prone to co-ordinated stacking.  Activist networks will send messages by email or social media telling people there is a media poll they can vote in, and this will often lead to votes being cast from way outside the area to which the poll relates.  Opt-ins are easily flooded by this method, producing very skewed results.

Finally, opt-ins are often prone to deliberate multiple voting by single voters, either by people with strong views on an issue who want to manipulate the outcome or by people who want to ruin such polls precisely because the results are taken far too seriously.  There are ways to try to stop this, some of which work better than others. (See in this regard the brilliant work of Ubermotive, and also see the guide to how to stop it here.)

I hope this guide is useful; feedback is very welcome.

See also: 2013 Federal Election: Best And Worst Pollsters 

Polling aggregators

* My own, in the sidebar of this site (methods post here).  This is a relatively quick model, aggregating 2PP results using published 2PPs and primaries, and designed for fast updating as new polls come out.  It includes adjustment for accuracy and house effect.  A slightly simpler model produced an identical 2PP result to Bludgertrack at the last election.

* Bludgertrack.  This is the best known aggregator.  It incorporates state-level polling data to predict seat tallies and recorded an extremely accurate seat and 2PP projection at the 2013 federal election.  It derives its 2PP figures from adjusted primary figures rather than aggregating released 2PPs.

* Phantom Trend.  This recent arrival is even more mathematically snazzy (check out the How It Works section to see what I mean).  It has sometimes leaned to the Coalition more than other aggregators, presumably because of different assumptions about house effects, and is more volatile than other aggregates.

* Mark the Ballot.  A Bayesian aggregator that assumes the house effects of all included polls sum to zero (which tends to produce a slight lean to Labor compared with aggregators that do not assume this).  It does not include Essential because of its trend-averse tracking behaviour.

* Guardian Poll of Polls.  Similar to the above two.  Some inaccurate data inputs in early 2016 mean that its readings for that time are wrong, but it is otherwise good.

* AFR Poll of Polls.  This aggregator tracks the average of each week's 2PP polling, so each reading is based solely on that week's results and not any previous week's.  This makes it a bouncy aggregate, more prone to poll noise and the vagaries of which pollsters poll in a given week.  Weighting is based on sample size, but no adjustments are made for house effect or skew.  It looks cool, but handle with caution.
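For readers curious about the mechanics, here is a minimal sketch of a 2PP aggregate of the general kind described above: each reading has an assumed house effect subtracted and is then weighted by sample size.  It is not the method of any specific aggregator, and all pollster names, house effects and readings are invented.

```python
# Toy 2PP aggregate: subtract an assumed per-pollster house effect, then take
# a weighted average with weights based on sample size.  All data invented.
import math

# (pollster, ALP 2PP reading, sample size)
polls = [("PollsterA", 52.0, 1400),
         ("PollsterB", 50.5, 1000),
         ("PollsterC", 51.5, 1800)]

# Assumed house effects: points by which each pollster leans to ALP.
house_effect = {"PollsterA": 1.0, "PollsterB": -0.5, "PollsterC": 0.0}

def aggregate(polls, house_effect):
    """Weight each house-effect-adjusted reading by sqrt(sample size)."""
    num = den = 0.0
    for name, alp_2pp, n in polls:
        adjusted = alp_2pp - house_effect[name]
        weight = math.sqrt(n)
        num += weight * adjusted
        den += weight
    return num / den

print(round(aggregate(polls, house_effect), 1))
```

Real aggregators differ mainly in how the house effects and weights are estimated (and in whether they work from published 2PPs or from adjusted primaries), but the weighted-average core is much like this.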

Andrew Catsaras formerly did the Poll of Polls segment on ABC's Insiders and now and then posts his aggregate, which provides a monthly rounded 2PP figure and now primary estimates.

Several other aggregators operated during the 2013 election cycle and links and comments on them will be added if they resurface.  New aggregators may also be added.

6 comments:

  1. Kevin,


When I last checked in mid 2010, there was a new international standard for market, social and opinion research, viz: AS ISO 20252. I noted that Newspoll had this accreditation, but, as far as I could see, Galaxy and Nielsen didn't. These last two made no reference to accreditation on their websites. That told me they weren't up to much. Maybe they have since become accredited.

I think that the Margin of Error (MOE), nowadays quoted when poll results are released, would be believable if the survey sample were random. However, the inevitable large number of non-respondents and the sampling methods (eg. landlines) make the MOE unmeasurable. So these polls can only give us a rough estimate of a prevalence, and shouldn't be complemented with dodgy MOEs.

    I can't see the worth of spending much time studying and comparing the results of opinion polls.




    Leigh Callinan

    Replies
    1. ISO standards - Nielsen at least is listed as accredited here: http://www.amsro.com.au/AMSRO_wp_amsro236/wp-content/uploads/2010/12/List-of-ISO-20252-Companies-20135.pdf (I also found evidence of accreditation for Essential, Morgan and amr). It's not an issue that concerns me greatly because if a given pollster is not "up to much" then that should be revealed over time by persistently inaccurate or suspicious results.

      MOE - I think the article covers the problems with supplied MOEs pretty well. MOE figures supplied by pollsters often are misleading because of house effect (meaning that the distribution is centred about a figure other than the true mean) or scaling (meaning that the sample size is not effectively what it is described as, because some respondents count for more than others). Sampling in practice is only a form of pseudo-randomness that then has to be weighted back towards something more closely resembling the real thing, which introduces further problems, and it could be that the MOE for any particular pollster can only be measured from experience, not perfectly modelled. Nonetheless, for polls that don't have huge house effects, MOE turns out to be a useful practical approximation and is helpful in encouraging people to be aware of the influence of random sample error on results. Essential's methods page includes a fairly frank discussion of the issues and the fact that MOE is an in-theory figure.
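      For what it's worth, the "in-theory" MOE is easy to compute for a simple random sample; the sketch below uses the standard 95% confidence half-width formula, with an invented sample size.

```python
# In-theory margin of error at 95% confidence for a simple random sample.
# This is the figure pollsters usually quote; it ignores house effect,
# non-response and scaling.  The sample size below is invented.
import math

def moe_95(p, n):
    """Half-width of the 95% confidence interval for a proportion p, sample n."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# A 50% reading from 1400 respondents: roughly +/- 2.6 percentage points.
print(round(100 * moe_95(0.5, 1400), 1))
```

      Scaling effectively shrinks the sample size (some respondents count for more than others), so the true uncertainty is somewhat larger than this formula suggests.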

  2. Interesting stuff! What is your opinion of the ABC's Vote Compass survey?

    Replies
    1. Vote Compass is a giant opt-in and not a poll as such, but I have seen that it is using demographic scaling to attempt to "normalise" its output on issues questions and produce a poll-like output. This seems from what I've seen so far to be working reasonably well - the detailed results I've seen generally make sense and are quite interesting. But there will always be some skewing that cannot be eliminated because the survey only gets the results of people who like using computers to answer surveys. For instance, results on IT questions could well be unrepresentative.

      I have some issues with its two-dimensional voter orientation assessment. Firstly, the usual way of doing these things is on a two-way axis of economic liberty vs social liberty. Vote Compass uses an axis called "social liberalism" that seems to be conceptually confused. For instance, on every personal liberty question I took the most libertarian position possible, but I was still classified as much less "socially liberal" than the Greens. It seems that the axis includes views about, for instance, asylum seeker policy under the heading "liberalism", in an attempt to squash all non-economic issues into one axis.

      I also think it evaluates results compared to the Greens too much on the basis of the issues they are making the most noise about at this election, rather than longstanding issues that define the party and are likely to resurface outside the campaign.  In particular, classic development vs conservation conflicts, which aren't front and centre in this campaign, don't really feature in the questions list.

      So I have a few reservations about it but it is producing some useful general results.

  3. A review of the available "aggregate polling" outlets would be a useful addition to the article.

    Replies
    1. Ta. There are not a lot available at the moment (only mine, BludgerTrack, AFR's and Insiders Poll of Polls) but that's the second time I've had that suggestion and I'll add those to the section discussing aggregates and then edit more in as they inevitably emerge towards the next election.


The comment system is unreliable. If you cannot submit comments you can email me a comment (via email link in profile) - email must be entitled: Comment for publication, followed by the name of the article you wish to comment on. Comments are accepted in full or not at all. If you submit a comment which is not accepted within a few days you can also email me and I will check if it has been received.