Wednesday, November 15, 2017

How Accurate Was The Same-Sex Marriage Polling?

The abrogation of responsibility and waste of resources that was the Marriage Law Postal Survey has now concluded with a Yes vote of 61.6%, based on a very high turnout of 79.5%.  Every Australian poll on same-sex marriage in the last ten years has shown more voters supporting same-sex marriage than opposed, with the exception of a single fledgling ReachTEL in 2011, so this has been seen initially as another good result for Australian polls.  In contrast, non-polling "big-data" approaches based on social media analytics failed completely, with the EqualityPulse site mostly favouring the No vote until after the polls had closed and a Griffith University study bombing embarrassingly.  (Be wary of anyone who claims their methods predicted Trump would win - most who predicted him to win did so because they wrongly expected him to win the popular vote.)

This is being taken as another strong result for Australian polling, but the reality is not so snazzy, and more consistent with experience elsewhere.  Below are the final poll results from each pollster that I could find.  (Some pollsters conducted several polls, typically finding little variation through the survey period.) The polls varied in method - some asked respondents who had already voted how they voted, some asked about the voting intentions of those who intended to vote, and some just asked a basic question about support for or opposition to same-sex marriage.  One (Ipsos) appears to have asked about voting intention only among those certain to vote.  In many cases, inadequate public documentation means that it is not entirely clear what the pollster did.

The polls that allowed responses other than Yes and No allowed a range of non-responses.  Morgan seems to have included panellists who didn't respond to the survey invitation at all in this category.  Essential allowed respondents to say they preferred not to say how they had voted.

I've included two different ReachTELs that were reported at around the same time.  I've excluded Coalition for Marriage internal polling because no evidence has been provided concerning the pollster involved or the methods used (other than sample size) and hence I cannot even say for sure that their results were based on polls at all.

Here's the table:

Some summary comments on these results:

* On average, the polls listed slightly overestimated the Yes vote, even though most included an undecided category.  On average the polls substantially underestimated the No vote, in some cases massively.  The net result is an average error of seven points.

* On a two-answer basis, every final poll by a company except one of the two ReachTELs overestimated the Yes result. However, the other ReachTEL (with a larger sample size) had one of the largest underestimates of the No vote on a two-answer basis, and the largest on a raw basis.

* Generally polls that asked people how they had voted were the most accurate, although Essential's use of a "prefer not to say" option may have increased the error in its poll.  Polls that tried to capture intending voters were no more accurate than those that asked about generic support for same-sex marriage, and included the worst misses above.

* The polls listed include one live phone poll (Ipsos) and one mixed online and live phone poll (EMRS), with the rest apparently using robopolling (ReachTEL, Morgan via SMS), online polling (Essential, YouGov) or some mixture of the two (Newspoll).  For some pollsters I don't have method details.  There is not much evidence of consistent methods effects in the results.
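The "two-answer basis" used above can be made concrete with a short sketch: undecided and other non-responses are set aside and the Yes and No shares are rescaled to sum to 100.  The poll figures below are invented for illustration only - the actual result (61.6% Yes) is the only number taken from the survey itself.

```python
def two_answer(yes, no):
    """Rescale raw Yes and No shares to a two-answer basis (summing to 100)."""
    return 100 * yes / (yes + no), 100 * no / (yes + no)

ACTUAL_YES = 61.6  # actual survey result
ACTUAL_NO = 100 - ACTUAL_YES

# Hypothetical final poll: 63% Yes, 30% No, 7% undecided (invented numbers)
poll_yes, poll_no = 63.0, 30.0
ta_yes, ta_no = two_answer(poll_yes, poll_no)

raw_no_error = poll_no - ACTUAL_NO   # raw-basis error on the No vote
ta_yes_error = ta_yes - ACTUAL_YES   # two-answer-basis error on the Yes vote

print(f"Two-answer Yes: {ta_yes:.1f}%")        # 67.7%
print(f"Raw No error: {raw_no_error:+.1f} pts")  # -8.4 (No underestimated)
print(f"Two-answer Yes error: {ta_yes_error:+.1f} pts")
```

Note how the same poll can look better or worse depending on the basis: on raw figures the undecideds soak up much of the gap, while on a two-answer basis the full overestimate of Yes (and mirror-image underestimate of No) is exposed.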

The observed average error of the Australian polls (seven points) is slightly better than in Ireland (nine points) but higher than for US ballot measures (about three points - see previous discussion).

The general reference points all along have been the overseas polling failures in the Brexit referendum and the election of Donald Trump.  The No campaign hoped to replicate these, but the lead held by Yes in polling meant that those situations were not comparable.  In fact, Australian polls on average were wrong by more than the national Trump and Brexit polls were, but because the margin was massive anyway, they will be forgiven.

Polls also tended to overestimate the spread in turnout by age.  For instance, the final Newspoll had 69% of the 18-34 age group having voted, but over 75% actually voted in every component of that age group, with 18-19 year olds voting at a higher rate than any other band below 50 (possibly because less organised voters in that age group aren't even on the roll).

On the other hand, early attempts to project turnout were accurate, more so than I thought they would be.

The difference between the result of this survey and most of the polling shows that even using methods without direct human contact (such as online surveying or robopolling) does not always eliminate bias.  While Newspoll was able to record a very strong final result, the results do raise questions about the ability of the industry overall to correctly project national votes in which social desirability bias is a factor.  This may be an issue for polling future referendums, if the parliament ever again finds the courage to create any.

This said, there were relatively few bad misses, and those polls that missed badly tended to use methods often believed to be suspect on this particular subject (such as Ipsos' live-interviewer phone polls) or believed to be suspect in general (such as Morgan's SMS survey).


  1. I emailed EMRS regarding their poll, and their operations director Samuel Paske replied and told me it was 70% online, 30% phone.