And the next speaker in this ANZCA 2023 session is my colleague Samantha Vilkins, who continues our focus on the Voice to Parliament referendum by examining in particular the role of opinion polling and poll reporting in the Voice referendum campaign. She begins by noting the long period of public debate about the Voice, going back at least to the election of the Albanese government in May 2022, with a much shorter formal campaign period before the referendum date of 14 October 2023.
Opinion polls provided a kind of spine for the coverage of the Voice debate throughout this period: journalists regularly referred to the latest opinion polls, and outlets like Guardian Australia even operated poll trackers bringing together the ten or so different regular opinion polls operating in Australia. This focus on polls is somewhat problematic given the failures in opinion polling in past elections – there have been cases of ‘poll herding’ where raw poll results were adjusted when they seemed too divergent from pollsters’ gut feeling about what the poll results should be, and this has undermined the reliability of Australian opinion polls overall. (In response to such past cases, the Australian Polling Council also published a formal Code of Conduct for pollsters, which only some poll operators have adopted.)
Polls are big business; they require long-term commitment, and generate substantial public attention. This means that the news outlets that commission them will pay considerable attention to them even when they report little meaningful change, and that they therefore have the potential to exert outsized effects on the public imaginary about political trends – not least because they report apparently objective numbers, and because journalistic reporting often ignores the margins of error and other significant limitations that apply to them. This could also end in a feedback loop: more numbers beget more numbers. In particular, the concern is that exposure to poll results influences voter preference: either through a bandwagon or an underdog effect (which might, of course, also cancel each other out), or through cue-taking by particular demographic groups.
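As a rough illustration of the margins of error that poll reporting often omits (this is a generic textbook calculation, not a figure from the talk): for a simple random sample, the 95% margin of error on a reported proportion can be sketched as follows.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of ~1,000 respondents reporting a 50% result:
# the true value could plausibly lie roughly 3 points either side.
print(round(margin_of_error(1000) * 100, 1))  # → 3.1 (percentage points)
```

On these assumptions, a reported shift from, say, 47% to 49% between two such polls sits comfortably within noise – which is precisely why coverage that treats every movement as meaningful can mislead.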
Such patterns may also have been observed in the Voice debate: polling results were reported throughout the campaign, and occasional data errors were also observed. Specific poll results were deliberately operationalised in support of the particular referendum outcomes favoured by various news outlets, and critiques of Voice polling were themselves used as starting-points for further Voice coverage. In a referendum, with its complicated rules for success (a majority of the vote in a majority of the states, in addition to a national majority), poll results were also cherry-picked to generate a particular view of the likely referendum outcome without considering whether it would meet these more stringent criteria, and thereby to motivate or demotivate campaigners and voters.
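The double-majority rule mentioned here can be sketched as a simple check – the state names are real, but the poll figures below are purely hypothetical, chosen to show how a national majority alone is not enough:

```python
def double_majority(national_yes_share, state_yes_shares):
    """Australia's referendum requirement (Constitution s128): a national
    majority of votes, plus majorities in at least four of the six states.
    Territory votes count only towards the national total."""
    states_carried = sum(share > 0.5 for share in state_yes_shares.values())
    return national_yes_share > 0.5 and states_carried >= 4

# Hypothetical state-level figures: a narrow national Yes majority
# still fails if only three of the six states are carried.
state_polls = {"NSW": 0.52, "VIC": 0.54, "QLD": 0.44,
               "WA": 0.46, "SA": 0.49, "TAS": 0.51}
print(double_majority(0.505, state_polls))  # → False
```

This is why national headline figures alone could paint a misleading picture of the referendum's likely outcome: a Yes lead nationally says nothing about whether four states would be carried.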
Polling may well have had an outsized importance in influencing voting intention, too, because of the somewhat complicated nature of the referendum question itself: plausibly, this might have generated a higher demand for cue-taking. But this cut both ways: the Yes campaign could legitimately use the statistic that 80% of Indigenous Australians supported the Voice proposal, while the No campaign cherry-picked a handful of notable Indigenous Australians who opposed the Voice to successfully counteract this more abstract and impersonal argument. But neither of these approaches is in itself fair and representative – the genuinely fair and representative result of a large-scale opinion formation and determination process was the Uluru Statement, which largely disappeared from the Voice debate.