
07 Jul 2016


In the wake of the historic ‘Brexit’ decision in the UK’s June 23 referendum on continued EU membership, commentators have questioned the accuracy of the opinion polls conducted in the run-up to the vote. Some observers have noted that polls conducted by different organizations, using different methodologies, gave quite different impressions of support for the Remain and Leave options.

After the 2015 general election, pollsters were roundly criticized for their failure to predict the Conservative victory. Did they fail again in the EU referendum? If so, just how badly? Did they all get it very wrong, or did some do much better than others? And, once artefacts introduced by polling methodology are taken into account, what did underlying trends in Remain and Leave support look like in the run-up to voting day? Finally, what lessons can we learn going forward?

We address these questions using dynamic factor analyses of EU referendum vote intentions in 121 internet and telephone surveys conducted between January 11 and June 22, 2016.
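
For readers unfamiliar with this class of models, a stylized poll-aggregation specification conveys the basic idea. This is a generic sketch for illustration, not necessarily the exact model we estimate:

```latex
% y_i : Remain share reported by poll i, fielded on day t(i)
y_i = \mu_{t(i)} + \delta_{m(i)} + \gamma_{h(i)} + \varepsilon_i,
      \qquad \varepsilon_i \sim N(0, \sigma_i^2)
% latent Remain support evolves as a day-to-day random walk
\mu_t = \mu_{t-1} + \eta_t, \qquad \eta_t \sim N(0, \sigma_\eta^2)
```

Here \mu_t is the underlying Remain share on day t, while \delta_{m(i)} and \gamma_{h(i)} are offsets for poll i’s mode and house. The ‘mode effects’ and ‘house effects’ discussed below are estimates of such offsets.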

Modes: We begin by investigating what survey researchers call ‘mode’ effects: the impact of the way a survey is administered on the results it produces. The 121 polls were conducted either by internet (N = 81) or by telephone (N = 40). Analyzing the reported percentages of Remain and Leave supporters reveals that both types of polls significantly overestimated the Remain vote share and underestimated the Leave share (see Figure 1). However, there were sizable differences between the two modes: taken as a group, internet polls performed considerably better than their telephone-based rivals.

Figure 1
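
To make the mode-effect calculation concrete, here is a minimal sketch of how each mode’s average overestimate of the Remain share can be computed. The table, its column names and its rows are hypothetical; the benchmark is the official result (Remain won 48.1 percent of the vote).

```python
import pandas as pd

# Hypothetical poll table: one row per poll. 'remain' is the reported
# Remain share in percent (don't-knows excluded); rows are illustrative.
polls = pd.DataFrame({
    "mode":   ["internet", "internet", "internet", "phone", "phone"],
    "remain": [50.0, 51.0, 49.5, 54.0, 55.0],
})

ACTUAL_REMAIN = 48.1  # official referendum result, percent

# Signed error: positive values mean the poll overstated Remain support.
polls["remain_error"] = polls["remain"] - ACTUAL_REMAIN

# Average Remain overestimate by survey mode.
print(polls.groupby("mode")["remain_error"].mean())
```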

Houses: What about the performance of individual survey companies? Our analysis of house effects indicates that survey firms varied markedly in their ability to gauge EU vote intentions. Figure 2 documents that, of the nine survey organizations considered, TNS-BMRB, ICM and YouGov performed best: their overestimates of the Remain vote share were +2.24 percent, +2.71 percent and +3.24 percent, respectively. These three houses also did best at estimating Leave support.

Figure 2

Note: solid colored bars denote survey houses doing all or most of their polling by internet; striped bars denote houses doing all or most of their polling by telephone. Populus conducted 3 internet polls and 2 telephone polls.

The three firms performing worst were Populus, IPSOS and ComRes, with Remain house effects of +6.02, +6.34 and +6.72 percent, respectively.
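
House effects can be computed the same way, or estimated jointly in a regression so that mode and fieldwork timing can be controlled for. A minimal sketch, again using hypothetical data and house labels:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical polls with house labels; 'remain' is the reported share.
polls = pd.DataFrame({
    "house":  ["A", "A", "B", "B", "C", "C"],
    "remain": [50.5, 51.0, 52.5, 53.0, 54.5, 55.0],
})
polls["remain_error"] = polls["remain"] - 48.1  # vs. official result

# Regressing the signed error on house dummies (no intercept) recovers
# each house's mean Remain overestimate; adding mode or fieldwork-date
# terms would net out those influences as well.
fit = smf.ols("remain_error ~ C(house) - 1", data=polls).fit()
print(fit.params)
```

With real data, coefficients of this kind would be analogous to the house effects plotted in Figure 2.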

Underlying Trends: By controlling for mode and house effects, our analyses enable us to estimate underlying trends in EU referendum vote intentions. The results (see Figure 3) indicate that Leave led Remain over the entire period from January 11, 2016 onward. The size of the Leave lead varies widely, from a low of 0.39 percent (February 4) to a high of 13.2 percent (June 12), but Leave is always ahead.
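
Our published estimates come from dynamic factor analyses; a simpler local-level (random-walk) model gives the flavor of how an underlying trend can be recovered from irregularly spaced polls once mode and house offsets have been removed. Everything in the sketch below is illustrative: the adjusted daily readings are hypothetical (loosely consistent with the leads quoted above), and days without polls are simply left missing.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical daily series of mode/house-adjusted Remain shares;
# NaN marks days without a poll. The local-level model treats latent
# Remain support as a random walk observed with sampling noise.
dates = pd.date_range("2016-01-11", "2016-06-22", freq="D")
y = pd.Series(np.nan, index=dates)
y.loc["2016-01-15"] = 49.0
y.loc["2016-02-04"] = 49.8   # implies the narrow Leave lead noted above
y.loc["2016-03-10"] = 48.5
y.loc["2016-04-22"] = 47.0
y.loc["2016-05-30"] = 46.0
y.loc["2016-06-12"] = 43.4   # implies the widest Leave lead noted above

model = sm.tsa.UnobservedComponents(y, level="local level")
result = model.fit(disp=False)

# Smoothed estimate of the latent Remain share on every day, including
# the gaps between polls; Leave's implied share is 100 minus this.
trend = pd.Series(result.smoothed_state[0], index=dates)
print(trend.loc["2016-06-12"])
```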

The analysis also provides insight into the effects of various events that occurred during the campaign. For example, contrary to his intentions, US President Barack Obama’s widely publicized ‘UK to the back of the queue’ intervention may have boosted, rather than diminished, Leave support. In contrast, as numerous observers have speculated, the murder of Labour MP Jo Cox on June 16 may have precipitated an erosion of Leave support over the final week before the vote. That downturn notwithstanding, the analysis documents that over the last month of the campaign the estimated Leave vote share always exceeded the 50 percent mark, meaning that Leave was very likely ahead throughout this period.

Overall, the analyses suggest two general lessons for consumers of political polls. First, house effects associated with selected internet survey firms were quite modest, whereas firms relying heavily or exclusively on telephone surveys found themselves considerably further off the mark.

After the 2015 general election, internet surveys were widely criticized, and some commentators claimed that telephone surveys were a preferable alternative. At the time this assertion seemed dubious, given that telephone surveys’ typically dismal response rates clearly gainsay any claim to be true ‘probability’ surveys. As documented above, the relatively poor performance of telephone surveys in the EU referendum reinforces doubts about their superiority.

A second lesson is that the ‘topline’ results from surveys are not enough. If observers had used the polling data as inputs to statistical models of the underlying dynamics of EU referendum vote intentions, they might not have been surprised by the Brexit result. High quality polls by reputable survey houses should be viewed as a key resource – not a substitute – for informed political analysis.


By Matthew Goodwin, Senior Fellow, UK in a Changing Europe; Harold D. Clarke, School of Economic, Political and Policy Sciences, University of Texas at Dallas; and Paul Whiteley, Department of Government, University of Essex
