There is no more contentious policy debate in British politics than that over Brexit. Consequently, there is considerable demand – from the press, the government and indeed the public itself – for insights into what people want. The febrile environment presents a challenge for those attempting to disseminate complex research findings and inform this debate. We recently learned just how difficult this can be when we discussed our ongoing public opinion research with Buzzfeed and provided a detailed technical report of our findings to other reporters.
The initial article by Buzzfeed on 11 August 2017, as well as the data we made available to journalists, quickly generated a great deal of media coverage, including a number of misinterpretations and sensationalist headlines as well as heated debate on social media. While we immediately set out to correct the misinterpretations by contacting individual reporters, making the full data summary and documentation available and writing a detailed blog post that explained the methodology and findings, some of these headlines and reports had already reached a large audience.
We acknowledge that we should have taken greater care over how we disseminated such a complex study, including seeking input from colleagues in communications. What follows is an attempt both to set the record straight and to learn lessons from what happened.
The research study
What do people want from the Brexit negotiations? Public opinion research that studies the public’s preferences takes many forms. By far the dominant approach is to ask samples of respondents whether they like or dislike particular policies, parties, candidates, or issues. This produces estimates of the public’s stated preferences – that is, what they say they want from politics.
As we wrote in our initial blog post, when policies bundle a number of dimensions or features (such as the trade, immigration, border control, and budgetary aspects of the Brexit negotiations), it is common in a stated preference approach to ask respondents about those specific aspects directly, as separate questions. The challenge is then to evaluate the relative importance of the different features.
To overcome this challenge, an alternative approach has respondents reveal their latent preferences through the choices they make between alternative policies that vary along a number of dimensions, forcing them to make trade-offs.
We adopted one such approach, known as a conjoint analysis. Briefly, a conjoint approach:
(1) allows us to mathematically disentangle preferences for individual features of the bundled outcomes through choices over the bundles;
(2) avoids the risk that respondents infer, possibly incorrectly, that certain features go hand-in-hand with others, because the relationships are made explicit in the described bundles of outcomes;
(3) generates considerably more data than traditional stated preference approaches because each respondent evaluates multiple pairs of outcomes.
This peer-reviewed method is widely used and well-validated in public opinion research.
Our conjoint analysis of preferences over the Brexit process drew on a sample of 3,293 respondents recruited via YouGov’s online Omnibus panel on 26-27 April 2017 (Buzzfeed initially reported that the study entailed 20,000 respondents but quickly corrected the error). Respondents were shown a series of possible Brexit negotiation outcomes in pairs and were asked to choose which of the two they preferred (there was no “don’t know” option). This requirement that they choose – i.e. make trade-offs between the different aspects of the negotiations – is crucial to the conjoint approach, because it is what allows the design to reveal individuals’ relative preferences for different aspects of Brexit.
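To give a concrete (and purely illustrative) sense of how a conjoint design recovers the importance of individual features from forced choices, here is a minimal simulation sketch. The feature levels, utilities, and sample sizes below are invented for illustration and are not drawn from our study; the point is that when features are independently randomised, the average marginal component effect (AMCE) of a feature level can be estimated as a simple difference in the share of profiles chosen.

```python
import random

random.seed(1)

# Hypothetical feature levels (invented for illustration only)
FEATURES = {
    "immigration": ["full control", "some limits", "free movement"],
    "trade": ["free trade deal", "WTO terms"],
    "ecj": ["no ECJ role", "ECJ jurisdiction"],
}

# Invented "true" utilities a simulated respondent attaches to each level
UTILITY = {
    "full control": 0.8, "some limits": 0.3, "free movement": 0.0,
    "free trade deal": 0.5, "WTO terms": 0.0,
    "no ECJ role": 0.4, "ECJ jurisdiction": 0.0,
}

def random_deal():
    """Compose a deal by randomising every feature independently."""
    return {f: random.choice(levels) for f, levels in FEATURES.items()}

def utility(deal):
    """Latent utility of a deal: sum of level utilities plus noise."""
    return sum(UTILITY[v] for v in deal.values()) + random.gauss(0, 1)

# Simulate forced choices between randomly composed pairs of deals;
# record every profile shown together with whether it was chosen.
records = []
for _ in range(20000):
    a, b = random_deal(), random_deal()
    a_wins = utility(a) > utility(b)
    records.append((a, a_wins))
    records.append((b, not a_wins))

def choice_share(feature, level):
    """Share of shown profiles with this feature level that were chosen."""
    hits = [chosen for deal, chosen in records if deal[feature] == level]
    return sum(hits) / len(hits)

# Because the other features are independently randomised, they average
# out, and the AMCE of a level is the difference in choice shares
# relative to a baseline level.
for feature, levels in FEATURES.items():
    base = choice_share(feature, levels[-1])
    for level in levels[:-1]:
        amce = choice_share(feature, level) - base
        print(f"{feature}: {level} vs {levels[-1]} -> AMCE {amce:+.2f}")
```

In a real analysis the same logic is applied to respondents’ actual choices, typically via regression with standard errors clustered by respondent, but the difference-in-means estimator above conveys the core idea of disentangling preferences for individual features from choices over bundles.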
What we found
Our complete technical report contains approximately seventy pages of data analysis and summaries of the conjoint analysis. We cannot summarise it all here, but the results originally reported by Buzzfeed represent the core results: when tasked with evaluating the complex bundles of policies that make up “Brexit”, the public favours some aspects more than others but is indifferent about most, as we wrote in our blog post on the LSE Brexit Blog.
That indifference reflects the bundling of different features together, requiring respondents to make trade-offs between them. For many of the features we studied, our respondents accepted and rejected outcomes with the particular feature about half of the time, meaning that for most of the key aspects of the Brexit debate the public would be willing to make possibly significant trade-offs.
Yet that is not universally the case. Leave voters were particularly concerned about control over immigration, especially the idea of controlling rather than limiting immigration, and they were also strongly opposed to scenarios that leave Britain accountable to the European Court of Justice.
Remain voters, by contrast, were particularly concerned about the rights of EU nationals living in the UK and UK nationals living in the EU. Remain voters in our study strongly opposed the loss of residency rights for EU nationals in the UK (and UK nationals in the EU), more so than any other possible feature. Leave voters also tended to avoid outcomes that included this feature but to a lesser extent than Remain voters.
Our study also included some more traditional metrics of public support: namely measures of preferences over three possible Brexit scenarios. Rather than measuring support for the outcomes with the explicit labels “soft”, “hard”, or “no deal”, which can be interpreted differently by different people, we constructed three fixed scenarios that ranged from a Britain fully outside the EU to a scenario where Britain retained all of the main features of membership, briefly summarized as:
- Scenario A: the UK does not reach an exit agreement with the EU, resulting in no UK-EU free trade agreement and no settlement on citizens’ rights or payment to the EU budget, but full control of immigration and legal sovereignty;
- Scenario B: the UK and EU agree to a free trade deal, some immigration controls, legal sovereignty from the European Court of Justice, and both one-off and ongoing payments to the EU budget;
- Scenario C: the status quo of EU membership is largely preserved along each of these dimensions.
Respondents were each given one of the possible pairs of scenarios (A vs B; A vs C; B vs C), so we could measure relative support for each. In this analysis, we found that the public appeared, at least at the time of the study, to be quite favourable toward Scenario A and Scenario B. Indeed, majorities preferred both in head-to-head competition against Scenario C.
Those results are driven in part by Leave voters overwhelmingly favouring the harder scenarios in the two head-to-head comparisons with Scenario C (91% for Scenario A versus 9% for Scenario C; 85% for Scenario B versus 15% for Scenario C), but we also found considerable support for Scenarios A and B among Remain voters. We think this suggests a public, especially Leave voters, quite willing to accept “harder” Brexit outcomes.
Combining these two types of analysis therefore allows us to see not only the degree to which commonly discussed packages of outcomes are favoured by members of the public but also, more importantly, to understand the degree to which features of those deals affect overall preferences.
The reporting of the research
The discussion of these results with Buzzfeed proved controversial. While we made a technical report available to journalists, fellow academics, and members of the public upon request on the Friday afternoon, we failed to include a clear non-academic summary of the results, and we should have prioritised getting the technical report online immediately rather than responding to individual journalists. Buzzfeed did not inform us when their article would be published.
Moreover, they created visualisations which downplayed the interconnected nature of the measures of acceptability for each feature. Given that we were reporting results of a complex decision task with similarly complex results, this invited the misinterpretation of our results as typical polling numbers, which they are absolutely not.
What happened next shows the challenges of communicating complex research findings quickly in a febrile political environment. The results of the study were quickly used, in many cases incorrectly, to bolster pre-existing political viewpoints. One newspaper attached a highly misleading and sensationalist headline to their otherwise largely accurate reporting of our study, claiming that 29% of Remain voters supported the deportation of all EU citizens. This headline, which was not claimed anywhere in Buzzfeed’s original reporting, became the soundbite summary of the research that was picked up by politicians, pundits, and numerous other media outlets.
The rush by the media to report the results meant the core evidence from the research was largely lost in a cloud of clickbait. We regret that the way we released the findings contributed to the spread of misinformation. We worked over subsequent days to respond to misreporting, to request media corrections, and to publish a definitive statement and complete technical details of the research. While our statement was live within 48 hours of the original reporting, that was simply not fast enough.
Our experience is a cautionary tale for researchers trying to contribute to public understanding of contentious political issues.
The four most common critiques of the research
Aside from the simple misreporting of the research, discussions of our study over the past week have generated a number of substantive critiques.
The first is that our study did not explicitly provide respondents an opportunity to choose to “remain in the EU” (or, conversely, to simply “leave the EU”) rather than accept one of the outcomes presented in the study. While we could have included this, the study was explicitly designed to examine opinions concerning the deal between the EU and the UK on Britain exiting the EU. Our ambition was to study what the public’s preferred exit deal would look like.
While providing a “Remain in the EU” option would have allowed a subset of respondents to express their most-preferred outcome, it would tell us nothing about what those individuals want from Brexit if it proceeds. In essence, allowing respondents who favour remaining in the EU to opt-out of the study would effectively silence them in the resulting data.
In parallel to the conjoint analysis, we have been investigating attitudes towards the question of whether Britain’s decision to vote to leave the EU was right or wrong. In our discussion with the journalist from Buzzfeed, we made it clear that our ongoing research shows these attitudes have remained largely stable since the referendum, and this was reported in the article.
Second, our study was designed to measure preferences over the full spectrum of possible Brexit outcomes from a “soft Brexit” to an alternative in which the UK left without reaching any agreement. On one feature – describing the legal rights of EU citizens in the UK and UK citizens in the EU – a possible value was that EU citizens would have to leave the UK (and UK nationals in the EU would similarly have to return).
Given our study was conducted before Theresa May indicated her intention to protect leave to remain for EU nationals, “all must leave” was a real – albeit unlikely – possibility and had to be included in the study. Some have argued that by studying preferences over this possibility, we have invited a public debate about mass deportation. That was simply not our intention and we do not see it as a fair or compelling critique.
Studying public preferences does not mean endorsing them. It is important that researchers are able to study difficult and controversial topics (such as xenophobia or racism) without the risk of being seen as advocating the views under examination. It is not our place as public opinion researchers to judge the public’s views. We report them. We were personally attacked, in public and private, because we studied the widespread (but not universal) opposition to continued residency rights for EU nationals. We find that troubling.
Third, some have argued that our research is fundamentally flawed because it included limited information about the possible negative economic effects of some of the scenarios used in the study. This is a common critique of public opinion research; in essence, if the public knew different things, they might have different opinions. That is a reasonable concern. However, studying the effects of campaigning to change public knowledge is a different task from measuring views as they are.
Relatedly, we aimed to provide objective and comprehensible descriptions of the possible outcomes of the Brexit negotiations. Including statements about personal economic costs of combinations of Brexit features would have been extremely difficult to portray objectively and with precision. We chose to avoid asking respondents about those effects.
Fourth and finally, some of our fellow survey researchers, as well as members of the public, have argued that the study’s results are potentially invalid because respondents faced unrealistic or confusing scenarios in the conjoint design. We understand this critique but also feel that it highlights a significant difference between stated preference and revealed preference research.
Each individual respondent’s judgement about the particular scenarios they faced is far less important in a conjoint design than the importance judgements that can be mathematically decomposed from the pattern of responses across the whole of the sample. To detect how important each feature of Brexit is to the electorate as a whole, we needed respondents to face a task that might have seemed very difficult to them; that difficulty is precisely what allows us to see which features the public would ultimately favour at the expense of others.
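For readers who want the formal object behind that decomposition: in the standard conjoint framework (our notation here, not the report’s), the estimand for each feature level is the average marginal component effect, i.e. the expected change in the probability that a package is chosen when one feature is switched from a baseline level $t_0$ to level $t_1$, averaging over the randomised values of all the other features:

```latex
\[
\mathrm{AMCE}(t_1, t_0) \;=\; \mathbb{E}\!\left[\, Y_i(t_1, \mathbf{X}_i) - Y_i(t_0, \mathbf{X}_i) \,\right]
\]
```

where $Y_i \in \{0,1\}$ indicates whether respondent $i$ chose the profile shown and $\mathbf{X}_i$ denotes the remaining, independently randomised features. Because those other features are randomised, this expectation can be estimated directly from differences in observed choice rates.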
What Brexit means and what it will mean for Britain in the future are among the most important political questions facing the country today. In an effort to inform that ongoing political debate, we conducted innovative research that was designed to measure complex preferences over Brexit in a way not possible with typical polling. We stand by our research and interpretation thereof.
Equally, we acknowledge we made mistakes when it came to how the results were communicated to a wider audience. We continue to believe that it is the job of social scientists to convey their findings outside the narrow confines of academia. That being said, when we do this again, we will do it differently.