Imagine a fantasy world in which the British government wanted only to follow public opinion. With no agenda of its own, the Cabinet would sit down weekly to plan how to translate the latest polls directly into public policy. This government would find life very difficult; it would be prone to frequent U-turns and would rapidly become frustrated with its public masters. The problem is the slippery nature of opinion polls. Questions asked about the same issue on the same day can often carry different, even directly contradictory, messages about public preferences.
One common explanation for this, the use of deliberately leading questions, can be swiftly dismissed. Everyone knows that a question along the lines of ‘Do you support Policy X or do you oppose this ill-conceived and dangerous idea?’ will reduce support for Policy X, and the major pollsters refuse to field such obviously biased questions. Such blatant bias is now largely confined to opt-in polls on the websites of tabloid newspapers.
The real difficulty for pollsters and those poring over their results is that even ostensibly neutral questions can be strikingly inconsistent. Consider one of the earliest question-wording experiments, a 1940 survey in which American respondents were randomly chosen to receive one of two questions about free speech. The results are in the table, which also shows what happened when the experiment was re-run three decades later. Americans in 1940 were a lot more comfortable with ‘not allowing’ (75 per cent) than with ‘forbidding’ (54 per cent) speeches against democracy. By 1974, the results were more befitting of the Land of the Free but the big difference between question wordings remained. The nature of that difference makes sense – forbidding something sounds harsher than merely not allowing it – but its scale is troubling. Are public preferences on issues as fundamental as free speech really so weak as to be dramatically shifted by a change in emphasis?
THE FORBID/ALLOW ASYMMETRY IN QUESTION-WORDING
| | ALLOW / NOT FORBID (%) | NOT ALLOW / FORBID (%) |
|---|---|---|
| **1940 experiment** | | |
| Group A: Do you think the US should allow public speeches against democracy? | 25 | 75 |
| Group B: Do you think the US should forbid public speeches against democracy? | 46 | 54 |
| **1974 experiment** | | |
| Group A: Allow public speeches against democracy? | 52 | 48 |
| Group B: Forbid public speeches against democracy? | 71 | 21 |
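The logic of a split-sample experiment like this one can be sketched in a few lines of Python. The question wordings and the 1940 percentages below come straight from the table; the random-assignment mechanism is the standard design for such experiments, and the function names and seed are purely illustrative:

```python
import random

# Split-sample design: each respondent is randomly assigned ONE of the two
# wordings, so any difference in the results is attributable to the wording.
WORDINGS = {
    "A": "Do you think the US should allow public speeches against democracy?",
    "B": "Do you think the US should forbid public speeches against democracy?",
}

def assign_wording(respondents, seed=0):
    """Randomly split respondents between the two question wordings."""
    rng = random.Random(seed)
    return {person: rng.choice(["A", "B"]) for person in respondents}

# 1940 results from the table: the share effectively denying the speech
# ('not allow' in Group A, 'forbid' in Group B).
deny_share = {"A": 0.75, "B": 0.54}
wording_gap = deny_share["A"] - deny_share["B"]
print(f"Forbid/allow wording gap: {wording_gap:.0%}")
```

Because assignment to Group A or Group B is random, the two groups are statistically interchangeable, and the 21-point gap in 1940 can only be explained by the change from ‘allow’ to ‘forbid’.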
To answer that question, it is useful to sketch Paul (or Paula), the typical survey respondent. Politics is low on his agenda and, as a result, many of the questions asked by pollsters are on issues to which Paul has given little previous thought. As American researcher Philip Converse concluded, many people simply ‘do not have meaningful beliefs, even on issues that have formed the basis for intense political controversy among elites for substantial periods of time’. But Paul is an obliging type and can’t help feeling that, if a pollster is asking him about an issue, he really ought to have a view on it. So he will avoid saying, ‘Don’t know’ and oblige with an answer. (As Chapter 3 shows, respondents are often happy to answer even when pollsters ask about fictional policies.)
How, then, does Paul answer these questions? Not purely at random because, even with unfamiliar issues, there are links to more familiar and deeply held attitudes and values. For example, if Paul were asked whether he would support restrictions on UK arms sales to Saudi Arabia, he might say ‘yes’ on the grounds that fewer weapons in circulation is generally a good thing or ‘no’ on the grounds that British exports support British jobs. None of this requires him even to know where Saudi Arabia is on the map. However, the other thing about Paul is that he is a little lazy, at least in cognitive terms. Rather than addressing the question from all relevant angles, balancing conflicting considerations to reach a judgement, he is prone to answer on the basis of whatever comes immediately to mind. If the previous night’s news contained graphic images of suffering in a conflict zone, Paul will probably support restricting arms sales; if instead there was a story about manufacturing job losses, he is likely to oppose it. This ‘top-of-the-head’ nature of survey answers is what gives the question wording such power. Any small cue or steer in the question is, by definition, at the top of people’s heads when answering.
Attributions are one common cue. In the early 2000s the Conservative Party found that many of its new ideas were quite popular in opinion polls – unless the poll mentioned that they were Conservative policies, in which case that popularity ebbed. If the proposal to restrict arms sales were attributed to Labour or to Jeremy Corbyn in particular, respondents might just respond according to their partisan or personal sympathies (and see Chapters 16 and 43 for how this applies even to cats and fictional characters).
Now imagine that the question asked about ‘arms sales to the authoritarian regime in Saudi Arabia’. Paul and many others would be more supportive of restrictions. This doesn’t mean that the lack of democracy in Saudi is really a decisive factor in public judgements outside the context of the survey; it means that the question elbows other considerations out of respondents’ minds. Or suppose that the arms sales question itself was studiedly neutral but that it was preceded by a series of questions about instability and conflict around the world. The effect would be much the same.
Another common steer comes in the sadly ubiquitous questions based on declarative statements. For example, another survey experiment found majority agreement (60 per cent) with the statement ‘Individuals are more to blame than social conditions for crime in this country.’ But the survey also found almost the same level of agreement (57 per cent) with the exact opposite statement: ‘Social conditions are more to blame than individuals for crime in this country.’ This is because the statements used in the question have persuasive power in themselves. It is easier for unsure (and lazy) respondents to agree with the assertion than to consider the alternatives. No wonder there was opposition to the Scottish government’s original proposal for the 2014 referendum question: ‘Do you agree that Scotland should be an independent country?’
Lastly, consider the choice between open and closed questions. Polls often ask, ‘What do you think is the most important problem facing Britain today?’ In the ‘closed’ version, where respondents choose from a list, crime is a popular choice. Yet in an ‘open’ version, where respondents have to name an issue unprompted, crime is much less often mentioned. Maybe a list helps to remind people of their genuine concerns, but then is crime that troubling to someone who can’t remember it unaided?
All of this illustrates the persistent difficulty for our fantasy government. Even the most discerning consumer of opinion polls, who well understands why two surveys deliver different results, might still struggle to say which better reflects what the public really thinks. Some have even drawn the radical conclusion that ‘true’ attitudes simply don’t exist. This seems overstated, however. For one thing, people do have strong views on the big issues that they care about. It is when pollsters ask about more remote topics that opinions look so fickle. Second, even when respondents appear malleable, this is not simply swaying in the breeze; it is because something in the question leads them to consider the issue in a different way.
Public opinion thus has at least some anchoring in people’s most deeply held beliefs and values. Perhaps a preferable conclusion is that the truths are out there – but that there are many of them and they may be quite different. This, of course, provides exactly the leeway that real governments are after.
The quotation from Philip Converse is taken from his 1964 essay on ‘The nature of belief systems in mass publics’. A ‘one-stop shop’ for question-wording effects is the book Questions and Answers in Attitude Surveys by Howard Schuman and Stanley Presser (Sage, 1996). For informed commentary on UK opinion polling, with frequent reminders of the pitfalls discussed in this chapter, consult the blogs UK Polling Report and Number Cruncher Politics.