In nearly every survey there are some people who tell pollsters that they do not have an opinion on an issue. But the number willing to volunteer ignorance in this way often appears smaller than it should be, given that many people know and care very little about politics. How, then, do voters decide where they stand on unfamiliar areas of public policy when asked about them in polls?
As noted in Chapter 1, a radical answer to this question was proposed in the ’60s by American political scientist Philip Converse. Converse suggested that, on many issues, a substantial minority of the public has no opinion at all. Rather, they express what he referred to as ‘nonattitudes’. A nonattitude is an answer to an opinion question which has no underlying cognitive or emotional basis; people select from the available response options more or less at random, as if ‘mentally flipping a coin’.
If such ‘nonattitudes’ were widespread, the implications for democratic politics, as well as for the polling industry, would be troubling.
It is difficult to assess how big a problem nonattitudes really are, however, because from their outwardly observable characteristics at least, attitudes and nonattitudes are identical. An expedient solution to the problem of identifying the prevalence of nonattitudes is to ask people their opinions on issues which sound real but do not actually exist. People who are willing to provide an opinion on a plausible-sounding but fictitious policy issue are, we may assume, also likely to offer similarly empty opinions on real issues which they know little or nothing about.
The idea of identifying nonattitudes in this way stretches back at least as far as the ’40s, when pollster Sam Gill speculated that up to 70 per cent of Americans would provide an opinion on the (non-existent) Metallic Metals Act. However, serious academic consideration of public opinion about fictitious issues did not start until the ’80s, when George Bishop and colleagues at the University of Cincinnati found that a third of Americans either favoured or opposed the fictitious Public Affairs Act. Bishop found that this figure dropped substantially when respondents were offered an explicit ‘Don’t know’ option. However, 10 per cent of respondents still selected a substantive answer, even when given a clear opportunity to express their lack of familiarity. Similar findings were reported in the US at around the same time by Howard Schuman and Stanley Presser, who also found that a third of respondents to their survey expressed positions on issues which, though real, were so obscure that few ordinary citizens would ever have heard of them.
And, despite the British generally considering themselves to be intellectually superior to their American cousins, recent research found significant proportions of the British public were also willing to express views on fictitious policy issues. It isn’t possible to make direct comparisons between the British and the American research, because the questions posed and response alternatives offered were rather different. However, the British study found that 15 per cent of the British public either supported or opposed the non-existent ‘Monetary Control Bill’, while 11 per cent expressed a position on the equally fictitious ‘Agricultural Trade Bill’.
So, non-trivial numbers of citizens are willing to offer opinions on issues which do not exist. Are they really selecting a response option at random, as Converse suggested? Probably not. Research has shown that responses to these fictitious issues are related to existing partisan tendencies. For example, in the British research, Conservative supporters were twice as likely to express an opinion on the Agricultural Trade Bill as people who did not identify with a political party. This suggests that respondents do not choose their answers to fictitious issues at random but, rather, seek to determine what the issue is about, and how it relates to their political predispositions, from clues in the wording of the question. In this instance, ‘agricultural trade’ sounds like legislation promoting free trade, so Conservative supporters interpret it as something which they should, on the face of it at least, favour.
Another sign that these opinions are not just random expressions of ignorance comes from the somewhat counter-intuitive finding that people who reported being very interested in politics were more likely (23 per cent) to provide an opinion on the fictitious bills than those who expressed no interest at all (11 per cent). The British research also found that men were 50 per cent more likely to express a view on the Agricultural Trade Bill than women. So, responding to fictitious issues seems to result, at least in part, from considering yourself to be the sort of person who should have a view on matters of public interest. Many voters know little or nothing about more obscure parts of the political agenda, but voters who have already proclaimed their general interest in politics may be too embarrassed to admit ignorance when subsequently asked their position on specific issues.
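The logic of the argument above can be made concrete with a small simulation. Under Converse’s ‘mentally flipping a coin’ model, every group of respondents should offer opinions on a fictitious bill at roughly the same rate, so the twofold gap between Conservative identifiers and non-identifiers reported in the British study could not arise. The sketch below is purely illustrative: the 11 per cent baseline rate is taken from the text, but the simulation itself, its function names, and the sample size are assumptions for demonstration, not part of any of the studies cited.

```python
import random

random.seed(42)

def simulate_random_responding(n, p_answer=0.11):
    """Under the coin-flip model, whether a respondent offers an opinion
    is an independent coin toss, identical for every group of respondents.
    Returns the fraction of n simulated respondents who answer."""
    return sum(random.random() < p_answer for _ in range(n)) / n

# Simulate two groups of 100,000 hypothetical respondents under the
# coin-flip model: party identifiers and non-identifiers.
n = 100_000
conservative_rate = simulate_random_responding(n)
nonidentifier_rate = simulate_random_responding(n)

# Both simulated rates hover around 11 per cent, so their ratio is
# close to 1.0: the model predicts no group difference.
ratio = conservative_rate / nonidentifier_rate
print(f"Simulated ratio under the coin-flip model: {ratio:.2f}")

# The British study, by contrast, observed a ratio of roughly 2.0
# (Conservative identifiers versus non-identifiers), a pattern a pure
# coin-flip model cannot generate.
```

The point of the sketch is simply that systematic group differences in response rates are evidence against pure random responding, which is the inference the fictitious-issues research relies on.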
Despite the seemingly flippant nature of the exercise, then, research on fictitious issues tells us at least two interesting things about how people respond to questions about real policy issues in polls. First, people do not choose a response option at random, off the top of their heads, but are instead actively seeking to understand what the question is about. They then provide their best guess at their position, based on their political orientation and the limited information available to them about the issue. This helps explain why the ‘framing’ of a survey question can matter so much to the shape of public opinion elicited; the exact terms used to describe an issue can strongly affect how voters understand what it is about and, therefore, how they feel about it. Even so, fictitious-issues research also tells us that a great many answers to genuine policy questions in surveys are likely to be based on little more than informed guessing, following a brief moment of reflection. This may not come as a surprise to many observers of contemporary politics. However, it serves as a cautionary reminder to all those who proffer opinion poll evidence to show they have public backing for a particular policy position; the mandate they are citing is probably weaker than it appears.
Bishop’s study on the US is George Bishop et al., ‘Pseudo-Opinions on Public Affairs’ (Public Opinion Quarterly, 1980). The British study can be found in Patrick Sturgis and Patten Smith, ‘Fictitious Issues Revisited: Political Interest, Knowledge and the Generation of Nonattitudes’ (Political Studies, 2010). Other relevant studies are Howard Schuman and Stanley Presser, ‘Public Opinion and Public Ignorance: The Fine Line between Attitudes and Nonattitudes’ (American Journal of Sociology, 1980) and George Bishop, The Illusion of Public Opinion: Fact and Artifact in American Public Opinion Polls (Rowman & Littlefield, 2005).