In March 2017 Republican congressman John Shimkus objected to his party’s revised healthcare plan on the grounds that it was unfair to men. His argument was that by keeping the ‘essential health benefits’ (including maternity cover) guaranteed by the Affordable Care Act, the plan unfairly forced men to purchase prenatal coverage. It was a position that sparked widespread criticism from those who believed that the previous exclusion of maternity coverage from insurance plans had been unfair to all women, who were forced to either buy specific, more expensive coverage, or go without. Many on this side of the argument, including insurance expert and columnist Nancy Metcalf, suggested that women should be able to say, ‘Fine, but in that case why should we have to pay for your Viagra or prostate cancer tests?’
Questions of fairness can be complicated and emotive. Is it fair that some people’s taxes pay for other people’s benefits? Is it fair that the average FTSE 100 boss is paid 150 times more than the average worker? Is it fair that taxes from long-term residents of a country pay for the healthcare of new arrivals? I suspect at least one of these questions will have raised your blood pressure a little.
From a very early age, we are highly attuned to questions of fairness – as anyone who has ever told a child they have to go to bed before an older sibling will attest. In fact, studies have shown that even children as young as two are sensitive to perceived unfairness.1
Why is this relevant to a book about political bias? Because ultimately many political arguments come down to the morality of fairness. Our moral values shape our political views and therefore influence how we vote.
In this chapter I’ll outline some intriguing findings about fairness, morality and politics, and show how we often interpret things differently based on intuitions we might not even be consciously aware of.
We tend to think we make moral decisions as part of a considered, rational process in which we weigh up reasons for why something might be right or wrong, but frequently we can’t actually explain why we have certain moral intuitions.
To demonstrate this, psychologists have studied a phenomenon they call ‘moral dumbfounding’.2 In these experiments people are asked provocative ethical questions, such as whether it is wrong for adult siblings to have sex. Most people say yes without hesitation. But things get interesting when we start to explore the reasons behind their response. Take a moment to get a sense of your own position on this particular question: why do you think it is right or wrong?
One of the most frequent explanations offered is that a child born to a brother and sister would be more likely to have a genetic defect. So let’s rule that out by stipulating that they use completely effective contraception, which means there is no way the sister will become pregnant.
Do you still think it would be morally wrong? Probably yes. Another common reason people give is simply that it could be terrible for their reputation if other people knew they had had sex. But what if they do it just once, secretly, in a way that guarantees no one will ever find out?
When psychologists run this kind of experiment, they systematically take away all the practical reasons people might give for why this might be wrong – but the vast majority of participants remain convinced that their first reaction is correct, even though they cannot offer a justification for it. Sometimes our gut feelings about right and wrong are not actually shaped by reasoning, but the other way round: we attempt to rationalise our responses by looking for arguments to support them.
It can be hard to talk politics with people we disagree with. If you struggle to understand why people on the left are so focused on supporting the welfare state, or why people on the right are so focused on reducing taxes, you might simply have made assumptions about fairness that are completely different from theirs.
When it comes to voting, it’s easy to assume that people just support the tax and welfare policies that directly benefit them. There is some truth to that, but we don’t always vote in our own economic self-interest. Poor working-class voters will sometimes vote for parties that intend to reduce welfare programmes, and well-off voters will sometimes vote for higher taxes to support welfare programmes they are unlikely to use.
Psychologists have argued that to understand why people make moral decisions that don’t obviously seem self-serving, we have to look at how we have evolved as a species. It might sound strange that evolution has played a role in shaping something as complicated as our moral intuitions, but there are some reasons to consider the idea.
Evolution is, by definition, a process whereby genes survive only if the organisms carrying them successfully reproduce. Genes, as Richard Dawkins famously argued, behave as if they are ‘selfish’: their continued existence hangs on them making it into the next generation of offspring.
In complex social organisms like humans, the ability to survive and reproduce has clearly become dependent on our ability to cooperate with other humans: sometimes we need to help others because sometimes others can help us.
And if social animals are going to cooperate, they need to develop a keen sense of fairness – to prevent individuals taking advantage of others in the group.
Other social species also seem to have developed a keen sense of fairness. In a simple but powerful experiment, Sarah Brosnan and Frans de Waal trained pairs of monkeys to perform a straightforward task for a small reward,3 such as a grape or a piece of cucumber. Everything worked smoothly so long as both monkeys received the same reward. But when one monkey was rewarded with cucumber and saw that the other monkey was being given grapes, he immediately spotted the disparity and protested about it. When the disparity continued, he refused to eat the cucumber (which had previously been perfectly acceptable as a motivational reward) and threw it back in the tester’s face! If you watch the videos of these experiments, you get a very vivid sense of the monkey’s visceral emotional response, which most of us can probably relate to.
Humans, of course, clearly have more sophisticated responses to violations of fairness. One of the most powerful examples of these is what psychologists call ‘altruistic punishment’.
The test used to demonstrate this involves a simple game: on each turn, players can choose to put an amount of money into a common pot.4 The value of the pot is then increased (doubled, for example) and the proceeds are shared between all players equally. Critically, every player receives a share of the pot, even if they didn’t pay into it. That means that some players can get a ‘free ride’: they don’t contribute, but they do benefit.
If you’re anything like the players of the game, you’ll already be feeling a bit annoyed that anyone could choose to be a free rider in this situation. Participants get sufficiently annoyed that, when offered the chance, they will give up some of their own money to punish free riders. For example, they might pay £1 (which they will never get back) to have £1 taken from a player who is not contributing.
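To make the arithmetic concrete, here is a minimal sketch of the payoffs in a game of this kind (what behavioural economists would describe as a public-goods-style game with punishment). The player names, the 10-unit stake, the doubling of the pot and the 1-for-1 punishment rate are illustrative assumptions, not the parameters of the original experiments.

```python
# A minimal sketch of the payoff arithmetic in a public-goods-style game with
# punishment. Names, stake, multiplier and punishment rate are assumptions for
# illustration, not the values used in the original studies.

def payoffs(contributions, endowment=10, multiplier=2):
    """Each player keeps whatever they did not contribute, plus an equal share
    of the multiplied pot, whether or not they paid anything in."""
    pot = sum(contributions.values()) * multiplier
    share = pot / len(contributions)
    return {player: endowment - paid + share
            for player, paid in contributions.items()}

contributions = {"Anna": 10, "Ben": 10, "Cara": 10, "Dev": 0}  # Dev free-rides
earnings = payoffs(contributions)
print(earnings)  # {'Anna': 15.0, 'Ben': 15.0, 'Cara': 15.0, 'Dev': 25.0}

# 'Altruistic punishment': Anna pays 1 unit (never recovered) to have 1 unit
# deducted from the free rider.
earnings["Anna"] -= 1
earnings["Dev"] -= 1
print(earnings)  # {'Anna': 14.0, 'Ben': 15.0, 'Cara': 15.0, 'Dev': 24.0}
```

Note that the free rider still comes out ahead even after being punished here, and the punisher is strictly worse off, which is exactly why the behaviour is hard to square with narrow self-interest.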
This ‘altruistic’ payment-to-punish is hard to explain if you believe that humans are purely rational and selfish. You could argue that punishing was rational if participants thought it would pay off later, by signalling to free riders that they were expected to contribute in future rounds. The behavioural economists who developed the game were concerned about that interpretation, so they included a condition in which people could pay to punish even when they never expected to play the game or encounter the other players again. People still paid to punish free riders.
Researchers have carried out this experiment with people from cultures around the world (something psychologists don’t do enough), with the same basic pattern of results. The extent to which individuals will pay to punish differs between cultures; however, the findings suggest that we are not simply motivated by a narrow sense of self-interest but that we have evolved to have a moral reaction to the idea that others aren’t contributing fairly. (Interestingly, research has also shown that letting people punish others in group situations of this kind does ultimately result in better cooperation over time.5 It seems that punishment can sometimes be an effective way to ensure groups work together.)
This research helps to explain why there are some political choices where we tend to agree. In essentially every democracy across the globe, we have a collective understanding that violations of certain rules should be punished. There is almost universal agreement, for example, that there should be some sort of penalty for those found guilty of theft, and as a result of this consensus these kinds of issues are not seen as hugely ‘political’.
But beyond this, politics offers plenty of scope for disagreement. Recent elections in America, Britain and France, and the rise of nationalist parties across most of Western Europe, indicate that – in some countries at least – populations are becoming increasingly politically polarised. In 2014 the Pew Research Center conducted the largest political survey in its history – a poll of more than 10,000 adults – and concluded that Republicans and Democrats were ‘further apart ideologically than at any point in recent history’.
Why are we so divided? Political and economic issues, like the 2008 financial crash and the increasing waves of migration across Europe, obviously play a large role. But part of what makes these issues so divisive is the way they tap into our different moral sensitivities. This is arguably one of the main obstacles to political dialogue: our views on controversial and changing issues such as tax, welfare or nationalism may depend upon intuitions that we cannot explain to others because we are not aware of them ourselves.
This brings us to one of the most politically and morally charged issues of our time: inequality. In 2015 an Organisation for Economic Co-operation and Development (OECD) report identified the UK and the USA as two of the most unequal developed nations in the world. It warned that rising levels of inequality risked damaging the fabric of society as well as stunting economic growth. In 2014 Pope Francis tweeted: ‘Inequality is the root of social evil’, while Barack Obama has argued that inequality is ‘the defining challenge of our time’.
Yale psychologist Paul Bloom recently examined our beliefs about economic inequality.6 With colleagues Christina Starmans and Mark Sheskin, he argued that although the concept gets a lot of attention, when we look carefully at our responses to different situations, we don’t object to inequality in and of itself. What we are concerned about is something that is often mistakenly conflated with inequality: economic unfairness.
Having reviewed a wide range of studies, they claimed the research shows that people prefer ‘fair’ distributions to equal ones.
If that distinction isn’t clear, picture this scenario. Imagine you are a teacher helping a group of students with a project, and you become aware that some of the children are working much harder than others; essentially some of the children aren’t really contributing to the project at all. Would you feel it was fair to assign the same grade to all of the children for the project, when you knew that this overall grade didn’t reflect the individual efforts of each child?
Based on this idea, Bloom and his colleagues argue that we shouldn’t focus on inequality but on proportionality, and on whether the distribution of wealth fairly reflects different people’s contributions. In short, they think we don’t really mind if some people are paid in cucumber and others are paid in grapes – as long as we feel the people being paid in grapes deserve it.
If they are correct, then voters on both left and right should be able to tolerate a certain amount of inequality, provided they feel everyone has been treated fairly.
Simple? Not quite. For a start, who defines what is fair?
Unlike the general consensus on ‘altruistic punishment’, not all societies have the same customs or conventions about fairness. Children are able to pick up on these from a young age, as shown by some interesting cross-cultural studies by American psychologist Michael Tomasello.
In his work he showed that children quickly learned rules for what was right and wrong in particular contexts. In one experiment, children were taught how to play a game: one of the rules was that one event in the game had to happen before another event.7 They were then introduced to a teddy who tried to play the game, but got things in the wrong order. The kids were not happy. They had not only learned what the rule was; they rapidly enforced that rule (with an earnest sense of right and wrong) when they saw the teddy violating it. You might recognise this earnest enforcement of rules from playing with your own kids: ‘No, you don’t do it like that!’
This research offers an interesting perspective on the way in which children are acutely sensitive to the social conventions around them. In the run-up to the EU referendum, for example, my seven-year-old niece was intensely interested in understanding who was ‘right’ and ‘wrong’. Walking around Cambridge a month before the referendum she noticed the many Remain signs in people’s gardens, and earnestly wanted to know if they were the ‘good side’. Clearly she didn’t understand what was actually being decided, but still she was trying to figure out the norms for her social group.
Tomasello’s findings led him to predict that in different environments children might have different concepts of fairness. To test this he devised another simple game about the sharing of rewards.8
He tested children from a predominantly meritocratic culture (Germany) and from two African tribes, one highly egalitarian and one highly gerontocratic (where elders were in charge of making decisions). The children, playing in pairs, had to extract cubes from a container and were rewarded for how many cubes they collected. They then had to decide how those rewards would be shared (with no adults in sight).
The German kids were much more likely to share the rewards in proportion to how many cubes each child had collected, whereas the children from the egalitarian culture were more likely to share them equally. Children from the gerontocratic culture were rather idiosyncratic in how they shared the rewards (just respecting your elders doesn’t seem to help kids learn a consistent notion of fairness!). The children spontaneously stuck to the ideas of fairness inherent in their culture, suggesting that evolution equips us with a propensity to learn about the concept of fairness, but doesn’t necessarily dictate which version of fairness we will come to believe in.
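As a rough illustration of the two sharing rules the children were choosing between, here is a small sketch; the cube counts and the size of the reward are invented for the example and are not taken from the study.

```python
# Two ways of dividing a shared reward: in proportion to contribution (the
# 'meritocratic' rule) or equally (the 'egalitarian' rule). The cube counts
# and reward value below are invented for illustration.

def proportional_split(cubes, reward):
    """Each child's share reflects how many cubes they collected."""
    total = sum(cubes.values())
    return {child: reward * n / total for child, n in cubes.items()}

def equal_split(cubes, reward):
    """Everyone gets the same share, regardless of contribution."""
    return {child: reward / len(cubes) for child in cubes}

cubes = {"child_A": 3, "child_B": 1}  # child_A collected three times as many
print(proportional_split(cubes, reward=8))  # {'child_A': 6.0, 'child_B': 2.0}
print(equal_split(cubes, reward=8))         # {'child_A': 4.0, 'child_B': 4.0}
```

Both rules are internally consistent; which one feels ‘fair’ depends on the norms a child has absorbed.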
While concepts of fairness might differ around the world, most Western societies, like Germany, have a predominantly meritocratic notion of fairness. But that still doesn’t mean people within those societies agree on what is fair, nor does it help us understand why they vote the way they do. To do that we need to delve deeper into the assumptions people make about why things are the way they are.
Let’s go back to economic inequality: do you attribute someone’s wealth to the actions of that individual or their circumstances in life?
Generally speaking, in America Republicans are much more likely to think that differences in wealth are a direct reflection of the amount of work put in, while Democrats believe that wealthier people tend to have had more advantages in life than others.9
This could explain why some of us support more generous welfare programmes (because people are poor for reasons beyond their control) and some of us oppose higher taxation policies for the rich (they are only rich because they have worked harder).
Why do we see things so differently depending on our political outlook?
The answer to this isn’t totally clear, but there are a few clues. One very basic observation is that those on the left (at least in surveys in the USA) tend to have a slightly more accurate perception of how large wealth differences actually are.10 It’s hard to say why this is: does their heightened concern about inequality make them more aware of its scale, or does their more accurate perception of the problem make them more concerned about it?
Another helpful way to think about the fairness of inequality comes from the work of US political scientist John Alford and colleagues. Having researched whether our political views are determined by genetics, they found evidence that genetic make-up is in fact more important than environmental or familial factors.11 They went on to argue that we should rethink how we talk about left and right or liberals and conservatives, to try to more accurately characterise the underlying psychological differences between them. They claim it is more helpful to think about two types of thinking style: ‘contextual’ vs ‘absolute’.
Absolutist thinkers are more likely to see something as inherently right or wrong, whereas contextualist thinkers are more likely to allow for circumstances that could have influenced someone’s behaviour.
Is there any evidence to support this way of categorising people? Well, let’s take a look at different attitudes to punishment on the left and right – like inequality, this is a moral issue where we make intuitive judgements about what’s fair.
As we’ve seen, although people tend to agree that certain moral violations should be punished, we often disagree on the nature of those punishments. For example, those on the political right are more likely to support the use of the death penalty than those on the left.
Fascinatingly, whether or not people support capital punishment is linked to whether they think the crime is a result of the individual’s character or their circumstances.12 Much as they do when accounting for wealth or poverty, conservatives (or absolutists) are more likely to hold the individual responsible for their own actions and to view the death penalty as justified, while liberals (or contextualists) tend to point to mitigating circumstances or contextual factors and oppose the death penalty. Tony Blair famously moved the UK Labour Party closer to the centre of British politics by acknowledging both of these perspectives on crime, promising to be ‘tough on crime, tough on the causes of crime’.
Interestingly, conservatives (absolutists) and liberals (contextualists) also seem to differ when it comes to the factors that influence human behaviour. Voters on the right are more likely to think that behaviour and personality are determined by genes,13 whereas those on the left favour contextual factors like our upbringing or environment. So if you lean to the right, you may be more inclined to agree with Alford that your political choices might be influenced by your genes.
Our intuitions about what is fair – economically, or in terms of punishment – seem to be related to this underlying psychological difference: a style of thinking we might not even be aware of.
In 2012, Barack Obama gave a speech in which he said: ‘If you’ve got a business, you didn’t build that.’ The phrase was taken out of context, but his point was that if you have a business that transports goods, you didn’t build the roads; if you rely on educated workers, you didn’t educate them yourself. As Obama continued, ‘If you’ve been successful, you didn’t get there on your own’, but relied on a system ‘that allowed you to thrive’. You may well have worked hard, but that can’t explain success on its own, because there are plenty of people out there who work hard. Perhaps it is a little clearer now why this ‘contextual’ argument resonated with those on the left but proved controversial on the right.
These intuitions – and the way we think about individuals and their contexts – are also connected to our views on all kinds of topics, including immigration.
According to the absolutist/contextualist framework, those who are more likely to oppose immigration are also more likely to think that we all have a relatively fixed character that determines how we behave. So far this has only been demonstrated in Australia,14 but it suggests one reason why some people are more opposed to immigration: they think immigrants might find it hard to adjust or integrate into a new society, which could result in tension or a lack of social cohesion.
But this is only a small part of it. To really understand controversial topics like immigration and nationalism, we have to look at what else shapes our morality.
Social psychologist Jonathan Haidt has argued that it’s not just fairness we humans have developed a sensitivity to; there are all sorts of other factors, such as how we feel about the social groups we are part of (our in-groups) and our attitudes towards authority.
We’ve seen why we might have developed a sense of fairness; it is not quite so clear what has driven us to develop traits such as group loyalty. But again, it may come down to our selfish genes trying to get into the next generation.
Perhaps at some point in our evolutionary history, our survival (and that of our genes) was critically dependent on the survival of our group. For thousands of years, humans tended to live in small groups that relied on collaborative activities such as hunting and gathering, and, of course, fighting against other small groups. The way our behaviour evolved during that time could well have shaped the way we think about politics today.
Whether or not this theory is correct, Haidt has collected a range of evidence from a variety of countries, including the USA, UK and Canada, that suggests people who identify as left and right wing differ substantially in the way they see two key moral principles: group loyalty and respect for authority.15
Haidt and his colleagues found that people who identify as conservative are more likely to endorse statements like: ‘Loyalty to one’s group is more important than one’s individual concerns’ and ‘The government should strive to improve the well-being of people in our nation, even if it sometimes happens at the expense of people in other nations’. They argue this reflects the greater moral emphasis conservatives place on loyalty to one’s in-group, alongside other values including fairness, protecting people from harm and respect for authority.
By contrast, they find evidence that liberals only really emphasise two moral values: fairness and protecting people from harm. So liberals are less likely to agree that ‘Loyalty to one’s group is more important than one’s individual concerns’, but do tend to endorse statements such as: ‘Justice, fairness and equality are the most important requirements for a society’ and ‘Compassion for those who are suffering is the most crucial value’.
This doesn’t mean that conservatives are ‘more moral’ than liberals, just that they display moral sensitivities about a wider range of topics.
This difference could help us to understand the reasons behind the rising tide of nationalism around the world in recent years, and why it is such a polarising issue.
With the UK voting to leave the European Union, Donald Trump promising that ‘from this day forward, it’s going to be only America first’, and Marine Le Pen’s Front National achieving a historic share of the vote in France’s presidential elections, we are witnessing a resurgence in nationalist sentiment. But while some of us rally behind nationalist politicians and policies, others are actively put off by them. Perhaps this is partly because we disagree over how important it is to be loyal to one’s in-group. If the idea of putting your country first speaks to you – or, conversely, leaves you cold – the reason could lie in your moral values.
If conservatives have a broader range of moral intuitions than liberals, do right-wing parties have a political advantage? It is possible. They share moral concerns about fairness with those on the left (even if they may disagree on what is ‘fair’) but can also appeal to moral values that may be neglected by the left, such as nationalism. Interestingly, one of the few left-wing parties to have made significant gains in recent years is the Scottish National Party, which after years in the political wilderness managed to combine its policies on improved state welfare and free university tuition with the promotion of a sense of national identity.
If we look at the Brexit referendum in 2016, immigration and nationalism were clearly central issues, and I suspect moral intuitions played a huge role in determining how people voted. In the past, left-wing parties have frequently failed to address people’s concerns over these issues. Professor Tariq Modood at Bristol University is very critical of the left for this reason, and argues that it has basically given the right a free hand to set the agenda on nationalism, rather than trying to develop its own positive vision.
Dismissing concerns about national identity or immigration as racist or nationalistic may or may not be fair, but it certainly doesn’t seem to be helping the left win any arguments. This sense of dismissal (and its consequences) is perhaps epitomised in Hillary Clinton’s famous description of many of Trump’s supporters as a ‘basket of deplorables’.
The various moral sensitivities that shape our political outlook are not easy to pinpoint or explain to others. Sometimes we don’t even know ourselves why we intuitively think something is right or wrong. But understanding that our outlooks are based on some crucial differences in the way we interpret concepts of fairness, for example, can help us not only to understand the political choices we make, but also to try to comprehend the choices made by people with opposing views. You might still disagree, but at least it might be a little clearer what you are disagreeing about.