7

Everyone Has a Price

In 1911 Frederick Winslow Taylor, an aristocratic Philadelphian, published The Principles of Scientific Management. Later dubbed ‘the Isaac Newton of the science of work’ by 1970s management guru Peter Drucker, Taylor was arguably the world’s first management consultant. His book paved the way for what are now mainstream management techniques to improve worker efficiency. But Taylorism, as it became known, had a shaky start.

The Watertown Arsenal in Massachusetts was a US Army-run facility used mainly as a factory for manufacturing artillery-gun carriages. Taylor had been consulted on how to increase the productivity of the workers and stop ‘soldiering’, a word which in those days meant shirking or malingering. Taylor’s assistant patrolled the factory floor armed with a stopwatch, until one worker refused to be timed. On 11th August 1911 this worker was fired for insubordination and everyone walked out in protest. It was the first strike against Taylorism, just months after Taylor’s Principles was published. But without the official support of the union, the strike lasted only a week, while most of the operational changes introduced by Taylor were retained.1 And Taylor had the last laugh on the Watertown site, because the Arsenal is today an office complex housing the Harvard Business Review, a leading journal of management science, much of which derives directly from Taylorism.

Yet the aftermath of the strike tells another story. Since the Arsenal’s workers were federal employees, they had the right to protest to Congress directly. They persuaded Congress to launch an investigation. One result of this was the US government’s ban on the use of the stopwatch to monitor workers in army workshops – to modern eyes, an astonishing political intervention. At the time, though, both supporters and critics of Taylorism saw it as a political and moral project, so the direct involvement of politicians would have surprised no one. Taylor did not see his ‘scientific management’ as morally neutral or apolitical. He argued that workers were simpletons whose behaviour had to be controlled by managers with superior intelligence. As Taylor explained to the Congressional committee: ‘I can say, without the slightest hesitation, that the science of handling pig-iron is so great that the man who is … physically able to handle pig-iron and is sufficiently phlegmatic and stupid to choose this for his occupation is rarely able to comprehend the science of handling pig-iron.’2

The Congressional investigation concluded that Taylorism had a dehumanizing effect on workers. And Taylor’s use of language like ‘social engineering’ and ‘social control’ – alongside pronouncements like ‘In the past the man has been first, in the future the system must be first’3 – did not help his cause, especially in the 1930s, with critics drawing parallels with fascist ideology. The perceived tyranny of Taylorism was perfectly captured in Charlie Chaplin’s 1936 satire Modern Times.

By the late 1940s the battleground in the appliance of science to controlling human behaviour had moved beyond the workplace, with the emergence of behaviourism in psychology. Behaviourists did lab experiments using rewards and punishments to train animals to behave in any way required. They believed the same carrot-and-stick techniques could be applied to humans. Behaviourists had much in common with the economists at think-tanks like RAND: a mechanical view of human nature, leading to a belief in the universal effectiveness of incentives to manipulate behaviour; a desire for a science of social control; and even some military applications – the leading behaviourist Burrhus Skinner designed a pigeon-guided missile for the US Navy.

Like Taylor, Skinner had a gift for garnering bad publicity. The cages inhabited by animals in his experiments (he taught pigeons to play table tennis and some of his students taught a pig to use a vacuum cleaner) became known as Skinner boxes. At around the same time Skinner invented a kind of sealed, heated crib, which looked like a large fish-tank on wheels, and which he tested on his newborn daughter. Skinner wrote about it for the Ladies’ Home Journal, the editors titled the article ‘Baby in a Box’, and from then on most people seemed to believe that a Skinner box could be used for babies as well as lab rats. The crib may have been harmless, but it looked creepy. Despite the controversy surrounding Skinner, behaviourism became more influential and by the 1960s had moved on from rats in mazes to reward tokens for infants and hospital patients with mental illnesses. The practicality and wide applicability of the approach have enabled its gradual entry into the political mainstream.

But in the last half century something has changed. Nowadays Taylorist management techniques are mostly seen as legitimate, apolitical interventions which work with the grain of human nature: the mainstream application of management science in the service of efficiency. Outside the workplace, too, our language is peppered with talk of ‘incentives’. The word has shifted meaning from a tool of social engineering loaded with moral and political significance to a neutral, objective term meaning nothing more than ‘motivations’. Yet if Taylor was right in talking in terms of social control, something must have got lost in the translation. Incentives, after all, are tools for getting someone to do something they would not choose to do otherwise. Even if employed for noble reasons, incentives are still instruments for the exercise of power.

‘You’re sort of like a robot, but in human form.’4 So speaks a manager at an Amazon warehouse, talking about their pickers, the Amazon staff who spend all day walking – or jogging – around their warehouses picking the stuff we ordered. The pickers wear small satnav devices which measure their productivity, instruct them on the most efficient route to take around the warehouse and send them texts telling them to speed up if they have indulged in time-wasting activities such as talking or toileting. This twenty-first-century form of Taylorism is just as aggressive as anything Taylor advocated, but our reaction to it has been transformed. The modern equivalent of banning the stopwatch is unthinkable: no contemporary government would ever forbid Amazon from measuring its pickers’ productivity. More generally, we have come to accept financial incentives in contexts which were once controversial. In several countries in Central and South America, governments have followed economists’ advice and introduced ‘conditional cash transfer’ schemes to encourage poor mothers to be better mothers: the details vary, but cash is offered to mothers who stop smoking, have their child vaccinated or improve their child’s school attendance. And incentivization pervades the education sector: universities accept donations on condition that they offer certain courses or, at least, include particular books on the reading lists; retailers pay schools to install confectionery or soft-drink vending machines in recreation rooms; schools pay seven-year-olds cash for each book they read and fine parents if their children arrive late.

What caused this unnoticed transformation in the way we think about incentives? Perhaps it is linked to a parallel transformation in economics. Free-market economics has become dominant, and many economists are big fans of incentives. The bestselling Freakonomics enthuses, ‘Economists love incentives … The typical economist believes that the world has not yet invented a problem that he cannot fix if given a free hand to design the proper incentive scheme.’5 Most economics textbooks extol the virtues of markets but increasingly define economics as the science of incentives, while in everyday life the boundary between markets and incentive schemes often seems blurred.

Yet for that great advocate of free markets, Friedrich Hayek, the boundary was clear and important. Hayek saw the problem of how to motivate workers as a question ‘of an engineering character’.6 He belittled those concerned with such issues as ‘social planners’, whose actions would violate the natural order of the market. From a Hayekian perspective, incentive schemes are damaging forms of social engineering which interfere with market forces rather than extending them. So this is not a straightforward story of ideas about incentives coming to prominence on the back of a shift towards free-market economics or the political Right.

The influence of modern economics on how we think about incentives has been more subtle. Economists have been at the forefront of the explosion in ‘incentives’ talk over the past few decades, a period in which incentives were rebranded, apolitically, as mere motivations. Labelling all our motivations as incentives suits some economists, because it is a sly way of reducing the rich complexity of human psychology to the one-dimensional motivation of homo economicus. So it is worth reminding ourselves that incentives are not just motivations: many motivations cannot be understood as incentives without distorting them beyond recognition. If I look after my dying mother, I do so out of love, responsibility or obligation. But we cannot describe these motives as my ‘incentives’ to care for her. Curiosity drives children to do new, sometimes dangerous, things. But we don’t say that curiosity was their incentive. Barack Obama probably inspired some black children to pursue political careers. We cannot replace ‘inspired’ with ‘incentivized’ in the previous sentence.7

As we shall see, the orthodox economists’ theory of motivation leaves us blind, unable to anticipate when incentive schemes will backfire and unable to see that alternative ways of getting people to do things are often better – alternatives involving open, honest attempts to persuade and respect for other points of view.

Their theory, though, is easy to summarize: everyone has their price.

BIG SHOT MEETS A YANKEE ACTRESS

There is an old joke about a rich and famous man who meets a woman at a party. Perhaps the earliest version, allegedly a true story, involved the British media tycoon and politician Lord Beaverbrook encountering a ‘Yankee actress’. The year was around 1937. (In subsequent retellings the big shot has been, among others, George Bernard Shaw, Groucho Marx or Winston Churchill.) The conversation goes something like this.

Big Shot: ‘So will you spend the night with me for £10,000?’

Actress: ‘Well …’

Big Shot: ‘How about £100?’

Actress: ‘What kind of person do you think I am?’

Big Shot: ‘We have already established that. We are just haggling over the price.’

As well as implying that you can get almost everyone to do almost anything if you pay them enough, this economic theory of motivation has a less obvious but equally dubious implication. Since money can substitute for all other motivations (if you pay enough), most economists see it as interchangeable with them: a neutral common currency in terms of which any motivation can be expressed. It is a one-dimensional picture of human motivation in which money simply adds to existing motivations or replaces them when they are absent. But money, and material benefits and costs more generally, cannot play this neutral role. In the real world money comes with psychic baggage. Just as a gift can trigger gratitude or resentment in the recipient, depending on how they perceive the motives of the giver, so financial carrots can trigger a range of responses in those on the receiving end of the inducement.

In early 1993 the Swiss government was trying to decide where a (low-radioactivity) nuclear-waste dump should be sited. One possibility was to locate it near Wolfenschiessen, a small Swiss village of 640 families. In hour-long interviews, over 300 residents were asked how they would feel if offered financial compensation to agree to the dump. Support for the local siting of the dump fell by more than half once the compensation was offered. Eighty-three per cent of those who rejected the money explained their rejection by saying that they could not be bribed.8 When some day-care centres in Haifa, Israel, introduced fines to discourage parents from collecting their children late, more parents arrived late.9 Like the Swiss villagers, the Haifa parents saw the financial incentive as trying to buy their compliance rather than persuade them. With a ‘price’ for lateness now established, the parents treated the fine as a fee – a fee which bought them the right to collect their children later.

Evidence such as this was greeted with shock by economists, although the possibility that financial incentives can be counterproductive has long been established among psychologists (who by the early 1970s had largely abandoned Skinner’s ideas). Introducing explicit financial incentives to get someone to do something can undermine or displace their existing intrinsic motivation: in psychologists’ language, the intrinsic motivation is ‘crowded out’.10 Often, this intrinsic motivation includes a sense of moral obligation or duty – to work colleagues, your employer, your community or your country, depending on the context. In occupations such as medicine or teaching, careful interview-based research has confirmed beyond doubt what we surely already knew: nurses, doctors and teachers are strongly motivated by the intrinsic importance of what they do, an intrinsic motivation which can be undermined by clumsy financial incentives imposed by their employers. It is not just medicine and teaching. In Boston, Massachusetts, the head of the fire service noticed that firefighters were off sick more on Mondays and Fridays. He imposed a strict limit of fifteen sick days, with pay deductions for those taking more days off sick. Oh dear. The number calling in sick over the following Christmas and New Year period rose tenfold.11 We can see the same kind of response in occupations with less obvious intrinsic motivation. David Packard, the co-founder of Hewlett-Packard, described attitudes at General Electric, where he worked in the late 1930s: ‘the company was making a big thing of plant security … guarding its tool and part bins to make sure employees didn’t steal … Many employees set out to prove this obvious display of distrust justified, walking off with tools and parts whenever they could.’12

Just like more heavy-handed uses of power, financial incentives send a signal about the beliefs and motives of the people introducing them. If I feel manipulated like a puppet, tugged this way and that by the people pulling the strings, then I will respond by withdrawing cooperation, loyalty, gifts of unpaid overtime and other forms of altruistic behaviour. It is the familiar twenty-first-century tragedy we have seen in other chapters: people live down to economists’ cynical, distrustful expectations of them. The belief that ‘everyone has their price’ becomes self-fulfilling.

But the news cannot be all bad, because in some contexts financial incentives work just as intended by the economists who designed them. In 2002 Ireland introduced a small tax (15 euro cents) on plastic shopping bags. Within two weeks the use of bags had fallen by 94 per cent. The UK followed Ireland with a smaller tax phased in from 2011, leading to bag usage falling by around 80 per cent.13 Crucially, though, the financial incentive was not used in isolation. The government appealed to our sense of social obligation as well as the desire to avoid tax. A major publicity campaign explained how abandoned bags end up in waterways, damaging marine life, with the rest going to landfill and taking hundreds of years to decompose. And before the tax was imposed there was a national debate, leading to public support for the bag tax from most retailers and consumer groups as well as environmentalists.

Using a financial incentive in isolation is unlikely to be so successful. If the people pulling the strings make no attempt to engage us through explanation or reasoned debate, that sends two possible signals. Either they believe we are simply mercenary: our compliance can be bought and we don’t care whether we are being asked to do a good or bad thing. Or they believe we are stupid, in the sense of being unable to comprehend the good reasons for doing what we are being asked to do. Both signals tell us that we are not respected by the people with power over us.

Yet running through this is a positive lesson. Badly designed incentives can be counterproductive – but when handled carefully, with astute communication with the target group, they can work well. This is the message of Freakonomics and many behavioural economists: incentives send signals about the beliefs and motives of those offering them. And economists are familiar and comfortable with the idea that actions send signals.

Unfortunately, matters are more complicated. There are additional reasons why incentives can fail to work as intended. The mere presence of money can be a problem; when the money is removed the problem remains; and even if the incentives work as intended, they may still be a bad idea. Economists are much less comfortable with all that.

THE JEWISH TAILOR AND THE BLOOD DONOR

There is an old fable about a Jewish tailor who had recently opened a shop in town. The local bigots were determined to drive him away. They sent round a group of hooligans to jeer at him threateningly. The tailor thanked them and gave them some money. They laughed at him and left. Next day the hooligans returned to jeer, but the tailor told them he could not afford to give them as much money. The hooligans grumbled, but they took the money and left. On the following day the tailor apologized and said he could afford only one cent each this time. The hooligans told him they would not waste their time for just one measly cent. They left, never to return.

When financial incentives crowd out prior motives for some behaviour, the displacement or destruction of this prior motivation can be permanent. As a result, even if the financial incentives are subsequently withdrawn, the original behaviour does not return. At the Haifa day-care centres, when the late-collection fine was abolished after sixteen weeks, the number of late parents remained above the levels seen before the fines were introduced.14 The parents’ sense of moral obligation to pick up their children on time had been eroded.

One reason why the imprint of incentives can linger long after their removal is clear enough. People remember the implicit message underlying the incentives, about being untrustworthy, or incompetent, and so on. But even if no such negative signal accompanies the explicit incentive, the crowding-out effect can persist even after the incentive has gone. Why?

If we think about gifts again, a clue emerges. True, our reaction to a gift depends on the motives of the giver – but more obviously it depends on the nature of the gift. Orthodox economic theory implies that the best gift to receive is cash, because then you can buy exactly what you want.fn1 Yet cash gifts are uncommon because we all know that this misses the point. The best gifts express and celebrate something about the relationship between giver and recipient. They are about more than satisfying the wants of the recipient. After all, when we do receive cash gifts, many of us spend the money on nothing more exciting than household bills or a new vacuum cleaner.

Given the notable difference between cash and non-cash gifts, it seems likely that people perceive financial incentives as inherently different from equivalent non-financial ones, even if the motives of those offering the incentives are entirely benign. It is hard to test this hypothesis, not least because perceptions may be influenced by a variety of acquired beliefs. But in a recent experiment, when young children were shown coins and their function was explained in simple terms the children became less helpful towards others in their ordinary daily activities.15 People respond to cues suggesting the appropriate behaviour in the situation they find themselves in, and financial carrots and sticks are likely to cue the behaviour people use in their financial transactions – thinking like a consumer. Although we can sometimes act as ‘ethical consumers’, on average our relationships with sellers are both more anonymous and more short-lived than our relationships outside the market, in our communities, families and places of work. Economists have long celebrated the advantages of our one-off, anonymized, transactional relationships in the market. In entering the transaction we don’t need to look beyond the end of our nose, choosing just with our own interests in mind; and we always leave the transaction quits, with no outstanding obligations or responsibilities towards the other party. The result is that we are more likely to leave morality behind when thinking as a consumer, compared to how we think in the community, family and workplace. Psychologists call this ‘moral disengagement’ or ‘switching your ethics off’. Short-term, anonymous relationships do not trigger the strong feelings of sympathy and reciprocity which fuel our moral behaviour. So the mere presence of money changes our moral framing – how we see our situation – and increases the risk of our moral disengagement.

There is a further reason why incentives can crowd out our intrinsic motivation – and again, the damaging effects of this crowding out can linger long after the incentives themselves have been removed.

As with some other pivotal moments in this book, 1968 was a key date. Back then, Britain was unreservedly proud of its National Health Service and fully wedded to Keynesian government intervention in the economy. So the conclusion of a report from the Institute of Economic Affairs was truly shocking: state management of the blood supply did not meet the growing demand from hospitals. Instead people should be paid to give blood, because this would raise the supply. The proposal was immediately rejected; its major impact, instead, was to stimulate a book-length critique, The Gift Relationship, from a sociologist called Richard Titmuss. He compared blood-supply statistics in Britain with those in the United States, where in some states there were various forms of financial incentive to supply blood. Titmuss found not only that paying people to give blood reduced the supply but also that freely donated blood was of higher quality. When people were paid to give blood, they were more likely to conceal aspects of their medical history which would render their blood unacceptable.

The Gift Relationship was reviewed at length by Nobel Prize-winning economists Ken Arrow and Bob Solow. Both were somewhat puzzled by Titmuss’s findings and could see no reason why offering financial incentives might reduce the amount of blood donated. Subsequent economists have responded similarly: although recent research continues to support Titmuss’s view that paying for blood leads to a reduction in supply, some economists challenge the evidence simply on the grounds that it is inconsistent with textbook economic theory.16 Yet countries that have experimented with paying people to give blood find that once the payments are withdrawn blood donation does not return to the level before the experiment. It remains lower, suggesting that at least some of the intrinsic motivation to donate has been permanently crowded out.

One reason for the puzzlement of Arrow and Solow was the signal sent to potential donors by the financial incentive: surely it was a signal of social approval, a reward for good behaviour. Why would this deter donors? Easy, said Titmuss. Because they are not donors any more. They are sellers. Even a small payment to supply blood makes it seem more like a transaction than a gift: it is harder to see yourself as doing something altruistic when you are being paid. My enhanced self-respect from an act of altruism comes partly from the sacrifice involved. It’s not the same if I’m financially compensated for it.

Altruism is one thing; being a lone altruist is quite another. Acting altruistically can enhance your self-respect, but having that generosity exploited by others often has the opposite effect. Moreover, at a more unconscious level, psychologists agree that we learn social behaviour principally by copying others (just watch young children if you’re not convinced). We become more altruistic if we observe altruism in others. But a problem with incentives is that they make it hard to read the motives of others. When I see someone being paid after giving blood, were they being altruistic or just doing it for the money? The presence of incentives, then, makes it harder to see the altruism in others, and so I am less likely to act altruistically myself. And removing the incentives does not in itself remove the problem: I would need positive evidence of the altruism of others, rather than the mere absence of incentives, in order to copy it and ‘learn’ to be altruistic again.

Summing up, the offer of cash can deter blood donors, because it makes it harder to see yourself as altruistic and harder to observe altruism by other people. So we learn to be more selfish instead, a selfishness that can persist in the long term. Similar effects apply when blood donors are not exactly motivated by altruism, but something closer to civic duty or public-spiritedness: financial incentives make it harder to see yourself as doing your civic duty, and harder to see public-spirited behaviour in others. So we become more selfish.

Acts of altruism or public-spiritedness enhance our self-respect. More generally, it is the freedom to act, to make our own autonomous choices, which is crucial to self-respect. This brings us to a further reason why introducing incentives can backfire: they interfere with our autonomy. Even if the motives of the people pulling the strings are entirely benign, they are still trying to control and manipulate our behaviour. The adverse effect of incentives on autonomy has been studied in detail across a wide range of skilled occupations and professions from medicine to coding. There is compelling evidence that experienced surgeons, lawyers, academics and scientists strongly resist incentive schemes which interfere with their freedom to act in line with their expert judgement or which conflict with the norms of behaviour expected in their profession. While economists have lately come to accept that incentives should be used with special care in some professions, they disagree over whether the problem is more widespread. Many economists still assume that incentives don’t normally conflict with a worker’s sense of autonomy, because ordinary people don’t work to express their autonomy. They just do it for the money.

WHAT NOBODIES AND SOMEBODIES HAVE IN COMMON

Meet Luke. Luke worked as a ‘custodian’ (a cleaner or janitor) in a major US teaching hospital. One patient was a young man in a long-term coma. One day Luke cleaned the patient’s room, as normal. He cleaned it while the patient’s father, who had been keeping a vigil for months, was out of the room smoking a cigarette. On returning, the father angrily shouted at Luke to clean the room. Luke cleaned it again, without a murmur of complaint. In an interview with researchers studying working practices, Luke explained the incident: ‘I kind of knew the situation about his son … He wasn’t coming out of the coma … It was like six months that his son was here. He’d be a little frustrated, and so I cleaned it again. But I wasn’t angry with him. I guess I could understand.’ 17

Luke went on to explain that he saw his cleaning duties as just part of his job, which also involved making patients and their families comfortable, cheering them up and listening when they wanted to talk. Of course, none of this was in Luke’s official job description, which only mentioned cleaning tasks. It is easy to imagine the effect of introducing financial incentives to encourage Luke to focus just on his cleaning. To say that Luke’s intrinsic motivation might be ‘crowded out’ hardly begins to describe the problem. Luke’s desire to help patients and their families was not just another motivation, potentially competing with financial incentives, but central to how he saw his working life.

Luke’s story suggests that autonomy and identity matter not just in skilled occupations but in jobs often regarded as menial or mundane. Most of us don’t want to see ourselves as working just for the money. We are also trying, like Marlon Brando in On the Waterfront, to be somebody. We are constructing an identity, for ourselves and in the eyes of others. Of course, this applies outside the workplace too. We want to be free to make our own choices rather than be pushed by incentives – even if the pushing is done gently, politely and for a good cause.

Luke wanted to be a somebody to the patients and families in the hospital, rather than a nobody. His philosophy would have been recognized by a somebody among philosophers, Isaiah Berlin. But before we see what the janitor and the philosopher have in common we must confront the elephant in the room. For economists have a simple solution to the problem of incentives backfiring, whether that backfiring is due to the signal sent by the introduction of incentives, the moral disengagement that incentives can encourage or the challenge to autonomy they pose: increase the money involved.

The Visit of the Old Lady, by the great Swiss playwright Friedrich Dürrenmatt, tells the story of Claire Zachanassian. Young Claire becomes pregnant by her lover, Alfred, who lives in the same small town. When Alfred denies paternity, Claire goes to court, but Alfred wins the case by bribing two townspeople to lie. Years later Claire, now an old lady, returns to the town, which has fallen on hard times. Claire, however, is now the fabulously wealthy widow of an oil tycoon. She offers the town half a billion Swiss francs, and another half-billion to be shared among the residents. But there is a catch: the residents must kill Alfred. The mayor refuses and the townspeople are shocked. Claire says she will wait. Over time the townspeople buy lots of luxury goods on credit and accumulate debts. Finally, they agree to kill Alfred. Claire, satisfied she has now ‘bought justice’, gives the mayor a cheque and leaves the town with Alfred’s body.

Perhaps Dürrenmatt is right: financial incentives can work in the way predicted by orthodox economics – as long as they are big enough. If you offer enough money, people will do what you want, because the lure of the cash will outweigh any pesky moral scruples. Everyone has their price, after all. Still, for those of us uneasy about the spread of incentives, the economists’ chutzpah is breathtaking: the solution to the problem of incentives backfiring is more incentives – bigger, better, longer, harder.

But does it work? The Visit was Swiss fiction. We need Swiss fact. Let’s revisit the upstanding villagers of Wolfenschiessen, who said they would not be bribed to agree to a nuclear dump being located nearby. At the time of those interviews three other sites were being considered, but a year later the government had settled on Wolfenschiessen, whose residents were offered substantial compensation by the developer: $3 million per year for the next forty years. In July 1994 Wolfenschiessen decided in a village meeting to accept the offer and host the nuclear dump.

So is it as easy as that? Just sweep aside all need for a deeper, more realistic understanding of human motivation with a big wad of cash? Not so fast. An obvious snag is that big financial incentives are a costly way to get people to do things. Wolfenschiessen was a village of only 640 families, so $3 million per year amounted to $4,687 per family, more than a month’s salary even for the wealthy burghers of Wolfenschiessen, and around 120 per cent of Wolfenschiessen’s total annual tax revenue.

The real objection, though, is more fundamental. Suppose that if incentives are big enough, we can be sure they will work just as intended. Does that mean they are okay? And if so, how did we slip from Taylor’s vision of incentives as an instrument of social control to incentives as okay? Most economists and other supporters of incentives answer ‘yes’ to the first question. Incentives are morally irreproachable, they argue, because they involve voluntary exchange. No one is forced to do anything against their wishes. But this brings us to a contradiction at the heart of the argument for incentives. On the one hand, supporters recommend incentives over other forms of social control, such as regulation or coercion, on the grounds that they preserve freedom of choice. On the other hand, the successful use of incentives depends on the ability to control people’s behaviour – to induce them to respond to the incentive in a predictable way. In essence, then, the argument for incentives claims that people can be controlled while being free.

As with most seeming paradoxes, things are not what they seem. If a person can be readily, predictably, led to do something they would not otherwise do, we say that they are being controlled or manipulated. We do not describe them as ‘free’, even if they could have chosen to do otherwise. Real freedom requires more than the superficial ability to choose.

Enter the great philosopher Isaiah Berlin with the classic modern analysis of freedom, heavily influenced by his personal experiences. In 1920 the eleven-year-old Berlin left Bolshevik Russia with his family, driven away by oppression and anti-Semitism. Later, he noted the way in which totalitarian regimes equate freedom with the pretence of choice. I don’t act freely, Berlin argued, ‘if in a totalitarian state I betray my friend under threat of torture … Nevertheless, I did, of course, make a choice … the mere existence of alternatives is not, therefore, enough to make my action free.’18 This provided the background for Berlin’s definition of freedom (which he called liberty):

The ‘positive’ sense of the word ‘liberty’ derives from the wish on the part of the individual to be his own master. I wish my life and decisions to depend on myself, not on external factors of whatever kind … I wish to be somebody, not nobody; a doer – deciding, not being decided for, self-directed and not acted upon by external nature or by other men as if I were a thing, or an animal or a slave incapable of playing a human role … I wish, above all, to be conscious of myself as a thinking, willing, active being, bearing responsibility for my choices and able to explain them by reference to my own ideas and purposes.19

And if that is too high-falutin, just remember Luke. He had his own ideas and purposes regarding his job, he wanted to be self-directed in pursuing them and responsible for his choices, judged against his own standards.

The picture of humanity which emerges from the traditional economic theory of motivation does not respect our ideals of liberty and autonomy. Since I can be predictably manipulated by incentives, it cannot be said that my decisions depend only ‘on myself, not on external factors’. Nor am I ‘self-directed’. Incentive schemes, then, might work just as intended by their designers, with no crowding out – but still be morally wrong because they conflict with our values of liberty and autonomy.

And incentives can be morally wrong in other ways too.

When a judge is offered and accepts a financial incentive to let a guilty person go free, this is bribery, and is obviously harmful. That both the briber and the judge might be better off is irrelevant. In the film Indecent Proposal (1993), a billionaire is attracted to a happily married woman. He offers the couple a million dollars if she will spend a night with him. The billionaire is not aggressive and the offer is not exploitative, so when the couple accept it is a voluntary transaction. We need older, stronger language than ‘crowding out’ to describe what is going on in these cases: incentives can corrupt us. They can corrupt both the person offering the incentive and the one accepting it. Shakespeare saw how dangerous corruption can be – and how far people will go to resist it. In Measure for Measure, judge Angelo offers to spare Isabella’s brother’s life if she will sleep with him. Isabella, a novice nun, refuses, saying that it is better that her brother die once than that her soul be sacrificed for eternity.

But is there really any likelihood of something as serious as corruption arising from the more prosaic, everyday incentive schemes devised by economists? Yes, not least because lying is another form of corruption. We’ve seen that when people are paid to give blood they are more likely to lie about their medical history. And when teachers’ pay is linked to their students’ exam scores, more teachers will lie about how well their students performed.20

Of course, financial incentives are not always corrupting. But sometimes they undermine the opposite of corruption – the development of good character. This is not just a worry of idle philosophers: a central purpose of school is the development of good character in young people. We want students not just to do the right thing but to do it for the right reason. We want children to learn self-discipline, the ability to resist immediate temptations. Yet schools in Dallas paid seven-year-old schoolchildren $2 per book they read. The danger of cash incentives to read is that self-discipline is undermined. Children are led to think purely in terms of immediate pleasures and pains – whether the effort of reading is outweighed by the cash payment. They begin to believe that reading is ‘work’ rather than pleasure, something not worth doing for its own sake. Money, rather than learning to read, becomes the goal, so inevitably children focus on maximizing financial rewards, trying to cheat the system by picking shorter, easier books. With this way of thinking ingrained, it is not surprising that if the financial incentive is withdrawn, children respond by withdrawing their effort. And incentives do not encourage older children to take responsibility for their reading either. Just imagine the tweeny sneer: ‘If you’re not paying me, how can you expect me to read a book?’21

This is another illustration of why autonomy matters. It is the difference between doing something for its own sake or for your own internal reasons and doing it because of an external incentive. Children who choose autonomously, wanting to learn to read, will never stick with the easier books they can already read. It becomes boring – and there is no point in cheating if you are just cheating yourself.

THE WEIRD WORLD OF NUDGE

A widely discussed development in economics in recent years has been the emergence of behavioural economics. In essence, behavioural economics tries to study how people actually behave – in contrast to fantasies such as homo economicus which dominate orthodox economics. It uses ideas and methods from psychology, and it was two psychologists, Daniel Kahneman and Amos Tversky, who perhaps did more than anyone else to dislodge old orthodoxies in economics about how we think and choose. One big idea in behavioural economics began with Kahneman and Tversky’s Asian disease problem:

Suppose you are told that an unusual Asian disease is expected to kill 600 people in your country. Two alternative policies to combat the disease have been proposed. If Policy A is adopted, 200 people will be saved. If Policy B is adopted, there is a 1/3 probability that 600 people will be saved, and a 2/3 probability that no people will be saved. Do you favour Policy A or B?

But suppose instead the response to the same disease involves a choice between the following two policies: if Policy C is adopted, 400 people will die. If Policy D is adopted, there is a 1/3 probability that no one will die, and a 2/3 probability that 600 people will die. Do you favour Policy C or D?22

Kahneman and Tversky discovered that a large majority favour A when asked the first question, while a large majority favour D in answer to the second – even though the two questions are essentially the same. Policy A has the same outcome as policy C, and B has the same outcome as D. The exact wording or framing of the alternatives alters people’s choices; these framing effects turn out to be ubiquitous in a wide range of decision contexts. For psychologists, this was not surprising: of course our decision-making is affected by how the alternatives are described. But it was shocking news for orthodox economists. With their careful, meticulous experiments, Kahneman and Tversky forced economists to accept the reality of framing effects. And they had an equally powerful impact on how most economists thought about incentives.

First, Kahneman and Tversky made economists much more accepting of the possibility that incentives can be counterproductive. Until they came along, the evidence for crowding out and the possibility that incentives can backfire were a huge embarrassment to economists. What could be a more basic idea in economics than that people respond predictably, obviously, to money? Once there was too much robust evidence of crowding out to ignore, the only remaining possibility was to label this kind of behaviour ‘irrational’: a feeble response, but a correct one, if ‘rational’ is defined as what homo economicus would do. Kahneman and Tversky’s crucial contribution was to develop an explanation for what economists had previously called irrationality, in effect an entire theory of irrationality. This rescued crowding out from being an embarrassing anomaly: now it was just one among many types of ‘irrational’ human behaviour.

Second, some economists saw in Kahneman and Tversky’s framing effects an explanation of how incentives sometimes backfire and sometimes don’t. Incentives which are identical as far as economic theory is concerned – the same monetary value, and so on – can produce different results, depending on how they are described or framed.

Third, accepting the reality of counterproductive incentives suggested that another approach to policy-making was needed. But as the new behavioural economics began to filter through to policy-making circles, something strange happened. The central lesson of behavioural economics is that people make poor decisions – yet the policy innovation it provoked seeks to rely on precisely those poor decisions to bring about desirable outcomes. Welcome to the weird world of Nudge.

Nudge began with a 2008 book of that name by economist Richard Thaler and lawyer Cass Sunstein. Both of them had worked with Kahneman and Tversky, who had shown that real people do not act like homo economicus. Rather than weighing up all relevant considerations and carefully calculating the ‘optimal’ choice, people are guided by rules of thumb, intuition, impulse and inertia. The core idea behind Nudge is that rather than fighting these forces, we should use them, to steer or nudge people to make the choices they would want to make – the choices homo economicus would make, or at least something close. At first, Nudge looked like a passing fashion, just the latest idea from the policy wonks hanging around central government. But it didn’t go away. Sunstein worked for Obama in the White House, Thaler advised the Cameron government’s ‘Nudge Unit’ in the UK, and self-conscious Nudge policies are now being used in around 130 countries.23 Thaler won the Nobel Prize for economics in 2017.

Nudge enthusiasts almost always point to the same policy to illustrate the Nudge approach, its flagship success story – automatic enrolment in workplace pensions. A workplace pension has two big advantages over other forms of retirement saving: tax breaks and contributions from your employer. Yet many people fail to enrol in their workplace-pension scheme, even though they need to save for retirement one way or another. Simple inertia has long been seen as the culprit. It is easier to do nothing than to think about what to do, how much to pay in, and so on, not least because those decisions trigger uncomfortable thoughts about financial insecurity and old age. To tackle the problem, Sunstein and Thaler suggested a minor tweak, a gentle nudge. Why not make a pension contribution at some appropriate level the default or do-nothing choice for employees? Those who do not want to join the pension are still free to opt out.

It turns out that tweaks like this are available for almost every choice we make. In a school or workplace cafeteria the healthy foods can be displayed prominently and attractively – while the least healthy choices can be literally kept under the counter. At Amsterdam’s Schiphol Airport, each of the urinals in the men’s bathrooms has a picture of a fly drawn on it. Letters reminding people to pay their outstanding taxes are more effective when they point out that most people living nearby have already paid up. In the UK, economists were mystified that government subsidies for energy-saving loft insulation had little effect. Then the government’s ‘nudge team’ suggested that loft-clearance services should be subsidized instead. The overall cost to households increased but the demand for loft insulation rose significantly.

No wonder Nudge has proved popular with politicians of all shades: desirable social outcomes can be engineered without the heavy-handed use of financial incentives or coercion via laws and regulations. Instead, Nudge works with the grain of human nature and respects freedom of choice.

Or so it seems. The trouble with Nudge – and behavioural economics more generally – is that it still shares too many ideas with orthodox economics. Behavioural economics inspired by Kahneman and Tversky’s work is often labelled research on heuristics and biases. That last word reveals the underlying assumption of most behavioural economics: human decision-making is biased – in other words, flawed. While Kahneman and Tversky had launched a revolution in irrefutably demonstrating that people do not behave like homo economicus, they left unquestioned the equally central pillar of orthodox economics that people ought to behave like that – leaving homo economicus untouched as the ideal of what it means to be rational. And the Nudge enthusiasts leave it untouched too. So at the heart of their approach is the assumption that the ideal choice, the perfectly rational choice, is what homo economicus would do. Homo economicus has just one goal – the promotion of his or her own well-being or welfare. Exactly how welfare is maximized does not matter to homo economicus – ends justify means – so autonomy, being the author of your well-being, gets ignored. As Sunstein insists, ‘People speak in terms of autonomy but what they are doing is making a rapid, intuitive judgement about welfare.’24 And if their only goal is welfare-maximization, there is an obvious role for us, the ‘we’ that appears so frequently in the writings of Nudge wonks and behavioural economists. We, the Nudge experts, needn’t get bogged down in messy compromises between ethical values or wondering what ‘they’ really want. We, the Nudgers, already know what they should want, so we can get on with nudging them in that direction.

One practical problem here is that behavioural economics applies to elites too. Experts can mess up Nudging because they are vulnerable to the same cognitive flaws as the rest of us. Of course, experts can be incompetent in using other policy tools, including financial incentives. So Nudge may seem no worse than the alternatives in this respect. But with financial incentives, at least we know we are being ‘incentivized’ and can be on guard. In contrast, we often don’t know we are being Nudged. More than that, subterfuge can be essential to Nudging. Nudges frequently rely on covert manipulation of our behaviour or a degree of secrecy – such as putting the unhealthy foods in the cafeteria out of sight. This example might make covert manipulation appear relatively harmless, but in general Nudge is wide open to being exploited by mendacious regulators and politicians with darker motives. At the very least, the ‘harmless’ aura surrounding some nudges and the sheer subtlety of others suggests that Nudging may be less subject to democratic scrutiny than traditional regulations and more vulnerable to capture by special-interest groups.

Finally, there is a more basic objection to some seemingly harmless nudges like ‘auto-enrolment’ in pension schemes (making enrolment the default): other policies may be superior to Nudge. Some US evidence suggests that auto-enrolment may have reduced retirement saving.25 Default pension contributions in the US have often been set very low (around 3 per cent) and many employees who might otherwise have contributed more than 3 per cent stick with the default out of inertia. Bizarrely, many Nudgers assume that once the nudge draws people’s attention to their pension, employees will adjust their contributions to the ‘optimal’ level. They won’t stick with the default. This is an astonishing assumption from behavioural economists, who know the power of inertia in leading us to stick with the default. In any case, why not just compel people to save enough for retirement, either through private schemes or tax-funded public ones?

Again, the culprit is the default: the default adherence of the Nudgers to orthodox economics. They assume that, once given the appropriate nudge, people will default back to behaving like homo economicus. The Nudgers begin their argument by essentially dividing humanity into two groups. It’s us and them again. There are the slaves of impulse, inertia and rules of thumb. And then there are the smart people (guess which group the Nudgers think they are in). Then the Nudgers revert to orthodox economics, which holds that any form of government mandate or compulsion must harm homo economicus, because mandates force a change in behaviour – and the prior behaviour of homo economicus was already optimal. Nudges, on the other hand, impose no such harm, because they leave homo economicus free to do his own thing. As Sunstein and Thaler conclude: ‘we should design policies that help the least sophisticated people in society while imposing the smallest possible costs on the most sophisticated’.26 It is hard not to detect a note of condescension here. So what would more respectful Nudging look like? Indeed, let’s widen the question: what about the respectful use of incentives more generally?

BEYOND CARROTS AND STICKS

In several Indian cities, including Bangalore and Rajahmundry, a strange street performance can occasionally be seen. A band of drummers gathers, usually outside an office building, and beats out a fast-paced tattoo, often for more than an hour. The performances attract a large crowd, cheering enthusiastically. But the drummers don’t expect the crowd to pay anything. They want the company inside the office building to pay instead – unpaid taxes. This is an Indian way of giving tax defaulters an incentive to pay up, by naming and shaming them in a highly public way in the local community. It’s a method that has been successful in recouping unpaid taxes where other methods have failed.27

Incentives accompanied by moral messages can work well. Of course, different contexts and cultures demand different approaches to incentive design, but nevertheless there are some common ingredients. To begin with, incentive designers must not ignore the previous sentence: context and culture matter. Unfortunately, ignoring context is an article of faith for many economists, and behavioural economics suffers from that inheritance. One reason is physics envy: the desire of economists to emulate sciences such as physics. Scientists do controlled experiments, so behavioural economists prefer to do controlled experiments too. Since suitable conditions for controlled experiments almost never exist in real life, most behavioural economics research is conducted in the lab.28 Students play games or answer hypothetical questions about contrived situations, such as how they would respond to a financial incentive. But, of course, behaviour in a lab context is at best an incomplete guide to behaviour in the real world.

Paying full attention to context and culture is about understanding not just when incentives backfire but when they might work unexpectedly well too. Given the wide-ranging evidence of crowding out, where financial incentives undermine our existing intrinsic motivations, ‘crowding in’ comes as a surprise: incentives can enable or encourage people to act in line with their intrinsic motivations. Yes, schemes that pay children to read books have several potential drawbacks. But in some communities, the main reason why few children read is peer pressure. Reading is just not cool. Being paid to read might give children who would like to read an excuse to tell their peers, ‘I’m just doing it for the money.’ There is evidence pointing in this direction from a programme paying pregnant women cash to quit smoking.29 In private, the women told researchers that a big obstacle to quitting was peer pressure to continue smoking. The cash incentive gave them cover to quit publicly, because in this group of (mainly low-income) women, ‘I’m just doing it for the money’ was a badge of pride.

The moral and social meaning of our actions is never far from the surface. Unlike the pregnant women, blood donors seem unlikely to tell their peers that they are motivated by money, even if that’s the truth. Behavioural economists and other incentive designers need to be able to distinguish the pregnant women from the blood donors, so moral complexity and ambiguity cannot be ignored. That means going beyond labelling different descriptions of different situations as mere ‘framing effects’ and going beyond welfare maximization as the definition of what people want and what is best for society.

We can get another perspective on the morality of incentives by contrasting them with rewards and punishments. There is a big difference. We don’t say that athletes are ‘incentivized’ to win Olympic medals, because medals are rewards for excellence, not incentives. The difference remains even if the rewards and punishments are financial. The power of rewards and punishments, including financial prizes and fines, comes from being seen as deserved. For example, the prospect of a fine levied in a court of law is more powerful than a fee or other financial incentive of the same monetary value. The fine embodies a moral message, a public condemnation. Clearly, in some contexts, rewards and punishments could be both fairer and more effective than standard economic incentives. However, rewards and punishments earn their legitimacy only through an ongoing dialogue between those distributing them, those receiving them and wider society.

Such a dialogue may be more obvious in the case of rewards and punishments, but it needs to happen if any kind of nudge or incentive scheme is to succeed. What might it look like? In the case of nudges, it should include showing us the defaults and rules of thumb we unconsciously rely on to make choices, to help us overcome them if we want. For example, in one US workplace cafeteria, rather than hiding away the unhealthy foods under the counter, workers were given the opportunity to choose and pay for their lunch when they arrived at work in the morning. And just in case they didn’t realize why, they were told that people who choose their lunch early in the morning are much more likely to choose healthy foods than those who wait until lunchtime. You need to pay up front, too, to avoid backsliding. This is a nudge, but an open and transparent one that seeks to help people ‘lock in’ their willpower, at the time of day when they have it.

Perhaps the original, more covert nudge might get more people to eat more healthily, but there are other issues at stake. It is not just a matter of ‘what works’. If that were true, and all the Nudgers’ arguments about our error-prone decision-making hold, then we should take them to their logical conclusion: why mess about with Nudge when you can use stealthier and more powerfully manipulative techniques such as subliminal advertising? Equally important, our decision-making may not always be as error-prone as it appears. Respectful nudging acknowledges the limitations of homo economicus as an ideal of rationality. Yes, our mental operating system frequently deviates from that of homo economicus, but this is not always a bug. Sometimes it’s an upgrade.

Ultimately, in a democracy, we should cherish dialogue as a way for those on the receiving end of nudges or incentives to communicate with those pulling the strings. People on the receiving end may see incentives as the unnecessary and undeserved use of power. Workers subject to extremely intensive monitoring, like the pickers in the Amazon warehouse, have cause for complaint here. (Unlike the workers in Taylor’s day, they can only dream of help from the US Congress.) More subtly, people may legitimately disagree with the aims or purposes of those pulling the strings. In many healthcare systems doctors face a range of financial incentives introduced by governments (in public systems) or insurance companies (in private systems). Doctors may resist these incentives – not because they don’t care about maximizing patient welfare but because they believe the incentives steer treatment in ways which harm patient welfare. Moreover, when governments and insurers insist that the entire responsibility for treatment decisions lies with doctors, it looks like incentive schemes are being used by governments and insurers to exercise power without responsibility.

When, in the late eighteenth century, Benjamin Franklin brought back from France a diamond-encrusted snuffbox, a present from Louis XVI, the US Congress was troubled. They saw that it could change Franklin’s attitude to France, possibly without him realizing.30 Worrying about the unwelcome effects of incentives is not new. The new development is the infusion of ideas from economics: only in the last few decades have economists devoted much explicit attention to incentives.31 And the impact of economics has been highly significant. We are finally in a position to summarize it.

It is tempting to begin by supposing that financial incentives have become more ubiquitous because of the growing influence of economics. But how would we measure that ubiquity? What would it mean? True, incentives feel more ubiquitous partly because, as we’ve seen, economists have introduced incentive talk into everyday life – including in situations where that language seems horribly inappropriate. Yet the real impact of economics has been in influencing the kinds of incentives we choose, and how we justify them. First, economists have brought along their disciplinary default assumption: that people are selfish, and that little is lost by ignoring their altruistic and ethical motives. This leads directly to the assumption that everyone has their price. Restricting the evidence base – for example, using lab experiments rather than interviews in real-world contexts – makes it harder to develop a richer, more nuanced picture of human motivation. Second, economists consider only one value, one measure of success, in judging incentives: individual welfare or well-being, narrowly defined. They want to avoid opening up a Pandora’s box of moral questions: values of fairness, responsibility, autonomy and respect, among others. Their desire to keep the box shut may be understandable, but it’s impossible: values such as these play a central role in determining whether incentives work, and whether they have benign or malign side-effects. Third, economists see all incentives in terms of an exchange or transaction: I do what you want me to do in exchange for the incentive. Since the exchange is voluntary, it will not take place if either side is made worse off. And so, according to economists, incentive schemes cannot harm anyone’s welfare. Some economists take this conclusion one step further: incentives cannot be unethical. At the very least, their argument here attempts to silence any further ethical assessment of incentives.

Together, economists’ ideas have severely limited our ability to think clearly about incentives. But as we have seen, this can be turned around. By setting aside restrictive economic thinking, we can hope to develop incentives and nudges which are both effective and respectful.

Daniel Kahneman has a strong memory from around the age of seven, when he lived in Paris. This was Paris under Nazi occupation, when there was a 6 p.m. curfew. Hurrying home after curfew one evening, young Danny was terrified when an SS soldier approached him. All the more so because Danny had the yellow Star of David inside his sweater, whereas Nazi law required him to display it prominently. The SS soldier picked Danny up and hugged him. Then he put him down, showed him a picture of a young boy in his wallet and gave him some money. Danny went home thinking that people are deeply complex and unpredictable.32

People are deeply complex and unpredictable, and economists’ understanding of incentives has barely begun to grapple with the complexities, both of predicting how people respond to some new incentive or nudge, and of the many moral and political questions which arise in trying to get people to do things. We seem determined to preserve our autonomy as a central part of our identity, and so we resist incentives which interfere with it. Yet at the same time we yearn, childlike, for a paternalistic authority to take care of us, to make decisions for us. Sometimes, perhaps, we can square this circle by embracing autonomy when it is most precious to us, while consciously giving up decision-making elsewhere. When Barack Obama was president he noted: ‘I’m trying to pare down decisions. I don’t want to make decisions about what I’m eating or wearing. Because I have too many other decisions to make.’33

But often the contradiction cannot be avoided. We want autonomy, and we want a wise, benevolent authority figure to ensure we do what is best. The best we can do is to face this contradiction, openly and honestly. And either way we jump, as a small crumb of comfort, we still deserve respect. There is no need for incentives which deny even that.