In recent years, Britain’s Conservative Party has viewed its annual conference as a PR disaster waiting to happen. These meetings, traditionally held in seaside towns such as Brighton and Blackpool, see thousands of delegates from local Conservative clubs congregate in search of leaders finally willing to throw off the scourge of political correctness and modern values. Whether it be low-level racism emanating from the conference platform, the bland male greyness of the figures in the spotlight or the sight of elderly supporters expressing their disgust with same-sex relationships, potential embarrassment lurks around every corner.
But in 1977, two years into Margaret Thatcher’s leadership of the party, a dose of youthful and unexpected colour was injected into proceedings. A sixteen-year-old schoolboy with a thick Northern accent, William Hague, took the platform and elicited hoots of approval from the otherwise staid conference, including from the woman who would go on to be prime minister for eleven years.
In between tearing into the ‘socialist state’ of the Labour government of the day, the teenager gently ribbed his audience: ‘It’s alright for most of you – half of you won’t be here in thirty or forty years’ time’. He proceeded to identify the nub of the socialist threat. ‘There is at least one school in London’, he announced, ‘where the pupils are allowed to win just one race each, for fear that to win more would make the other pupils seem inferior. That is a classic illustration of the socialist state, which draws nearer with every Labour government.’
Twenty years later, Hague was the new leader of his party. He never got to taste the electoral victory as leader that his heroine had over the course of the 1980s. But he would no doubt have been delighted with how British society had developed in the meantime. After twenty years of Thatcherite policy-making, the ‘socialist state’ was scarcely discernible anywhere, least of all in Tony Blair’s recently elected Labour government. A pro-business, free-market creed had taken hold across the Western world. And in keeping with the teenage Hague’s vision, the political appeal of competitive sport had never been higher.
During the long economic boom that lasted from the early 1990s up until the banking meltdowns of 2007–08, sport was the great unquestionable virtue for political leaders everywhere. Attracting international sporting contests, such as the FIFA World Cup and Olympics, to particular cities became a cause célèbre for political elites who hoped to bask in the reflected glory of successful professional athletes. As prime minister, Tony Blair took to the sofa of the BBC’s flagship football programme to chat informally about the skills of his favourite midfielder. His successor, Gordon Brown, tried to get in on the act, using his first day in 10 Downing Street to give a speech citing his school rugby team as his abiding inspiration. And when his authority was tottering in the summer of 2008, Brown returned to Hague’s original theme, throwing his weight behind more competitive school sport. ‘That is the spirit we want to encourage in our schools,’ he declared, ‘not the medals for all culture we have seen in previous years, but more competition’.
Meanwhile, there was apparently little that sporting metaphors couldn’t justify. Every further inflation of executive pay was explained in terms of maintaining a ‘level playing field’ in a ‘war for talent’. When pressed by an interviewer in 2005 about the rising inequality that his government had overseen, Tony Blair responded that ‘it’s not a burning ambition for me to make sure that David Beckham earns less money’, despite the fact that football had nothing to do with the question.1
Even after the epic failure of the neoliberal model in 2008, Britain’s political class has returned to this rhetoric, announcing that the ‘global race’ requires that welfare be slashed and labour markets further deregulated. The need to entrench ‘competitiveness’ as the defining culture of businesses, cities, schools and entire nations, so as to outdo international rivals, is the mantra of the post-Thatcher era. A science of winning, be it in business, sport or just in life, now brings together former sportsmen, business gurus and statisticians to extend lessons from sport into politics, from warfare into business strategy, and from life coaching into schools.
But as the teenage Hague imagined the future thirty or forty years hence, there was one defining trend of the new era that neither he nor anybody else could foresee. It transpires that competition and competitive culture, including that of sport, are intimately related to a disorder that was scarcely discussed in 1977 but which had become a major policy concern by the end of the century. As the 1970s drew to a close, Western capitalist countries stood on the cusp of a whole new era of psychological management. The disorder at the heart of this was depression.
One way of observing the relationship between depression and competitiveness is in statistical correlations between rates of diagnosis and levels of economic inequality across society. After all, the function of any competition is to produce an unequal outcome. More equal societies, such as Scandinavian nations, record lower levels of depression and higher levels of well-being overall, while depression is most common in highly unequal societies such as the United States and United Kingdom.2 The statistics also confirm that relative poverty – being poor in comparison to others – can cause as much misery as absolute poverty, suggesting that it is the sense of inferiority and status anxiety that triggers depression, in addition to the stress of worrying about money. For this reason, the effect of inequality on depression is felt much of the way up the income scale.
Yet there is more to this than just a statistical correlation. Behind the numbers, there is troubling evidence that depression can be triggered by the competitive ethos itself, afflicting not only the ‘losers’ but also the ‘winners’. What Hague identified as the socialist fear, that competition makes many people ‘seem inferior’, has proved far more valid than even left-wing 1970s schoolteachers could have imagined: competition does not merely make people seem inferior; it also tells them that they are inferior. In recent years, there has been a flurry of confessions from professional sportspeople about their battles with depression. In April 2014, a group of prominent ex-sportsmen in the UK penned an open letter urging ‘sporting directors, coaches, and leaders of development programmes, to attend to the development of “inner fitness” alongside “athletic fitness”’, to protect professional sportsmen from this epidemic.3
A study conducted at Georgetown University found that college footballers are twice as likely to experience depression as non-footballers. Another study discovered that professional female athletes display personality traits similar to those of people with eating disorders, both being linked to obsessive perfectionism.4 And a series of experiments and surveys conducted by the American psychologist Tim Kasser has revealed that ‘aspirational’ values, oriented around money, status and power, are linked to a higher risk of depression and a lower sense of ‘self-actualization’.5 Wherever we measure our self-worth relative to others, as all competitions force us to, we risk losing our sense of self-worth altogether. One of the sad ironies here is that the effect of this is to dissuade people, including schoolchildren, from engaging in physical exercise at all.6
Perhaps it is no surprise, then, that a society such as America’s, which privileges a competitive individual mindset at every moment in life, has been so thoroughly permeated by depressive disorders and demand for antidepressants. Today, around a third of adults in the United States and close to half in the UK believe that they occasionally suffer from depression, although the diagnosis rates are far lower than that. Psychologists have shown that individuals tend to be happiest if they credit themselves for their successes, but not for their failures. This might sound like a symptom of delusion, but it is arguably no more delusional than a competitive, depressive culture which attributes every success and every failure to individual ability and effort.
Hasn’t America always been a competitive society? Isn’t that the original dream of the settlers, the Founding Fathers and the entrepreneurs who built American capitalism? This myth of society-as-competitive-sport surely dates back far earlier than the late 1970s; and yet it was only in the late 1970s that the epidemic of depression first took hold. It seems extraordinary now to consider that, in 1972, British psychiatrists were diagnosing depression at five times the rate of their US counterparts. And as recently as 1980, Americans still consumed tranquilizers at more than twice the rate of antidepressants. What changed?
The sixteen-year-old Hague had taken the conference platform at a turning point in the history of economic policy-making in the Western world. According to the most respected measure of income inequality, Britain has never been more equal than it was in 1977.7 But at the same time, the case for market deregulation was becoming increasingly credible, urged on by corporations that felt that they had become victimized by regulators, unions and consumer pressure groups.8 Persistently high inflation had led a number of governments, including Britain’s, to experiment with ‘monetarism’, an attempt to control the amount of money in circulation which also threatened economic growth and jobs. Thatcher and Ronald Reagan were waiting in the wings to usher in the era that would become known as ‘neoliberalism’.
One way of understanding neoliberalism is to examine how things progressed from there: the spiralling executive pay, the unprecedented levels of unemployment, the growing dominance of the financial sector over the rest of the economy and society, the expansion of private sector management techniques into all other walks of life. Analysing these trends is important. But it is also important to understand how and why they were possible, and that involves turning in the opposite direction, to the twenty-year period which preceded young Hague’s call to arms. It is during those two decades that many of the critical ingredients of neoliberalism would shift from the outer margins of intellectual and political respectability to becoming the orthodoxies of a new era. Among these were a renewed reverence for both competitiveness and the management of happiness.
At the heart of the cultural and political battles of the 1960s was an acute relativism which attacked the roots of moral, intellectual, cultural and even scientific authority. The right to declare some behaviours as ‘normal’, certain claims as ‘true’, particular outcomes as ‘just’, or one culture as ‘superior’ was thrown into question. When the traditional sources of authority over these things attempted to defend their claims, they were accused of offering just one partial perspective, and of using their own parochial language to do so. In place of some values being ‘better’ or ‘truer’ than others, there was simply conformity on the one hand and difference on the other.
The core political and philosophical questions posed by the 1960s were these: how are publicly legitimate decisions to be taken, once there are no commonly recognized hierarchies or shared values any longer? What will provide the common language of politics, once language itself has become politicized? How will the world and society be represented, once even representation is considered to be a biased and political act? The problem, from a governmental point of view, was that the reach of democracy was extending too far.
Jeremy Bentham’s vision of a scientific, utilitarian politics was initially motivated by an urge to cleanse legal process and punishment of the abstract nonsense that he believed still polluted the language of judges and politicians. In that sense, he hoped it would rescue politics from philosophy. But viewed differently, it could also serve a different function. The recourse to mathematical measurement could also rescue politics from excessive democracy and cultural pluralism. The Benthamite emphasis on a robust and scientific measure of psychological welfare reappeared in the wake of the 1960s, in various guises, some of which were associated with the counter-culture, others of which were ostensibly being peddled by conservatives. But they succeeded politically to the extent that they could claim to sit outside the fray. What they shared was an attempt to use numbers as a means of recreating a common public language.
In a world where we cannot agree what counts as ‘good’ and what counts as ‘bad’, because it’s all a matter of personal or cultural perspective, measurement offers a solution. Instead of indicating quality, it indicates quantity. Instead of representing how good things are, it represents how much they are. Instead of a hierarchy of values, from the worst up to the best, it simply offers a scale, from the least up to the most. Numbers are able to settle disputes when nothing else looks likely to.
At its most primitive, the legacy of the 1960s is that more is necessarily preferable to less. To grow is to progress. Regardless of what one wants, desires, or believes, it is best that one gets as much of it as possible. This belief in growth as a good-in-itself was made explicit by some subcultures and psychological movements. Humanistic psychology, as advanced by Abraham Maslow and Carl Rogers, attempted to reorient psychology – and society at large – away from principles of normalization and towards the quest for ever greater fulfilment.9 Individuals were perceived to be hemmed in by the dull conformity of 1950s culture, which blocked their capacity to grow. To assume that there was a ‘natural’ or ‘moral’ limit to personal growth was to fall back into repressive traditions. It wasn’t long before corporations were making the identical argument about the malign impact of market regulation on profit growth.
The first-ever attempt to compare the happiness levels of entire nations was conducted in 1965, by the former pollster to President Roosevelt, Hadley Cantril.10 In collaboration with the Gallup polling company, Cantril surveyed members of the public around the world in an entirely new way, which he termed the ‘self-anchoring striving scale’. Pollsters had historically been interested in how individuals felt towards specific products, policies, leaders or institutions. Cantril’s innovation was to ask them how they felt about their lives, relative to their own aspirations. Attitudinal research had invited people to look outwards upon the world and express their opinion as a number. Cantril asked them to look inwards upon themselves and do the same. This was a landmark in the development of contemporary happiness studies. But in the notion of ‘self-anchored striving’, it also hinted at the loneliness and aimlessness of a society with nothing but private fulfilment as its overarching principle.
The problem is that even a society of self-actualization and growth still needs some form of government and recognized authority. Who will provide it? Where will the expertise come from, to write the ground rules of this new growth-obsessed, relativist society?
What we witness, in the period from the late 1950s to the late 1970s, is the rise of a new breed of expert, capable of reconstructing authority for this new cultural landscape. Unlike the scientific and political authorities that they – often deliberately – displaced, their authority was devoid of the traditional moral baggage of professionalism, and rooted instead in a dispassionate ability to measure, rank, compare, categorize and diagnose, apparently uncluttered by moral, philosophical or social concerns. The old experts carried around notions such as the ‘public interest’, ‘justice’ and ‘truth’. As Bentham might have put it, they were victims of the ‘tyranny of sounds’ that theory exerts over the mind. The new experts were simply technicians, applying tools and measures which they were proud to declare were ‘theory neutral’.
At a time when political disputes were raging to the point of violence and beyond, dispassionate scientists, who were simply qualified to measure and to classify, were an attractive new source of authority. Crucially, this ethos was both counter-cultural and conservative at the same time: counter-cultural, because it knocked the old establishment authorities off their perch, and conservative because it lacked any vision of political progress of its own. In that respect, these experts offered an exit route from the ‘culture wars’. In the biographies of a handful of scholars who moved from the periphery of American academia circa 1960 to become the architects of a new competitive-depressive society by 1980, we can see the seeds of neoliberalism being sown.
Bentham in Chicago
There is something a little eerie about Chicago’s Hyde Park neighbourhood. Its tree-lined streets of late-nineteenth-century houses feel typical of many traditional upper-middle-class American suburbs. At its heart sits the great University of Chicago, mimicking the gothic style of an Oxford college, complete with mediaeval turrets and stained-glass windows. Wandering around the leafier parts of Hyde Park, where ivy creeps up walls and lawns are groomed immaculately, the visitor could be forgiven for forgetting where they actually were. A reminder comes in the form of the emergency phones, mounted on white posts on every corner in and around the university, with a blue light on top. Hyde Park is a sanctuary of peace and scholarship, but it lies on Chicago’s South Side, and visitors are advised against straying too far in any single direction on foot.
This cocoon in which the university sits was a significant factor in the development of the ‘Chicago School’ of economics, which was instrumental in the design and implementation of the neoliberal policy revolution. Chicago itself is 700 miles from Washington, DC, and 850 miles from Cambridge, Massachusetts, home to Harvard and MIT, the original bastions of American economics. Not only were Chicago School economists tightly congregated in Hyde Park, they were also several hundred miles from the core of the political and academic establishments. They had little choice but to seek debate with one another, and for three decades after the end of World War Two, they engaged in this with a rare fury.
The scholars who became known as the Chicago School began to cluster around the leadership of economists Jacob Viner and Frank Knight during the 1930s. By the late 1950s, they had grown into a tight-knit family. In one case, the family ties were quite literal: Milton Friedman married Rose Director, sister of Aaron, who was the linchpin of the post-war Chicago School. Aside from a certain geographic isolation, these economists had a number of intellectual and cultural traits in common. Among these was the sensibility of the outcast.
Until cracks appeared in the previously dominant Keynesian policy programme during the early 1970s, Chicago was rarely taken entirely seriously as a centre of economics; even as the Reagan revolution unfolded, Harvard and MIT offered it recognition only grudgingly. In time, however, the Chicago economists began a steady accumulation of Nobel Prizes. Friedman, who grew into the status of a conservative celebrity as the 1960s wore on, was the son of Jewish immigrants and boasted of his lack of establishment credentials. Gary Becker, another prominent member of the school, admitted that they all had a ‘chip on their shoulder’.11 Their iconoclasm was fuelled by the belief that America was run by a northeastern elite of liberal intellectuals who simply assumed their right to rule.
Following from this was a shared suspicion of government. One way in which this was aired was via the application of economic analysis to the behaviour of law-makers and government bureaucrats, to demonstrate that they were just as self-interested as businesses or consumers in a marketplace. The work of George Stigler, known as ‘Mr Micro’ to Friedman’s ‘Mr Macro’ (a joke on account of the microeconomist Stigler being over a foot taller than his macroeconomist friend), turned the spotlight of economic analysis away from markets and towards those in Washington who claimed to act in the public interest.
Suspicion of government is not necessarily the same thing as being anti-state, and so it proved. In the most controversial episode of a controversial career, Friedman visited Chile in spring 1975 to offer advice to the autocratic Pinochet regime. For a man who professed anarchist sympathies, this engagement with a military dictator appeared hypocritical to say the least. Friedman simply defended himself as someone who was in pursuit of scientific knowledge and willing to share it with whoever was interested. In any case, the Chicago School complaint against governments was not that they had too much power, but – à la Bentham – that they used it in an unscientific fashion. In short, policy-makers needed to listen to economists more closely, a view that reveals the most distinctive Chicagoan trait of all: the fundamental belief that economics is an objective science of human behaviour which can be cleanly separated from all moral or political considerations.
At the root of this science lay a simple model of psychology that can be traced back via Jevons to Bentham. According to this model, human beings are constantly making cost-benefit trade-offs in pursuit of their own interests. Jevons explained the movement of market prices in terms of such psychological rationality on the part of consumers, who are constantly seeking more bang for their buck (or less buck for their bang). What distinguished the Chicago School was that they extended this model of psychology beyond the limits of market consumption, to apply to all forms of human behaviour. Caring for children, socializing with friends, getting married, designing a welfare programme, giving to charity, taking drugs – all of these apparently social, ethical, ritualized or irrational activities were reconceived in Chicago as calculated strategies for the maximization of private psychological gain. They referred to this psychological model as ‘price theory’ and saw no limit to its application.
Nobody seized on the implications of this more eagerly than Gary Becker. Today, Becker is known for having developed the notion of ‘human capital’, a concept that has helped shape and justify the privatization of higher education by demonstrating that individuals receive a monetary return from ‘investment’ in their skills.12 More diffusely, Becker’s influence has been felt in an approach that reduces all moral and legal questions to problems of cost-benefit analysis. Individuals are addicted to drugs? The price of the drugs is obviously too low, or perhaps the pleasure received from them is too high. Shoplifting is on the rise? The penalties (and the risk of being caught) are obviously too low; but then again, maybe it makes more sense to endure the shoplifting than to invest money in closed-circuit television and security guards.
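To make the style of reasoning concrete, here is a minimal sketch of the expected cost-benefit calculus that Becker-style ‘price theory’ attributes to a would-be shoplifter: offend whenever the expected gain outweighs the probability-weighted penalty. The figures and the function below are illustrative assumptions of mine, not anything drawn from Becker’s own models.

```python
# Illustrative sketch of Becker-style 'price theory' applied to shoplifting.
# All figures are hypothetical; the point is the decision rule, not the values.

def expected_net_gain(value_of_goods, prob_of_capture, penalty):
    """Expected payoff of the act: gain minus the probability-weighted penalty."""
    return value_of_goods - prob_of_capture * penalty

# A 'rational' offender in this model offends whenever the expected payoff is positive.
goods = 50.0          # value of the stolen goods
p_capture = 0.05      # chance of being caught
fine = 400.0          # penalty if caught

payoff = expected_net_gain(goods, p_capture, fine)
print(f"Expected net gain: {payoff:.2f}")   # 50 - 0.05*400 = 30 > 0, so the model predicts theft

# On this logic, deterrence is just repricing: raise the penalty or the
# probability of capture until the expected payoff turns negative.
print("Deterred?", expected_net_gain(goods, 0.05, 1200.0) < 0)  # True: 50 - 60 < 0
```

The shopkeeper’s choice between tolerating theft and paying for cameras and guards is, on this view, exactly the same calculation run from the other side of the counter.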
The economists who carried out this work were always fiercely resistant to the idea that they were ideologically motivated. All they were trying to do, they reasoned, was identify the facts, free from the moral and philosophical baggage that cluttered the minds of their liberal rivals at Harvard and MIT or the politicos in Washington. The behaviourist ghost of John B. Watson hovered in the background, insisting that human activity could be understood in its entirety, given sufficient scientific scrutiny by a detached observer.
Their analyses were tested in the infamous pressure-cooker environment of the economics department’s ‘workshop’ system. In conventional academic seminars, a speaker reads a paper which the attendees are encountering for the first time. There is no time for an audience member to develop a very acute critique, even if she wants to. But Chicago’s ‘workshop’ system was different. Papers would be circulated in advance for reading, and authors would have only a few minutes to defend what they’d written before the roomful of colleagues would dive upon them, seeking holes in the logic, hunting down errors in the argument as if it were prey. ‘Where should I sit?’ a nervous speaker once asked Stigler, who was organizing the workshop. ‘In your case, under the desk’, quipped Stigler grimly.
What if the psychological model or ‘price theory’ itself was faulty? What if people don’t act like rational calculators of private gain, least of all in their domestic, social and political lives? What if economics isn’t fully adequate to understanding why people behave as they do? In the seminar rooms of Chicago economics, these were the questions that could never be raised. All regimes of radical, sceptical, anti-philosophical empiricism require certain propositions which are exempted from scrutiny. In Chicago, that proposition is price theory, which, from the lectures of Viner during the 1930s through to the current pop-economics fad of Freakonomics, has been the central article of faith for an institution that proclaims to have no need for faith.
An Archimedes who suddenly has a marvelous idea and shouts ‘Eureka!’ is the hero of the rarest of events. I have spent all of my professional life in the company of first-class scholars but only once have I encountered something like the sudden Archimedian revelation—as an observer.
It was in these breathless tones that George Stigler recounted one particular workshop in 1960, which took place in Aaron Director’s home in Hyde Park. Stigler would never forget that evening and later cursed Director for not having tape-recorded it.13 It became a turning point for his career and for the Chicago School more generally. Arguably, it was a turning point for the project of neoliberalism.
The paper that was discussed that evening was the work of the British economist Ronald Coase, then of the University of Virginia. Coase always resisted the iconic status that Stigler and others were keen to bestow upon him. His career had progressed quietly and methodically, through asking simple scientific questions about why economic institutions are structured as they are. He claimed never to understand the excitement that his work had engendered. He collected his 1991 Nobel with the words ‘What I have done has been determined by factors which were no part of my choosing’, a sentiment that would have struck the chip-on-shoulder, competitive individualists of Chicago as akin to defeatism.
And yet, by accident or otherwise, this modest economist with a working-class background in Kilburn, London, acquired the role of an ‘Archimedes’ for the intellectual bruisers from Hyde Park. In the process, he contributed to a new, more vicious understanding of how capitalism should be governed, and of the form that competition should take. Coase’s work ended up as a crucial plank in a political worldview which held that there was no limit to how large and powerful a capitalist firm should be allowed to become, so long as it was acting in a ‘competitive’ fashion.
Coase has never been described as a ‘neoliberal’, still less a ‘conservative’. He was, however, taught by two economists at the London School of Economics during the 1930s, Friedrich Hayek and Lionel Robbins, who were both instrumental in the emergence of neoliberal thought. Robbins and Hayek were seeking to muster a fight-back against the Keynesian and socialist thinking that thrived through the Depression, by highlighting the unique intelligence contained in the price system of competitive markets. Coase breathed this in. More importantly, he was exposed to Hayek’s intense scepticism regarding what any social science, including economics, could be capable of knowing.
Armed with a radically sceptical eye, though nevertheless adhering to the basic tenets of ‘price theory’, Coase was able to ask a question that his more libertarian colleagues in Chicago had never properly considered: What exactly is the benefit of a market anyway? If it’s to produce welfare, is it not possible that, under certain circumstances, this can be done even better through different types of organization, such as corporations? In their hostility to state intrusions in markets, Friedman and company had largely just assumed that free markets were intrinsically superior. But paradoxically, this belief also committed them to certain types of state intervention, namely regulation and competition law, which would ensure that the market maintained its correct form.
Coase’s brilliance was to spot within the Chicago School position a final remnant of metaphysical speculation that they themselves were not aware of. Up until this point, the Chicago School still assumed that markets needed to be open, competitive, run according to certain principles of fairness, or else they would become submerged under the weight of monopolies. Markets needed ground rules if they were to match up to the ideal of being a space of individual freedom. This meant that they still required authorities capable of intervening, once competitors ceased to play fair or grew too powerful, and the market started to ‘fail’.
Ever the sceptic, Coase did not accept this style of reasoning. Nothing in real economic life was ever that simple. Markets were never perfectly competitive in actuality, so the categorical distinction between a market that ‘works’ and one that ‘fails’ was an illusion generated by economic theory. The question economists should be asking, Coase argued, is whether there is good evidence that a specific regulatory intervention will make everyone better off overall. And by ‘everyone’, this shouldn’t just mean consumers or small businesses, but the party being regulated as well. This argument was straight out of Bentham: he was advocating that policy be led purely by statistical data on aggregate human welfare, and abandon all sense of ‘right’ and ‘wrong’ altogether. If there isn’t sufficient data to justify government intervention – and such evidence is hard to assemble – then regulators would be better off leaving the economy alone altogether.
One of the most far-reaching implications of Coase’s argument was that monopolies are not nearly as bad as economists tend to assume. Compared to a perfectly competitive, perfectly efficient market, then yes, monopolies are undesirable. But this was what Coase disparagingly called ‘blackboard economics’. If economists opened their eyes and looked at capitalism as it actually existed, they might discover that regulatory efforts to produce efficient markets were often counter-productive. Meanwhile, leaving firms alone to work things out for themselves (using private contracts and compensation where necessary) could actually produce the best available outcome overall – not a perfect outcome, just the best one available. The function of economics was to carefully calculate what should be done, on a case-by-case basis, not to offer utopian visions of perfect scenarios.
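The flavour of this case-by-case welfare arithmetic can be illustrated with the textbook externality example usually attached to what Stigler would later christen ‘Coase’s Theorem’: a factory whose smoke harms a neighbour. The setup and all the numbers below are hypothetical illustrations of mine, not Coase’s own examples; the point is simply that, where bargaining is cheap, the efficient outcome can emerge from private contracts and compensation whichever way the legal right is assigned.

```python
# Hypothetical Coasean bargaining sketch. A factory's smoke causes 100 of harm
# to its neighbour; a filter that eliminates the smoke costs the factory 60.
# With negligible transaction costs, bargaining reaches the cheaper option
# (install the filter) regardless of who holds the legal right.

DAMAGE = 100.0       # harm to the neighbour if the smoke continues
FILTER_COST = 60.0   # cost to the factory of abating the smoke

def bargain(factory_may_pollute: bool) -> tuple[str, float]:
    """Return the outcome and the total cost borne by the two parties combined."""
    if factory_may_pollute:
        # The neighbour offers the factory something between 60 and 100 to fit
        # the filter, since that is cheaper for them than suffering the damage.
        return "filter fitted (neighbour compensates factory)", FILTER_COST
    else:
        # The factory must either stop the harm or compensate the neighbour;
        # fitting the filter at 60 beats paying out 100.
        return "filter fitted (factory pays for abatement)", FILTER_COST

for rule in (True, False):
    outcome, total_cost = bargain(rule)
    print(f"factory_may_pollute={rule}: {outcome}, total cost {total_cost}")

# Either assignment of rights yields a total cost of 60 rather than 100;
# on the Chicago reading, that aggregate figure is all a regulator should weigh.
```

Who ends up paying differs under each rule, which is a question of distribution rather than of efficiency; it is precisely that indifference to distribution which the Chicago School found so congenial.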
Coase’s scepticism towards regulation first appeared in a 1959 paper on the telecoms market. This made a stir. While the Chicago School were no friends of government, they had at least assumed that markets needed to be kept in check to some extent, if they were not to become dominated by vast corporations making excessive profits. On the other hand, they were intrigued by Coase’s critical style of reasoning, and the radicalism of his conclusions. Director invited the Englishman to give a paper defending his position, a paper which was later published as ‘The Problem of Social Cost’, becoming one of the most cited articles in the history of economics.
Scenting blood, twenty-one leading figures from the Chicago economics department arrived. All had read Coase’s paper, and a vote taken at the beginning of the evening suggested that all twenty-one of them disagreed with it. In the timeworn style of the Chicago workshop, Coase was introduced by Director, then given five minutes to explain and justify his argument, before he would be torn apart by force of economic logic. As ever on these occasions, when it came to the latter part, Milton Friedman led the way. But on this occasion, something unusual happened – Friedmanite logic did not appear to be working. Here’s George Stigler again:
Ronald didn’t persuade us. But he refused to yield to all our erroneous arguments. Milton would hit him from one side, then from another, then from another. Then to our horror, Milton missed him and hit us. At the end of that evening the vote had changed. There were twenty-one votes for Ronald and no votes [against].14
As one of his students later put it, ‘He had out-Chicagoed Chicago’.15 Coase had no ideological axe to grind against government. He was not possessed of any particular love of unregulated, dog-eat-dog capitalism, which was more than could be said of Friedman.
What he did have, which the Chicago economists found irresistible, was a desire to question every assumption about how the economy ought to be governed, every assumption about what ‘good’ and ‘bad’ competition looked like, and to challenge the assumptions of policy-makers that they could necessarily tell the difference between the two. Through his scepticism towards the very possibility of a perfect market, he was even more doubtful of the state’s authority than Friedman et al. Scientific economic analysis alone could determine whether a regulatory intervention was needed.
Sympathy for the capitalist
Stigler believed that an entire paradigm had shifted before his eyes. The theoretical case that underpinned government regulation of markets had evaporated, right there in Aaron Director’s living room. It turned out that, up until 1960, even the Chicago School had been labouring under some metaphysical moral presumption that certain situations were essentially in need of government intervention and others weren’t. ‘Coase’s Theorem’, as Stigler later christened it, stated that this wasn’t the case, and there was no basis on which to believe that regulation could automatically improve on situations that arose spontaneously between competing actors.
Except that this wasn’t quite what Coase had argued. The paper he had defended in Director’s home that evening in 1960 said that there were no grounds in principle to assume that market regulation was ever necessary. There were no grounds in principle to assume that one competitor exploiting another was necessarily a bad thing. But nor were there any grounds in principle to believe that regulatory intervention was a bad thing either. Coase was simply making a plea for robust economic analysis of the data available, as an alternative to the utopian propositions of ‘blackboard economics’. To maintain authority amidst various conflicting perspectives on the rights and wrongs of a competitive situation, regulators needed to be staffed by economists who would simply represent the facts.
Stigler and his colleagues had little interest in such even-handedness. What they now possessed was a devastating critique of the moral authority of regulators and legislators, who purported to act in the ‘public interest’ but were typically just acting either in their own interests (to create more jobs for regulators) or out of political resentment towards large successful businesses. What regulators and left-wing liberals had singularly failed to recognize was that large, exploitative, monopolistic businesses also generate welfare. In fact, given a free rein, who knew how much welfare they might produce?
From the increasingly bullish Chicagoan perspective, the scale of giant corporations allowed them to work more efficiently, doing more good for consumers and society at large. The benefit they produced did not happen in spite of their aggressive competitive behaviour, but because of that behaviour. Let them grow as far as they can, as profitably as they can, and see what happens. Why worry about businesses getting ‘too big’? Who is to say that they shouldn’t be even bigger? By the end of the decade, Friedman was stating the pro-corporate argument even more nakedly. As he put it in a famous article published in The New York Times Magazine in 1970, the single moral duty of a corporation is to make as much money as it possibly can.16
The question that Coase posed that evening in 1960 was a radical one: regulators had long been striving to protect competitors from larger bullies, but what about the welfare of the bully? Didn’t he deserve to be taken into account as well? And – as the Chicago School would later seek to explain – might consumers actually be better off being served by the same very large, efficient monopolists than constantly having to choose between various smaller, inefficient competitors? If the welfare of everyone were taken into account, including the welfare of aggressive corporate behemoths, then it was really not clear what benefit regulation was actually achieving.
Here was utilitarianism being reinvented in such a way as to include corporations in the state’s calculus. Walmart, Microsoft and Apple didn’t exist in 1960, but they could not have imagined a more sympathetic policy template than the one that was cooked up in Chicago on the back of Coase’s work. Once Reagan was in the White House, these ideas spread quickly through the policy and regulatory establishments of Washington, DC, before permeating many international regulators over the 1990s.17 In less than a decade, policy-makers went from viewing high profitability as a warning sign that a firm was growing too large to being a welcome indication that a firm was being managed in a highly ‘competitive’ fashion.
There is one deeply counter-intuitive lesson which emerges from this: American neoliberalism is not actually all that enamoured of competitive markets. That is to say, if we understand a market as a space in which there is a choice of people to transact with, and a degree of freedom as to whether to do so – think of eBay for example – the Chicago School was entirely comfortable with the notion of businesses restricting this choice, restricting this freedom, on the grounds that it produces more utility overall.
What Stigler, Friedman, Director and their colleagues really admired was not the market as such, but the competitive psychology that was manifest in the entrepreneurs and corporations which sought to vanquish their rivals. They didn’t want the market to be a place of fairness, where everyone had an equal chance; they wanted it to be a space for victors to achieve ever-greater glory and exploit the spoils. In their faith in the limitless potential of capital, these Chicago conservatives were appealing to the same logic of growth as the counter-culture and the humanistic psychologists. With Gary Becker’s metaphor of ‘human capital’, the distinction between corporate strategy and individual behaviour dissolved altogether: each person and each firm was playing a long-term game for supremacy, whether or not there was a market present.
In what sense is this winner-take-all economics still ‘competition’? Perhaps the clue to the Chicagoan vision lay in their own combative intellectual culture. These professed outcasts, with a ‘chip on their shoulder’, believed that no game was ever really lost. Friedman made his career on the basis of arguing single-handedly against a global Keynesian orthodoxy for nearly four decades, until finally, by the late 1970s, he was perceived to have ‘won’. Coase no doubt impressed his hosts partly through his willingness to stand up for his minority view, and partly by winning them over. The elites of Harvard, MIT and the federal government were entitled to enjoy their period of dominance, but they should have taken these upstarts in Chicago a little more seriously from the start. Because when the neoliberals got to taste intellectual and political victory, they would fight just as hard to cling onto it. Chicago-style competition wasn’t about co-existing with rivals; it was about destroying them. Inequality was not some moral injustice, but an accurate representation of differences in desire and power.
The Chicago School message to anyone complaining that today’s market is dominated by corporate giants is a brutal one: go and start a future corporate giant yourself. What is stopping you? Do you not desire it enough? Do you not have the fight in you? If not, perhaps there is something wrong with you, not with society. This poses the question of what happens to the large number of people in a neoliberal society who are not possessed of the egoism, aggression and optimism of a Milton Friedman or a Steve Jobs. To deal with such people, a different science is needed altogether.
The science of deflation
The ability of individuals to ‘strive’ and ‘grow’ came under a somewhat different scientific spotlight between 1957 and 1958, due to accidental and coincidental discoveries made by two psychiatrists, Roland Kuhn and Nathan Kline, working in Switzerland and the United States respectively. As with so many major scientific breakthroughs, it is impossible to specify who exactly got there first, for the simple reason that neither quite understood where exactly they had got to. The era of psychopharmacology was still very young, with the discovery of the first drug effective against schizophrenia in 1952 and the running of the first successful ‘randomized controlled trials’ (whereby a drug is tested alongside a placebo, with the recipients not knowing which one they’ve received) on Valium in 1954. These breakthroughs opened up a new neurochemical terrain for psychiatrists to explore.
Unlike the developers of those anti-anxiety and anti-schizophrenia drugs, Kline and Kuhn were not sure precisely what disorder they were seeking to target. Kline began experimenting with a drug called iproniazid, which had first been used against tuberculosis, while Kuhn was trialling imipramine in the hope that it might target psychosis. Had they both been certain of what effect they were looking for in advance, it’s doubtful that they would have made any discovery at all. It was because they were not sure that they engaged in extremely careful observation of the drugs’ recipients. Thanks to this, both psychiatrists noticed something that was both banal and revolutionary at the same time.
The drugs did not appear to have any particular effects that could be scientifically classified. There was no specific psychiatric symptom or disorder that they seemed to relieve. Given that psychiatrists of the 1950s still viewed their jobs principally in terms of healing those in asylums and hospitals, it wasn’t clear that these drugs offered anything especially useful at all. As a result, drug companies initially showed little interest in the breakthrough. The drugs simply seemed to make people feel more truly themselves, restoring their optimism about life in general.
People felt better as a result of these pharmaceuticals, not in any specifically medical or psychiatric sense, but more in terms of their capacity for fulfilment and hope. As Kuhn observed, his new substance appeared to have ‘antidepressant properties’. The extraordinary implication, which has since become our society’s common sense, was that sadness and deflation, and hence their opposites, could be viewed in neurochemical terms.
For a while, psychiatrists struggled to know how to describe the new drugs. Kline chose to refer to his as a ‘psychic energizer’, which remains a decent description of many of the drugs currently marketed as ‘antidepressants’, but which are used to treat anything from eating disorders to premature ejaculation. The subtlety of their effects was perplexing, but this very property – this selectivity – has since come to be the main promise of those who seek to transform and improve us via our neurochemistry. Unlike barbiturates, the new drugs did not alter physical metabolism or overall levels of psychic activity. They appeared to boost those parts of the patient that had been deflated or damaged but to leave mind and body otherwise unaffected. This wasn’t just the discovery of a new drug, but of a whole new notion of personhood.18
In the decades since Kuhn and Kline first experimented with their new drugs, antidepressants have become celebrated for this alleged selectivity and their non-specificity. The supposed genius of the selective serotonin reuptake inhibitor (SSRI) is to seek out the precise part of the self that requires energizing and give it a boost. In the years following the launch of Prozac in 1988, enthusiasm for the potential of SSRIs reached unprecedented heights. Claims were made by psychiatrists such as Peter Kramer that Prozac didn’t simply boost mood, but reconnected individuals with their real selves.19 The notion of illness, not to mention that of sadness, has been transformed in the process.
It would take twenty-five years before Kuhn and Kline’s new ‘psychic energizers’ would attain mass market appeal; indeed they were initially marketed as anti-schizophrenia drugs. But culturally, their discovery was perfectly timed. Psychiatrists and psychologists had shown virtually no interest in the notion of happiness or flourishing up until this time. The influence of psychoanalysis meant that psychiatric problems were typically viewed through the lens of neurosis, that is, as conflict with oneself and one’s past. Depression was a recognized psychiatric disorder that could be treated with electric shock therapy if severe enough, but it received comparatively little attention from the psychiatry profession, let alone the medical profession. The Freudian category of ‘melancholia’, as the inability to accept some past loss, continued to shape how chronic unhappiness was understood within much of the psychiatry profession.
But these psychoanalytic ideas were relatively useless when it came to dealing with a more diffuse form of depression, manifest as a generalized deflation of desire and capability. It was this that psychiatrists and psychoanalysts were increasingly confronted by as the 1960s wore on, forcing them to question certain core aspects of their theoretical training.20 Depressed individuals were not speaking in terms of shame or repressed desires any longer, but merely in terms of their own weakness and inadequacy. If anything, it was an absence of desire that afflicted them, more than a bottling up. Admittedly, drug companies were content to assist with the relinquishing of traditional psychoanalytic theory, as the pharmaceutical company Merck demonstrated in 1961 when it distributed fifty thousand copies of Frank Ayd’s Recognizing the Depressed Patient to doctors around the United States, immediately after winning a patent battle over the antidepressant amitriptyline.21 But the drugs were entangled in a broader cultural and moral transformation.
The question of how to boost general energy and positivity was an entirely new one for psychologists at the close of the 1950s. But it was slowly emerging as a distinctive field of research in its own right, with a number of new questionnaires, surveys and psychiatric scales through which to compare individuals in terms of their positivity. The year 1958 saw the launch of the Jourard Self-Disclosure Scale and then in 1961 the Beck Depression Inventory, the work of Aaron Beck, the father of cognitive behavioural therapy. Mental health surveys conducted in the United States during the 1950s, aimed partly at assessing the psychological state of war veterans, discovered that generalized depression was a far more common complaint than psychiatrists had assumed. This psychic deflation was coming to appear as a risk that could afflict anyone at any time, whether or not there was psychoanalytic material to back that assessment up.
By the late 1960s, psychologists were studying depression far more closely, without the assumption that there must be an underlying neurosis. Martin Seligman’s experiments on ‘learned helplessness’, in which he showed that a dog subjected to enough inescapable electric shocks would eventually cease to resist, helped to map out a new understanding of depression. This sowed the seeds of the positive psychology movement, dedicated to the programmatic ‘unlearning’ of helplessness, of which Seligman is the figurehead.
A drug that is itself selective immediately weakens the responsibility of the physician or psychiatrist to identify precisely what is wrong with a patient. It can therefore be prescribed in a non-specific way, as if to say, ‘Try this, and see if whatever is ailing you starts to fade’. Misery itself becomes the phenomenon to be dealt with, rather than any particular manifestation or symptom. In the early 1960s, this was an affront to the authority of psychiatrists and doctors, whose professional role involved specifying exactly what was causing a problem and offering a solution to it. The idea that individuals may be suffering from some general collapse of their psychic capabilities, manifest in any number of symptoms, challenged core notions of medical or psychiatric expertise.
Over half a century after the discovery of antidepressants, it remains the case that nobody has ever discovered precisely how or why they work, to the extent that they do.22 Nor could anybody ever make this discovery, because what it means for an SSRI to ‘work’ will differ from one patient to the next. A great deal of attention has been paid to how SSRIs alter our understanding of unhappiness, relocating it in our brain neurons; but they also fundamentally alter the meaning of a medical diagnosis and the nature of medical and psychiatric authority.
A society organized around the boosting of personal satisfaction and fulfilment – ‘self-anchored striving’ – would need to reconceive the nature of authority, when it came to tending and treating the pleasures and pains of the mind. Either that authority would need to become more fluid, counter-cultural and relativist itself, accepting the lack of any clear truth in this arena, or it would need to acquire a new type of scientific expertise, more numerical and dispassionate, whose function is to construct classifications, diagnoses, hierarchies and distinctions, to suit the needs of governments, managers and risk profilers, whose job would otherwise be impossible.
Psychiatric authority reinvented
The Chicago School ultimately benefited from the ostracism that it was long shown by the American economics and policy establishment. That ostracism offered a lengthy gestation period, during which alternative ideas and policy proposals could mature and be ready for application by the time the governing orthodoxy had been engulfed in crisis. That crisis began brewing in 1968, as US productivity growth began to falter and the cost of the Vietnam War ate into the government’s finances. The crisis mounted from 1972 onwards, with sharp rises in oil prices and the breakdown of the global monetary system that had been put in place after World War Two.
The American psychiatry profession experienced its own crisis, with an almost identical chronology. In 1968, the American Psychiatric Association (APA) published the second edition of its handbook, the Diagnostic and Statistical Manual of Mental Disorders (DSM). Compared to later versions of the manual, this publication initially elicited very little debate. Even psychiatrists had little interest in the book’s somewhat nerdish question of how to attach names to different symptoms. But within five years, this book was the focus of political controversies that threatened to sink the APA altogether.
One problem with the DSM-II was that it seemed to fail in its supposed goal. After all, what was the use of having an officially recognized list of diagnostic classifications if it didn’t appear to constrain how psychiatrists and mental health professionals actually worked? The same year that the DSM-II was published, the World Health Organization published a study showing that even major psychiatric disorders, such as schizophrenia, were being diagnosed at wildly different rates around the world. Psychiatrists seemed to have a great deal of discretion available to them, guided by theories as to what underlay the symptoms – theories that were rarely amenable to scientific testing in any strict sense. They shared a single terminology but lacked any strict rules for how it should be applied.
The ‘anti-psychiatry movement’, as it was known, included some who viewed the entire profession as a political project aimed at social control. But it also included others, such as Thomas Szasz, who believed that psychiatry’s main problem was that it was incapable of making testable, scientific propositions.23 In a famous experiment conducted in 1973, eight ‘pseudopatients’ managed to get themselves admitted into psychiatric institutions, by turning up and falsely reporting that they were hearing a voice saying ‘empty’, ‘hollow’ and ‘thud’. This was later written up in the journal Science under the title ‘On Being Sane in Insane Places’, adding fuel to the anti-psychiatry movement.24
Most controversially, the DSM-II included homosexuality in its list of disorders, provoking an outcry that gathered momentum from 1970 onwards, with the support of leading anti-psychiatry spokespersons. The APA was relatively untroubled by the problem of unreliable diagnoses, since few of its members or of its governing body were especially interested in reliability in the first place. But the political storm generated by the homosexuality classification was far harder to ignore. Whereas the problem of diagnostic reliability was largely containable within the profession itself, the controversy over the DSM classification of homosexuality had spilled out into the public sphere.
Just as the Chicago School waited patiently in the cold until the economic policy crisis of the 1970s had run its course, there was one school of psychiatry which was blissfully untroubled by the turmoil sweeping the APA. This small group, based at Washington University in St Louis, had long felt alienated from the psychoanalytic style of American psychiatry. Far more indebted to the German psychiatrist Emil Kraepelin than to Freud (or to Adolf Meyer, whose adaptation of Freud’s ideas dominated much APA thinking through the 1950s and ’60s), they treated the classification of psychiatric symptoms as of the foremost importance. Mental illness was to be viewed in the same way as physiological illness: an event in the body – more specifically, the brain – which required objective scientific observation and minimal social interpretation.
Through the 1950s and 1960s, the St Louis group, led by Eli Robins, Samuel Guze and George Winokur, was left to operate in its own intellectual and social bubble. They were repeatedly refused funding by the National Institute of Mental Health, which preferred instead to fund studies within the Meyerian tradition, focused on the relationship between mental illness and the social environment. The St Louis school were outcasts from the establishment, relying on networks of European sympathizers and throwing some rollicking parties among themselves, but remaining peripheral to American psychiatry.
For these ‘neo-Kraepelinians’, psychiatry’s claims to the status of science depended on diagnostic reliability: two different psychiatrists, faced with the same set of symptoms, had to be capable of reaching the same diagnostic conclusion independently of one another. Whether a psychiatrist truly understood what was troubling someone, what had caused it, or how to relieve it, was of secondary importance to whether they could confidently identify the syndrome by name. The job of the psychiatrist, by this scientific standard, was simply to observe, classify and name, not to interpret or explain. Within this vision, the moral and political vocation of psychiatry, which in its more utopian traditions had aimed at healing civilization at large, was drastically shrunk. In its place was a set of tools for categorizing maladies as they happened to present themselves. To many psychiatrists of the 1960s, this seemed like a banally academic preoccupation. But it was about to become a lot more than that.
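It is worth pausing on what this reliability standard asks for in practice. Cohen’s kappa, the chance-corrected agreement statistic that later became the standard way of scoring whether two raters reach the same diagnosis independently, is not named in the account above, so the following is simply an illustrative sketch; the case files and diagnoses are invented.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same cases."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n) for label in labels)
    return (observed - expected) / (1 - expected)

# Two hypothetical psychiatrists labelling the same ten case files.
dr_x = ["depression", "schizophrenia", "depression", "anxiety", "depression",
        "schizophrenia", "anxiety", "depression", "depression", "anxiety"]
dr_y = ["depression", "depression", "depression", "anxiety", "schizophrenia",
        "schizophrenia", "anxiety", "depression", "anxiety", "anxiety"]

# A kappa of 1.0 means perfect agreement; 0 means no better than chance.
print(round(cohens_kappa(dr_x, dr_y), 2))  # 0.53: only moderate reliability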
While they were rejected by the psychiatry profession itself, the St Louis school were not the only voices arguing for greater diagnostic reliability at the time. Health insurance companies in the United States were growing alarmed by the escalating rates of mental health problems, with diagnoses doubling between 1952 and 1967.25 Meanwhile, the pharmaceutical industry had a clear interest in tightening up diagnostic practices in psychiatry, thanks to a landmark piece of government regulation. There was an increasingly powerful business case for establishing a new consensus on the names that were attached to symptoms.
In 1962, Senator Estes Kefauver of Tennessee and Representative Oren Harris of Arkansas had tabled an amendment to the 1938 Federal Food, Drug and Cosmetic Act, aimed at significantly tightening the rules surrounding regulatory approval of pharmaceuticals. This was a direct response to the thalidomide tragedy, which led to around ten thousand children around the world being born with physical deformities between 1960 and 1962 as a result of a new sedative that had begun to be prescribed for morning sickness. The United States was relatively unaffected, due to the prudence (later viewed as heroism) of one FDA official who blocked the drug on the grounds that it hadn’t been adequately tested.
One feature of the Kefauver–Harris amendment was that drugs had to be marketed with a clear identification of the syndrome that they offered to alleviate. Again, this made clarity around psychiatric classification imperative, although in this case for business reasons. If a drug seemed to have ‘antidepressant properties’, for example, this wasn’t enough to clear the Kefauver–Harris regulatory hurdle. It needed a clearly defined disease to target – which in that case would need to be called ‘depression’. As the British psychiatrist David Healy has argued, this legal amendment is arguably the critical moment in the shaping of our contemporary idea of depression as a disease.26 Thanks to Kefauver–Harris, we’ve come to believe that we can draw clear lines around ‘depression’, and between varieties of it – lines that magically correspond to pharmaceutical products.
By 1973, the APA was facing charges of pseudoscience, homophobia and the peddling of regressive 1950s moral standards of normality. No less critically, its unreliable diagnostic categories also represented a threat to the long-term profitability of big pharma. Both cultural and economic forces were pitted against the profession, throwing the very purpose of psychiatry into question. Ultimately, the St Louis approach to psychiatry would be the winner in this crisis, and the strict, anti-theoretical diagnostic approach would soon move from the status of nerdish irrelevance to orthodoxy. But it would take a particularly restless figure within the higher ranks of the APA to bring this volte-face about.
Robert Spitzer came from a traditional psychiatric background, joining the New York State Psychiatric Institute in 1966. He fell in with the authors of the DSM-II after hanging out with them at the Columbia University canteen in the late ’60s, but was growing bored with the psychoanalytic theories peddled by his colleagues.27 Spitzer was someone who enjoyed a fight. He’d grown up in a family of New York Jewish communists and spent his youth engaged in lengthy political and intellectual arguments with his father, not least over the latter’s Stalinist sympathies. Today, he is commonly recognized as the most influential American psychiatrist of the late twentieth century. But this was as much down to his entrepreneurial zeal and imagination as it was to his ideas. What Spitzer had in spades, and which professional associations tended to lack, was an appetite for radical change.
In the late 1960s, Spitzer developed a growing interest in diagnostic classification, spotting an alternative to the status quo. But his position within the APA was marginal, until he was given the task of defusing the homosexuality controversy. To achieve this, he mounted an aggressive campaign within the APA, offering an alternative description of the syndrome concerned – ‘sexual orientation disturbance’ – which stipulated that suffering must be involved before any such diagnosis could be made. This was a subtle but telling distinction: Spitzer was implying that the relief of unhappiness should replace the pursuit of normality as the psychiatrist’s abiding vocation. In 1973, he faced down opposition from senior colleagues within the APA on this issue and won. Thanks to Spitzer’s advocacy, the question of sexual ‘normality’ was (not-so-quietly) replaced with one of classifiable misery, hinting at how the character of mental illness was changing more broadly.
The following year, Spitzer was given his next political challenge: to deal with the APA’s diagnostic unreliability. The DSM-II was already looking dated, and in any case needed rewriting to abide by the World Health Organization’s own changing diagnostic criteria. Spitzer was appointed as chair of the Task Force on Nomenclature and Statistics, now with a clear mandate to tackle the problems of diagnostic reliability that had been brewing for over a decade. Crucially, he retained complete control over how the task force would be composed. He hand-picked its eight members with a clear intention to tear up the APA’s existing theoretical principles and replace them with a set of methods which were straight out of St Louis.
Four of the eight appointees to Spitzer’s task force were from St Louis; he described them as ‘kindred spirits’. The other four were judged to be sympathetic to the coup that Spitzer was about to stage. In appointing Spitzer, the APA – and certainly the health insurance industry – had hoped that stricter diagnostic categories would actually lead to a reduction in the levels of diagnosis overall. Greater rigour in the criteria attached to a diagnosis, it was assumed, would make it harder for syndromes to be diagnosed. What they hadn’t reckoned with was the exhaustiveness of the task force’s approach to classification, which yielded a progressive multiplication in the varieties of recognized mental illness.
Every known psychiatric symptom was to be listed, alongside a diagnosis. To do this, the task force drew heavily on a 1972 paper on diagnostic classification authored by the St Louis group, while adding further classifications and criteria.28 Typing away in his office on Manhattan’s West 168th Street, urging his task force on to recite symptoms and diagnoses like some endless psychiatric shopping list, Spitzer was unperturbed. ‘I never saw a diagnosis that I didn’t like’, he was rumoured to have joked.29 A new dictionary of mental and behavioural terminology was drafted.
Relatively unhappy
The document that Spitzer and his team produced in 1978 provided the basis of the DSM-III, arguably the most revolutionary and controversial text in the history of American psychiatry. Finalized over the course of 1979 and published the following year, this handbook bore scarce resemblance to its 1968 predecessor. The DSM-II outlined 180 categories over 134 pages. The DSM-III contained 292 categories over 597 pages. The St Louis school’s earlier diagnostic toolkit had specified (somewhat arbitrarily) that a symptom needed to be present for one month before a diagnosis was possible. Without any further justification, the DSM-III reduced this to two weeks.
Henceforth, a mental illness was something detectable by observation and classification, which didn’t require any explanation of why it had arisen. Psychiatric insight into the recesses and conflicts of the human self was replaced by a dispassionate, scientific guide for naming symptoms. And in scrapping the possibility that a mental syndrome might be an understandable and proportionate response to a set of external circumstances, psychiatry lost the capacity to identify problems in the fabric of society or economy.30 Proponents described the new position as ‘theory neutral’. Critics saw it as an abandonment of psychiatry’s deeper vocation to heal, listen and understand. Even one of the task force members, Henry Pinsker (not from St Louis), started to get cold feet: ‘I believe that what we now call disorders are really but symptoms’.31
The DSM-III came about because the APA had found itself on the wrong side of too many cultural and political arguments at once. The forms of truth that psychiatrists were seeking could not survive the turbulent atmosphere of 1968 and its aftermath: they were too metaphysical, too politically loaded and too difficult to prove. But amidst this is a story about how happiness – and its opposite – appeared as a preoccupation of mental health professionals, medical doctors, pharmaceutical companies and individuals themselves. To get to this point, the mainstream psychiatric establishment had to be virtually cut out of the loop. A landmark legal case in 1982, in which a psychiatrist was successfully sued for prescribing long-term psychodynamic therapy to a depressed patient, and not an antidepressant drug, offered a rousing demonstration of the new state of affairs.32 Today, 80 per cent of antidepressant prescriptions in the United States are written by family doctors and other primary care practitioners, and not by psychiatrists at all.
In a post-1960s era of ‘self-anchored striving’, what can people possibly hold in common other than a desire for more happiness? And what higher purpose could a psychological expert pursue than the reduction of unhappiness? These simple, seemingly indisputable principles were what emerged from the cultural and political conflicts which came to a head in 1968. The growing problem of depression, experienced as a non-specific lack of energy and desire, combined with the emergence of a drug that seemed selectively to alleviate this, and the need of drug companies, regulators and health insurers to find clarity amidst such murkiness, meant that psychoanalytic expertise was heading for a fall.
A host of new techniques, measures and scales would be needed to track positive and negative moods in this new cultural and political landscape. Aaron Beck was well ahead of his time with his 1961 Beck Depression Inventory. In respect of physical pain, the influential McGill Pain Questionnaire was introduced in 1971. Various additional questionnaires and scales were introduced during the 1980s and 1990s to identify and quantify levels of depression, such as the Hospital Anxiety and Depression Scale (1983) and the Depression Anxiety Stress Scales (1995). With the growing influence of positive psychology, which offered to mitigate the ‘risk’ of depression occurring, scales of ‘positive affect’ and ‘flourishing’ were added to these. Each of these represented a further manifestation of the Benthamite ambition to know how another person was feeling, through force of scientific measurement alone. Underlying them was the familiar monistic hope that diverse forms of sadness, worry, frustration, neurosis and pain might be placed on simple scales, running from the least to the most.
The reconfigured DSM, together with the various newly designed scales, made it very clear what should be classified as depression and to what extent. A sufficient number of symptoms – loss of sleep, loss of appetite, loss of sexual appetite – occurring in combination for two weeks or more could now be called ‘depression’. But what it actually meant to be depressed, or what caused it, had disappeared from view for the new league of psychological experts who emerged on the coat-tails of Spitzer and the St Louis team. The voice of the sufferer was not quite silenced in the new diagnostic era, but it was regulated by the construction and imposition of strict questionnaires and indices. The neurosciences now potentially enable psychiatry to move away from even those restricted questions and answers.
So what really is this so-called disease that now afflicts around a third of people at some point in their lifetime, and around 8 per cent of American and European adults at any one time? It is often said that depression is the inability to construct a viable future for oneself. What goes wrong, when people suffer our contemporary form of depression, is not simply that they cease to experience pleasure or happiness, but that they lose the will or ability to seek pleasure or happiness. It is not that they become unhappy per se, but that they lose the mental – and often the physical – resources to pursue things that might make them happy. In becoming masters of their own lifestyles and values, they discover that they lack the energy to act upon them.
It is only in a society that makes generalized, personalized growth the ultimate virtue that a disorder of generalized, personalized collapse will become inevitable. And so a culture which values only optimism will produce pathologies of pessimism; an economy built around competitiveness will turn defeatism into a disease. Once the Benthamite project of psychic optimization loses any sense of agreed limits, promising only more and more, the troubling discovery is made that utilitarian measurement can go desperately negative as well as positive.
Depressive-competitive disorder
‘Just do it’. ‘Enjoy more’. Slogans such as these, belonging to Nike and McDonald’s respectively, offer the ethical injunctions of the post-1960s neoliberal era. They are the last transcendent moral principles for a society which rejects moral authority. As Slavoj Žižek has argued, enjoyment has become an even greater duty than obeying the rules. Thanks to the influence of the Chicago School over government regulators, the same is true of corporate profitability.
The entanglement of psychic maximization and profit maximization has grown more explicit over the course of the neoliberal era. This is partly due to the infiltration of corporate interests into the APA. In the run-up to the DSM-5, published in 2013, it was reported that the pharmaceutical industry was responsible for half of the APA’s $50 million budget, and that eight of the eleven-strong committee which advised on diagnostic criteria had links to pharmaceutical firms.33 The ways in which we describe ourselves and our mental afflictions are now shaped partly by the financial interests of big pharma.
One of the last remaining checks on the neurochemical understanding of depression was the exemption attached to people who were grieving: this, at the very least, was still considered a not unhealthy reason to be unhappy. But in the face of a new drug, Wellbutrin, promising to alleviate ‘major depressive symptoms occurring shortly after the loss of a loved one’, the APA caved in and removed this exemption from the DSM-5.34 To be unhappy for more than two weeks after the death of another human being can now be considered a medical illness. Psychiatrists now study bereavement in terms of its possible mental health ‘risks’, without any psychoanalytic or common-sense account of why loss might be a painful experience.35
Corporations are also increasingly aware of the economic inefficiency of depression in an economy that trades on enthusiasm in the workplace and desire in the shopping mall.36 Finding ways to lift people out of this illness, or reduce the risks of encountering it in the first place (through tailored diet, exercise or even brain scans to assess the risk early in children), is viewed as essential to the survival of corporate profitability. One report on the topic, sponsored by a number of UK corporations including Barclays Bank, stated with a peculiar absence of compassion, ‘Today’s brain-based economy puts a premium on cerebral skills, in which cognition is the ignition of productivity and innovation. Depression attacks that vital asset.’37
One way in which Bentham was shaped by the emancipatory social spirit of his age was in his assumption that the measurement and maximization of happiness were a collective venture. There was, in principle, a justification for one man’s happiness to be impeded: another man’s benefit. Admittedly, the main arena in which he explored this was punishment: prison is justified to the extent that non-prisoners benefit from its existence. But nevertheless, the calculus of utility was one that took everyone into account. In economic policy, this could justify transfers of money from rich to poor, if it was clear that poverty was the cause of misery.
The depressive-competitive disorder of neoliberalism arises because the injunction to achieve a higher utility score – be that measured in money or physical symptoms – becomes privatized. Very rich, very successful, very healthy firms or people could, and should, become even more so. In the hands of the Chicago School of economics or the St Louis School of psychiatry, the logic that says we have a particular political or moral responsibility towards the weak, which may require us to impose restrictions on the strong, is broken. Authority consists simply in measuring, rating, comparing and contrasting the strong and the weak without judgement, showing the weak how much stronger they might be, and confirming to the strong that they are winning, at least for the time being.
Buried within the technocratic toolkits of neoliberal regulators and evaluators is a brutal political philosophy. This condemns most people to the status of failures, with only the faint hope of future victories to cling onto. That school in London ‘where the pupils are allowed to win just one race each, for fear that to win more would make the other pupils seem inferior’ was, in fact, a model of how to guard against a depressive-competitive disorder that few in 1977 could have seen coming. But that would also have required a different form of capitalism, which few policymakers today are prepared to warrant.