Other industrial revolutions cost a lot of people their heads, I’m not sure we have the time for the wonderful markets to fix all these problems.
Peter Brabeck1
A form of techno-feudalism is unnecessary. Above all, technology itself does not dictate the outcomes. Economic and political institutions do.
Martin Wolf, Financial Times2
Human beings’ fear of being replaced by machines is as old as economics itself. Famously, in early-nineteenth-century Britain textile workers known as the Luddites smashed machines in opposition to the introduction of spinning frames and looms, fearing, not without foundation, that mechanisation would leave them without jobs. Their protests did not prevent the Industrial Revolution sparking the greatest jump in production and living standards in human history.
While all agree that short-term unemployment occurs when workers are replaced by machines, the debate on the long-term effects of technological change on labour continues to divide opinion, waxing and waning with the health of the jobs market. Optimists argue that, after a while, new technology creates more jobs through compensation effects than it destroys, although not necessarily for the same people. This process of ‘creative destruction’ is seen as the only way to move forward. J. B. Say provided the economic rationale for the long-term optimists. His central idea is that, as technology increases efficiency and productivity and so the economy’s output, the implied rise in real incomes provides the means to buy the increased production. Keynes turned this into a catchy sound bite: ‘supply creates its own demand’. Eventually, advances in technology allegedly raise the wages of most, if not all, workers. The evidence for this includes the increased demand for labour during the Industrial Revolution and the dramatic increase in living standards in the West over the last century and a half.
More recently, the optimistic consensus has been challenged by weak labour markets and stagnant wages in many countries. This has led to an argument between those who lay the blame for this economic malaise at the door of globalisation and international competition and those who focus on labour substitution by increasingly smart machines. The reality is that it is difficult to disentangle the effects of exporting jobs to low-cost countries from those of automation; in fact, they go hand in hand.
Erik Brynjolfsson and Andrew McAfee, in their book Race Against the Machine, cite Michael Spence, who makes the link between globalisation and technology. Technology has all but eliminated geographic barriers, gradually equalising factor prices and forcing American labour to compete with developing nations. Brynjolfsson cites the following dispassionate assessment by NASA of the advantages at the time (1965) of manned space flights: ‘Man is the lowest cost, 150-pound, nonlinear, all-purpose computer system which can be mass-produced by unskilled labour.’ The key question is how we respond to these two seemingly unstoppable forces that challenge fundamentally the way we work and are remunerated.
Technology pessimists have been gaining ground in the last few years. A Pew Research survey of technology experts revealed an almost even split on the long-term effects of technology on employment: 52 per cent thought that technology would not displace more jobs than it created by 2025, while 48 per cent thought it would.3 Former US treasury secretary and Harvard professor Lawrence Summers has stated that he no longer believes automation will create new jobs. And although recent research covering seventeen industrialised countries from 1993 to 2007 by Georg Graetz and Guy Michaels concludes that technology has not led to a fall in aggregate wages or employment, they agree that it has affected the distribution of employment: ‘While we find no significant effect of industrial robots on overall employment, there is some evidence that they crowd out employment of low-skilled and, to a lesser extent, middle-skilled workers.’4
Even if aggregate wages have risen, the trickle-down effect certainly seems to have dried up. The distribution of gains from technological change has become more skewed towards those with higher skills and incomes, to the extent that large swathes of the population are employable only in low-skill, low-pay jobs and have seen no progress in real, inflation-adjusted incomes for many years.
While anxiety over the effects of technology coincided in the past with weak labour markets and subsequently proved unjustified, it may well be that, like the many false alarms raised by the boy who cried wolf, this time is actually different.
In the late nineteenth and twentieth centuries new machines needed armies of people to use and maintain them, the automobile industry being a prime example, absorbing millions of people who migrated from the countryside in the 1950s and 60s to work in giant mechanised factories that vastly increased productivity. During this age of mechanisation machines were replacing physical muscle power and therefore complemented labour, making it more efficient and increasing its productivity. In the US nominal GDP per head increased steadily between 1960 and 1980, initially in lockstep with median income per head. Most of that productivity growth accrued to workers as higher wages. According to Martin Ford, real household median incomes in the US tracked per capita GDP and doubled from $25,000 to $50,000 between 1948 and 1973.5 This was indeed a golden era of technological change working with the labour force.
But from around 1980 productivity began to far outstrip growth in median income per head and real median household income. Median income, the income of the individual halfway up the distribution, is a less misleading measure of those in the middle since, unlike mean or average income, it is not distorted upwards by the super-earners at the very top.
This period is significant since it coincided with a three-decade fall in the cost of capital in the form of declining interest rates, as well as the entry of China into the global supply of labour. This in turn facilitated the steady leveraging up of most developed economies to an unprecedented level. Growing debt enabled a continuation of rising consumption and living standards among many poorer households, and masked the slowing momentum of rising real wages. At the top end of income distribution, borrowing was used to increase wealth through the acquisition of assets rising in value faster than nominal GDP, turbocharged by cheap money.
Studies carried out in the US show that in the thirty years after 1973 real median household income increased, due mainly to the increase in female participation in the labour force, but only by about 22 per cent compared to GDP growth of 50 per cent. The fruits of increased technological change began to accrue less and less to those in the bottom half of the distribution scale. In fact, in real terms, which is what matters, the top half all but ceased to share any of the benefits of growth with those beneath them, an issue we’ll explore in more detail in Chapter 4. Today, the cost of machines has come down enough, and their sophistication has improved so much, that the symbiotic relationship with labour is breaking down completely.
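The contrast between the pre- and post-1973 eras becomes starker when the cumulative figures quoted above are converted into compound annual growth rates. A rough back-of-the-envelope sketch, using only the numbers in the text:

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by a start value,
    an end value and a time span in years."""
    return (end / start) ** (1 / years) - 1

# Golden era: real median household income doubled, 1948-1973 (25 years).
golden_era = cagr(25_000, 50_000, 25)

# After 1973: median income up ~22% over 30 years, versus ~50% for GDP.
median_after_1973 = cagr(1.00, 1.22, 30)
gdp_after_1973 = cagr(1.00, 1.50, 30)

print(f"1948-73 median income growth: {golden_era:.2%} per year")
print(f"Post-1973 median income growth: {median_after_1973:.2%} per year")
print(f"Post-1973 GDP growth: {gdp_after_1973:.2%} per year")
```

Roughly 2.8 per cent a year for the median household before 1973, against two-thirds of one per cent after it: compounded over decades, that gap is the divergence the chapter describes.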
Writing in Germany’s Süddeutsche Zeitung in 2014, Horst Neumann, Volkswagen board member for human resources, said that some of the retiring baby boomers’ jobs would be filled by robots, not people. He added that robots carrying out routine tasks cost the company five euros an hour over their lifetimes, including maintenance and energy, while Chinese workers cost ten and German workers (including wages, pensions and healthcare) about forty euros an hour.6 Foxconn, a Taiwanese contract manufacturer for brand names such as Apple, has embarked on a process of replacing part of its workforce with robots. By 2017, China had become the world’s largest market for industrial robots, representing 58 per cent of the world’s sales, compared to 6 per cent in the US and 8 per cent in Germany.
Part of the recent ‘onshoring’ trend of returning production to the US is due to the fact that machines are now competitive with what was cheap foreign labour. That does not, of course, mean that ex-manufacturing workers are getting their old jobs back. This raises the question of where this trend will take us economically and socially. Not only are machines displacing labour in manual and repetitive tasks, but their inexorable advance even threatens jobs that require intelligence or higher education.
Gordon Moore, the co-founder of Intel, postulated that the number of transistors that could be fitted on a semiconductor chip would double roughly every two years, a pace often quoted as a doubling every eighteen months. The exponential rise in computer power following Moore’s law has qualitatively changed the relationship between man and machine for three reasons. First, the cost of producing machines is falling, thereby opening more tasks to substitution by robots. Second, technology in production is advancing faster than displaced humans can be retrained into jobs that are not yet economically mechanised. Lastly, it is doubtful that workers displaced by machines can migrate into higher-skilled jobs that are not amenable to technological substitution.
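It is easy to underestimate how fast a fixed doubling period compounds. A minimal sketch, assuming a clean eighteen-month doubling period for illustration, shows the implied multiplier over a working lifetime:

```python
def moores_law_multiplier(years, doubling_period_years=1.5):
    """Transistor-count multiplier implied by a fixed doubling period."""
    return 2 ** (years / doubling_period_years)

# Ten doublings in 15 years, twenty in 30: roughly a thousand-fold,
# then a million-fold, increase within a single career.
print(f"After 15 years: {moores_law_multiplier(15):,.0f}x")
print(f"After 30 years: {moores_law_multiplier(30):,.0f}x")
```

The asymmetry the text describes follows directly: retraining a displaced worker is at best a linear process measured in years, while the capability of the machines competing for the work grows geometrically over the same period.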
The writing has been on the wall for some time now.
Way back in the 1980s, Nobel prize-winning economist Wassily Leontief foresaw that humans’ role in production would diminish, just as the horse’s had in agriculture. While machines would replace muscle, humans could still find productive jobs using their brain power, although this too is now under threat.7 Indeed, the rise of ever smarter computers is eating away at the need for human brain power. As well as factory work, many white-collar jobs, the domain of the middle classes, are at risk. And if that is the case, so are the middle-class incomes that have been the engine of economic growth in the past. The remedies Leontief proposed included sharing work time and subsidising employment through social security.
A study in the UK by the Chartered Institute of Personnel and Development claimed that the increase in the number of university graduates has outstripped the growth in higher-skill jobs, resulting in many graduates’ aspirations to embark on a secure middle-class career being dashed, and many ending up working in low-skilled jobs where their degree is not required.8 In 2014 Google’s co-founder Larry Page went as far as to suggest a four-day working week, so that as technology continues to displace jobs, more people can find employment.9 Google chairman Eric Schmidt is on record as saying, ‘The race is between computers and people, and people need to win … it is very important that we find the things that humans are really good at.’10 That did not prevent Google from purchasing UK AI company DeepMind, and US robot maker Boston Dynamics.
In 2009 Martin Ford suggested that machines will do the work of many people who will not be able to find alternative employment.11 Already, many industries such as retailing are being transformed by online shopping and delivery. Large automated warehouses greatly reduce the need for human intervention, although some employment is created for truck drivers – for now. Intelligent vending machines and checkout counters also reduce the need for staff, as do automated passport controls and departure gates at airports. Successful new industries will make use of advanced information technology and are unlikely to be very labour intensive.
However, according to Ford, by the early 2020s the size of individual elements on computer chips will be reduced to about five nanometres (billionths of a metre) ‘and that is close to the fundamental limit beyond which no further miniaturisation is possible’. In other words, we are fast approaching the end of the advance of computer hardware using current technology, although this does not take into account potential breakthroughs in new technologies such as quantum computing.
Furthermore, that is not the end of the story, since there is still room for more efficient software. According to Charles Simonyi, the man behind the development of Microsoft Word and Excel, this will enable us to eliminate the need for humans to do the repetitive tasks that account for a lot of work. Indeed, Ford argues that knowledge-based jobs are more susceptible to replacement by improved software using algorithms than jobs requiring physical manipulation and fine motor skills (try building a robotic tennis champion). You can imagine a computer whose memory includes all past legal case histories that it can scan in order to select those that are usable to make a case based on precedent, for example; hence Marc Andreessen’s slogan, ‘Software is eating the world.’12
There is nothing to stop computers using algorithms from uncovering significant statistical relationships and employing those to improve themselves and what they can do. One of the principles of big data analysis is that the sheer power to crunch vast amounts of data can replace human (fallible) judgment and experience.
Ford also gives the example of a composition played by the London Symphony Orchestra in 2012, called Transits – Into an Abyss, which a reviewer praised as ‘artistic and delightful’.13 It was composed by a computer. So much for humans maintaining their monopoly on creativity.
What happens when computing power replaces professions employing human intelligence, especially in an age of globalisation? Keynes foresaw the possibility of long-term technological unemployment in the 1930s but considered this to be a good thing as it would liberate many from dreary and unpleasant jobs. Human beings would be able to spend more time on leisure and the arts and lead more enjoyable and fulfilling lives.
While this sounds attractive, society would need to overcome two problems. First, the political system would need to accept and put in place a permanent and sufficient income for those no longer needed in the productive process, people paid for doing nothing or very little. As Ford correctly points out, ‘we run the risk that a large and growing fraction of our population will no longer have sufficient discretionary income to continue propelling vibrant demand for the products and services that the economy produces’.
At the very birth of the manufacturing golden age Henry Ford famously increased his workers’ wages so that they were able to buy the goods they were making. Perhaps unusually among industrialists, he understood that workers were also consumers. While it may appear rational for one company to maximise profits by minimising labour costs without fear of the consequences, this logic breaks down if generalised across the economy, since my costs are your revenue. Keynes saw that Smith’s invisible hand did not always produce socially or economically optimal outcomes when each economic unit rationally pursued its own self-interest. The machine carries the seeds of its own instability and breaks down, as economist Hyman Minsky argued, because each economic agent does not take into account the consequences of the replication of its behaviour by its peers.14 Laying off workers may work for one company, but it also carries with it the danger of negatively affecting companies whose goods those workers can no longer afford to buy.
Second, and perhaps more challenging, individuals’ identity and sense of self-worth are often tied to their role within the economy. The involuntarily unemployed suffer from loss of self-esteem, sometimes associated with worsening health. It is difficult to overcome the feeling that you are no longer needed by society, living a life that is useless economically and dependent on permanent handouts. This is hardly a recipe for healthy individuals and social cohesion.
A whole new culture, far removed from our current competitive, individualistic, materialistic and consumerist mindset, would have to take over. But if, as many have observed, those with power and wealth shape our values and have a vested interest in pumping up a system that has served them so well, it is difficult to imagine such a transition being promoted by those at the top who have the most to lose, and occurring peacefully.
In the US the information technology age has arrived at a time when productivity growth and job creation are already slowing. Robert Gordon documents how productivity growth actually started slowing in the 1970s. The digital-tech revolution has not been all that it has been cracked up to be, showing nothing like the positive, generation-changing effects of electricity or the internal combustion engine. Its most positive effect in the developed world, the widespread use of the PC, was largely complete by 1994.15 What is undeniable is that since 2008 average annual growth of GDP per hour worked in the US has retreated from about 2.5 per cent to a little over 1 per cent,16 while real median incomes in the US hardly rose, from $29,998 in 2000 to $31,099 in 2016.17
Sluggish productivity growth is not only a problem in the USA. As we saw in Chapter 1, the UK’s productivity has recently plunged, a pattern replicated elsewhere. This is not good news for the ‘more tech is best’ brigade, and it’s fair to say that there is a considerable degree of confusion on the subject. The Financial Times, one of a few truly global daily newspapers, trotted out the pro-tech view in a 2015 leader article: ‘The advance of the internet and mobile technology … has generated new services and raised productivity,’ while also acknowledging, ‘The rapid advance of automation and artificial intelligence poses real challenges to how advanced industrial societies are organised.’18 Earlier that same year, Brian Groom claimed, ‘Weak productivity has been the main factor behind a decade of falling real wages in the UK since the [financial] crisis, the longest sustained drop for at least 50 years … Employers have expanded output by hiring extra workers, rather than investing in labour-saving technology or squeezing more efficiency out of their existing staff.’19
Hiring humans rather than machines, how retro! It seems we have suffered from too little labour-saving or efficiency-squeezing investment in the current decade, but this does not explain earlier diminishing productivity growth when Moore’s law was powering the exponential advance in information technology. Yet back in 2014 another FT leader described how manual and clerical workers in wealthier nations have struggled as a result of growth in trade and technology over the past three decades. Entrepreneurs and the ‘1 per cent’ have gained most, but ‘now the chill wind of automation may blow on the peaks’.20
So, you’re damned if you do and damned if you don’t. Not enough investment, and productivity and growth in living standards stall, but too much labour-saving investment, and (real) wage growth stalls. This sounds like a lose-lose situation.
In 2013 Carl Frey and Michael Osborne predicted that up to 47 per cent of US jobs were at risk from automation, with workers in transport and logistics, office and administrative support most at risk.21 This will increase inequality as income is transferred to the owners of robots. Meanwhile those clinging to the hope that improving education will shoe-horn workers into higher-skill occupations simplistically assume that we can predict what kinds of skills will be needed several decades down the road. However, cold realism argues for many more living a life of leisure, which probably requires some redistribution of income and wealth from the few who reap the rewards from new technology.
Perhaps, as Ford and others argue, we have simply been experiencing the wrong kind of technological change. Information technology and computers don’t fundamentally change our lifestyles and quality of life the way electricity, the motor car and jet planes did. Perhaps the pace of truly transforming innovation has slowed since the first half of the twentieth century, when its fruits were more widely distributed.
Pessimists believe that information technology enables a small minority to prosper. Instead of a free market where the constant pressure of competition prevents firms and individuals from earning and maintaining excessive profits, technology seems to create a small number of players who wipe out or buy out their competitors. You only have to consider the dominant technology brands – Microsoft, Apple, Google, Amazon, Facebook, Netflix – to see that once established, these businesses have the means to extend their competitive advantage and crowd out competitors.
This is hardly the land of competition anticipated by Adam Smith’s invisible hand and totally inconsistent with economic theories which assume that abnormal profits will be arbitraged away by new entrants into the market. But if we are not in a world of perfect competition, what guarantee is there that the invisible hand of the market will guide us to optimal and efficient outcomes? None. Instead, is it not more likely that entrenched and dominant firms will be shielded from the pressure to improve, earning rent from their dominant position and relieved from the pressure to compete to provide the best-quality products? Some might view Microsoft’s dominance of the computer operating system market as a case in point.
At the individual level, in a winner-takes-all system aided and abetted by global communications, top sports and entertainment personalities, financiers and CEOs can leverage their earning power to unprecedented global levels.
Brynjolfsson and McAfee argue that it is wrong to lay the blame for the loss of earning power of much of the working population entirely on the effects of globalisation. The fact is, the degree of interconnectedness needed for globalised production and distribution would not be possible without modern technology for transport and communications. Technology has been the handmaiden of globalisation.
Ford lays the blame for the loss of manufacturing jobs in the US squarely on technology rather than globalisation, since the downward trend in manufacturing employment started decades before China joined the global marketplace. He argues that the US actually produces more now, but with fewer workers, even if the rate of productivity growth has slowed.22 General Motors, the erstwhile car manufacturing giant, employed around 618,000 workers in the US at its peak in 1979, out of a total global workforce of around 850,000. Today, a little over 55,000 jobs have survived at its US plants. US Steel jobs peaked at 340,000 in 1943 and have shrunk to around 29,000 in 2018. But this analysis does not take into account the obvious point that both of these companies have lost market share, as their industries have been invaded by imports and foreign-owned plants; Toyota alone now operates around a dozen factories in the US.
The leading technology companies of today also employ fewer people than the old industrial companies. Microsoft employed around 80,000 workers in the US in 2018, compared to Facebook’s 35,000 and Netflix’s 7,000. Apple employed 84,000 US workers in 2018 and Google around 90,000. This represents a huge jump in productivity compared to the old manufacturing industries, but the flip side of the coin is that it no longer requires many people to produce a given unit of measured economic output – sidestepping the technicalities of how to measure the economic output of a Google or Facebook.
There is no way the numbers of old-economy workers that are now superfluous to requirements can be employed in the new knowledge industries. Even Amazon, the number-one online retailer, which has added workers over the last five years and in 2019 employed around 647,000 worldwide (including part-time and low-paid order fulfilment workers in its warehouses), is still dwarfed by the largest bricks-and-mortar retail chain Walmart, which boasts 1.5 million employees in the US alone.
If technology is one of the culprits for the loss of employment, particularly in manufacturing, it is difficult to point to the commensurate creation of secure high-paying, high-value jobs elsewhere to compensate for these losses. Even where other jobs have taken up the slack, in many cases the replacement jobs have not commanded decent wage levels; they’re effectively downgrades.
Optimists, on the other hand, continue to believe the opposite. Since the unprecedented growth of the last 150 years occurred thanks to the advent of machines, they argue, the slowdown in structural growth and real income growth over the last ten years must be because our rate of technological innovation has slowed. There has been no new ‘big thing’ since high-speed internet and mobile phones. What we need is more technological change, not less.
This seems counter-intuitive given the continued growth in computing power and its application to more and more tasks. Examples abound: automated transport systems, driverless cars and drone deliveries are just the tip of the iceberg. Even the traditional bobby on the beat has been superseded by CCTV and controllers in front of their monitors directing a few rapid-response units. The effects of today’s technology are present everywhere, so that fewer and fewer jobs seem immune to substitution by machines.
There has been very little discussion by the authorities of the possibility that the relationship between technological innovation and employment and wage growth may have been turned upside down by computing power. Is it because they are in denial? Or perhaps they don’t know which way to turn so it is better to keep silent and await developments. Economist Jeffrey Sachs, in a 2018 interview, warned that a ‘tech tax’ is necessary if the world is to avoid a dystopian future in which AI leads to the concentration of wealth in the hands of a few thousand people. He argues that new technologies are dramatically shifting income distribution worldwide ‘from labour to intellectual property (IP) and other capital income’.
Suppose for a moment that Moore’s law comes to an end and the rate of advance of computing power slows or even grinds to a halt, constrained by the sheer physical limits of making the building blocks of computers’ brains smaller and smaller. Should we welcome this because it will give humans a breathing space to up their skills and work with machines in large numbers once more? Or should we fear the approaching end of the digital revolution? Could it be that the huge gains in productivity and living standards of the last 150 years are coming to an end so that from now on the cake will no longer grow, especially if population growth is constrained by environmental pressures? If that happens, what happens to the free-market economists’ trickle-down effect, which was supposed to support the social contract? If the masses can’t benefit from at least some of the growth as the pie expands even if their share shrinks, won’t social peace break down as it becomes a question of how the rewards are shared out? Are we already in the foothills of this phenomenon?
Conventional economists and politicians ridicule such notions as unfounded and alarmist. But they are no less well founded than the mass of dumbed-down free-market beliefs that have been accepted as valid these last thirty-five years. And we have just witnessed the lack of imagination and prescience exhibited by these same conventional thinkers ahead of the greatest financial and economic crisis since the 1930s. As conventional living standards level off, it is at best complacent to do nothing and not rethink our framework for understanding how the economy works.
One conclusion is that we must come round to the notion of a guaranteed living wage for part of the population. As far back as the 1960s, a group of distinguished academics and journalists, including Friedrich Hayek, proposed this in order to neutralise the effects of rising unemployment and inequality. More recently, thinkers ranging from Martin Ford23 and Erik Brynjolfsson24 to Robert Reich25 and Guy Standing in the UK26 have also started to advocate some form of basic income as a solution to technological unemployment. Given the vast output of machines, it would be possible to put this in place at little real cost. We will return to this theme in the concluding chapter.