IT IS IMPOSSIBLE FOR MOST OF US TO CONCEIVE OF A SOCIETY without jobs. But it is not frivolous to ask whether there will be anywhere near enough jobs to provide employment for all the people who require incomes, and whether the wages and conditions of the jobs that exist will be remotely close to satisfactory for a credible economy or a democratic society. This requires examining the role of technology and automation in the American capitalist economy and the relationship of technology to the pursuit of profit.
UNEMPLOYMENT AND CONTEMPORARY CAPITALISM
To understand the jobs picture in any capitalist economy, the starting point is the rate of economic growth. As a rule, when economies are growing rapidly, they tend to reach maximum employment levels, and often approach what has traditionally been defined as “full employment.” With labor becoming scarce, wages increase and good times abound. When economies are sluggish and suffer declines in investment and output, people are put out of work and jobs become scarce. The glut of unemployed workers puts downward pressure on wages.
CHART 1 Percentage of Quarters with 6 Percent or Greater Real GDP Growth, 1930–2015
The foundation of our analysis of the jobs picture is provided in Chart 1, which was developed by economist Fred Magdoff; it demonstrates that the rate of growth in American capitalism has been on a downward trajectory for a good five decades, and that the process has accelerated in the new century.1 Our measure is the percentage of quarters in a time period in which the real growth rate exceeded 6 percent.2 These quarters of 6 percent annual growth point to periods that are boom times by any calculation. The Great Recession that began in 2008 has aggravated and highlighted the problem of slow growth, such that economists across the political spectrum now speak openly of the United States economy as being in a period of long-term secular stagnation.
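The boom-quarter measure described above can be sketched in a few lines: for a run of quarterly annualized real growth rates, compute the share of quarters at or above the 6 percent threshold. The growth figures below are illustrative placeholders, not actual Bureau of Economic Analysis data.

```python
# Sketch of the Chart 1 measure: the percentage of quarters in a period
# whose annualized real GDP growth met or exceeded a boom threshold.

def share_of_boom_quarters(growth_rates, threshold=6.0):
    """Percentage of quarterly growth rates at or above `threshold`."""
    if not growth_rates:
        return 0.0
    booms = sum(1 for g in growth_rates if g >= threshold)
    return 100.0 * booms / len(growth_rates)

# Hypothetical decade of quarterly annualized growth rates (percent):
quarters = [2.1, 6.5, 3.0, 7.2, 1.4, -0.5, 2.8, 3.3, 6.1, 2.0]
print(round(share_of_boom_quarters(quarters), 1))  # → 30.0
```

Applied decade by decade to the actual quarterly series, this statistic yields the downward staircase the chart displays.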
Another way to express the downward growth trajectory of US capitalism is to look at private investment, which is the heart and soul of a capitalist economy. As Chart 2 reveals, private investment has been declining as a percentage of Gross Domestic Product. It uses a ten-year moving average to smooth out the fluctuations and provide a clearer trend line. Unless there is a large increase in government spending to compensate for the decline—which is a controversial policy option in a capitalist economy—everything else being equal, slower growth rates and higher levels of unemployment result. Indeed, even more striking is the massive and increasing amount of cash that corporations are holding, as shown in Chart 3. This “unemployed” capital is a sign of a stagnating economy, with profitable investment opportunities growing so scarce that firms would rather sit on their cash than risk it in real investments.
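The ten-year moving average used to smooth Chart 2 is a standard trailing average: each plotted point is the mean of the preceding ten annual observations, which filters out cyclical noise and leaves the trend. The input ratios below are illustrative, not the chart's actual data.

```python
# Minimal sketch of the smoothing in Chart 2: a trailing ten-year
# moving average over annual investment-to-GDP ratios (percent).

def trailing_moving_average(series, window=10):
    """Trailing moving average; points before a full window
    has accumulated are omitted."""
    out = []
    for i in range(window - 1, len(series)):
        out.append(sum(series[i - window + 1 : i + 1]) / window)
    return out

# Hypothetical annual net-investment-to-GDP ratios:
annual_ratios = [4.0, 4.1, 3.9, 3.8, 4.2, 3.7, 3.5, 3.6, 3.4, 3.3, 3.2]
smoothed = trailing_moving_average(annual_ratios)
print([round(x, 2) for x in smoothed])  # → [3.75, 3.67]
```

The cost of the smoothing is that the first nine years of the raw series produce no plotted point, which is why such charts typically begin a decade after the underlying data does.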
Why exactly US capitalism—and world capitalism, for that matter—remains stagnant with no end in sight is a crucial issue that can be traced in part to the way in which monopoly-finance capital produces stagnation. That’s another discussion, however.3 Our concern at this point is with the jobs picture, and Chart 4 demonstrates that unemployment has been increasing in general while capitalism has been tending toward stagnation. We provide here not only the total amount of “official” unemployment, but a broader assessment that includes people who have dropped out of the labor market and are no longer actively seeking employment—that is, people who constitute the “hidden unemployed.”
CHART 2 Net Private Non-Residential Fixed Investment as a Percentage of GDP, 1949–2013
CHART 3 Cash and Short-Term Investments of the Top 1,200 Non-Financial US Firms, 1970–2013
CHART 4 Official and Hidden Unemployment, 1962–2014
CHART 5 Duration of Job Losses in Selected Recessions
What is important about these first four charts is that they reveal that the employment situation in the United States is not simply a function of a short-term boom-and-bust business cycle. It is, instead, a longer-term problem of stagnation, such that even as the economy recovers after a downturn, it takes longer to return to the levels of employment seen in previous expansions, and the recessions can grow more severe and last longer.
Chart 5 continues in this vein. It shows how many months it has taken in each recovery since the early 1970s for the economy to regain the jobs lost to the downturn. Every single recovery has taken longer than the previous one, with the most recent, following the Great Recession, especially sluggish. On this weaker foundation, the system as a whole is more susceptible to panics and crashes of the 1929 and 2008 variety. Beyond the traditional official rate of unemployment, other developments reveal the deterioration of the employment situation facing workers.
Consider the situation facing young workers. Chart 6 demonstrates that the economy is generating fewer middle-class jobs, and an increasing proportion of the jobs provide incomes at poverty levels. This is what economists call “labor market polarization”—great jobs for those at the top, a mountain of crappy jobs at the bottom, and fewer and fewer jobs in between.4 Studies reveal that this is a phenomenon across all sixteen European Union nations as well.5 This growth in dismal jobs is not because workers are less productive. Chart 7 shows the growing split between the growth in Gross Domestic Product and household income since the 1970s. Put another way, from 1945 to the early 1970s, as workers’ productivity increased, so did their wages by a comparable percentage. Since the 1970s, worker output has grown, in some cases sharply, but wages have stagnated.6
CHART 6 Changes in Job Growth by Median Wage for Selected Periods, 2000–2014
CHART 7 Index of Real Median Household Income and Real GDP, 1967–2013 (1967 = 100)
CHART 8 Median Wage and Salary Income of Persons 18–24 Years Old, Without College Experience, 1964–2014
Downward pressure on wages and working conditions is a consequence of the deteriorating jobs picture. When the Associated Press examined this issue in 2013, its report highlighted illustrative comments from a working-class woman in Appalachia, who said, “If you apply for a job, they’re not hiring people, and they’re not paying that much to even go to work.” Children, she said, have “nothing better to do than to get on drugs.”7 Chart 8 shows the decline in wages for young workers without a college education. Since 2000 the real wages for young people, both high school and college graduates, have plummeted.8 But it goes back decades. One of us left school to work at a lumberyard in the mid-1970s and earned an income that would translate into around $75,000 annually in 2015 dollars. Back then this sort of unionized blue-collar job seemed good but unremarkable, and was taken predominantly by people who had a high school education, if that. Today, the young people completing college whom we encounter would consider a job with that salary and its relatively lavish benefits the equivalent of winning the Irish Sweepstakes.
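The lumberyard comparison above rests on a standard inflation adjustment: a nominal 1970s wage is scaled by the ratio of consumer price indexes. The sketch below uses approximate CPI-U annual averages for 1975 and 2015; the exact figures, and the authors' own deflator, may differ slightly.

```python
# Back-of-envelope sketch of the inflation adjustment behind the
# lumberyard example: converting a mid-1970s nominal wage into
# 2015 dollars via the CPI ratio.

CPI_1975 = 53.8   # approximate CPI-U annual average, 1975
CPI_2015 = 237.0  # approximate CPI-U annual average, 2015

def to_2015_dollars(nominal_1975):
    """Scale a 1975 dollar amount to its 2015 equivalent."""
    return nominal_1975 * CPI_2015 / CPI_1975

# A nominal annual wage of roughly $17,000 in 1975 corresponds to
# about $75,000 in 2015 dollars:
print(round(to_2015_dollars(17_000)))
```

With these indexes the multiplier is about 4.4, so the unionized blue-collar wage of the mid-1970s translates into a salary most of today's college graduates would envy.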
In this environment, it is not surprising that significant numbers of people have dropped out of the labor market and are no longer actively looking for work. As economist Tyler Cowen notes, “Most of the measured declines in employment participation have been coming from younger men, not early retirees.” He explains that “adult males are seceding from the workforce—or being kicked out—in frightening numbers. Few of these individuals are wealthy playboys.”9 In 2014, 16 percent of men between the ages of 25 and 54 were not working; in the late 1960s, the figure was 5 percent.10 The percentage of women in the workforce peaked in the late 1990s and is now back to levels not seen since the 1980s. Charts 9 and 10 document these trends.
In fact, the drop-off in labor participation rates has been especially striking since 2000, when the historic increases stimulated by women joining the labor force began to reverse. How significant a factor is this? Economic observers note that the official labor force participation rate has been declining continually, from an annual average of 67.1 percent in 2000 to 62.5 percent in 2015. This translates to the disappearance of close to 7.2 million workers from the official labor force in 2015 (see Chart 11 sources in the Statistical Appendix). However, in this case (as in so many others), the official labor statistics are inadequate. Indeed, if we estimate how many more jobs would be needed to maintain the level of civilian employment that existed in 2000, the picture changes dramatically.11 Chart 11 does just this, revealing that the economy would need to generate nearly 14 million more jobs in 2015 if all those workers who have left the labor market since 2000 had remained in it and had jobs. The number, not to mention the trend, of these “missing jobs” is staggering and puts the employment crisis in a very different light from the pronouncements of officials who boast about “continual job growth.”12
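The missing-jobs estimate behind Chart 11 can be sketched simply: hold the 2000 employment-to-population ratio fixed, apply it to the later population, and compare with actual employment. The inputs below are rounded approximations of the relevant BLS series, chosen for illustration; with these round numbers the gap comes to roughly 13 million, while the chart's own inputs put it near 14 million.

```python
# Sketch of the Chart 11 "missing jobs" calculation: how many jobs
# would be needed in 2015 to restore the 2000 employment-to-population
# ratio. Figures are rounded approximations, not the chart's exact data.

emp_pop_ratio_2000 = 0.644   # civilian employment-population ratio, 2000 (approx.)
population_2015 = 250.8e6    # civilian noninstitutional population, 2015 (approx.)
employment_2015 = 148.8e6    # actual civilian employment, 2015 (approx.)

jobs_needed = emp_pop_ratio_2000 * population_2015
missing_jobs = jobs_needed - employment_2015
print(f"missing jobs: {missing_jobs / 1e6:.1f} million")  # → missing jobs: 12.7 million
```

The logic of the exercise is what matters: because the population keeps growing, merely holding employment steady, or even adding jobs more slowly than the population expands, widens the gap.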
CHART 9 Labor Force Participation Rate, Males, 1948–2014
CHART 10 Labor Force Participation Rate, Females, 1990–2014
This data creates a certain cognitive dissonance for those Americans who spent much time in 2014 and 2015 listening to politicians and pundits crow about how the unemployment rate was under 6 percent and the economy was partying like it was 1999. “Right now,” Gallup CEO Jim Clifton said in February 2015, “we’re hearing much celebrating from the media, the White House and Wall Street about how unemployment is ‘down’ to 5.6%. The cheerleading for this number is deafening. The media loves a comeback story, the White House wants to score political points and Wall Street would like you to stay in the market.” Clifton, head of one of the top public opinion survey organizations in the world, adds:
None of them will tell you this: If you, a family member or anyone is unemployed and has subsequently given up on finding a job—if you are so hopelessly out of work that you’ve stopped looking over the past four weeks—the Department of Labor doesn’t count you as unemployed. That’s right. While you are as unemployed as one can possibly be, and tragically may never find work again, you are not counted in the figure we see relentlessly in the news—currently 5.6%. Right now, as many as 30 million Americans are either out of work or severely underemployed. Trust me, the vast majority of them aren’t throwing parties to toast “falling” unemployment.
CHART 11 Estimated Number of Missing Jobs Since 2000 Peak in Labor Force Participation
CHART 12 Working Poor, Hidden Unemployed, and Officially Unemployed, 1968–2014
There’s another reason why the official rate is misleading. Say you’re an out-of-work engineer or healthcare worker or construction worker or retail manager: If you perform a minimum of one hour of work in a week and are paid at least twenty dollars—maybe someone pays you to mow their lawn—you’re not officially counted as unemployed in the much-reported 5.6%. Few Americans know this.
Yet another figure of importance that doesn’t get much press: those working part time but wanting full-time work. If you have a degree in chemistry or math and are working ten hours part time because it is all you can find—in other words, you are severely underemployed—the government doesn’t count you in the 5.6%. Few Americans know this.13
Chart 12 describes the world as it is experienced by Americans and illuminated by Jim Clifton’s data. It provides a comprehensive picture of those officially unemployed, as well as those who are underemployed, those who have given up looking for work, and also those working at poverty wages. This is the real unemployment and underemployment picture, and it is not pretty.
In this real world, the one not inhabited by politicians and pundits, by the first decade of this century the labor market had changed so much in the United States and other industrial nations that some economists wrote of the emergence of a “precariat,” a new category beneath the traditional working class.14 This precariat referred to a hodgepodge of part-time and freelance jobs where the workers had no rights, security, or benefits, and generally not much income—hence their precarious material and psychological state of existence. Members of the precariat are often educated and qualified for far better employment but are unable to find any positions.15 The Economist calculates that roughly a third of the “employed” young people in the advanced economies have these “informal and intermittent jobs.” Further, it notes somberly that research suggests that young people who enter such a dubious labor market often get “scarred”; they face a “wage penalty” of up to 20 percent for a good two decades, and the scarring is passed down to subsequent generations.16 As the economist John Schmitt put it in 2009, the power imbalance between this workforce and their employers “is a central cause of the problems facing the low-wage workers.”17
A contributing factor in the decline of wages and working conditions for workers has been the disintegration of the trade union movement in the United States.18 Union membership has collapsed from roughly a third of American workers in the 1950s and a quarter of American workers in the 1970s to around 11 percent today, and a mere 6.6 percent in the private sector.19 In absolute terms, union membership dropped from 17.7 million in 1983, when the United States was just coming out of a severe recession, to 14.5 million in 2013, when the United States was more than four years into a much-discussed “recovery.”20 Chart 13 illustrates this decline.
Prior to the 1960s, union membership was overwhelmingly in the private sector; so we begin to separate public from private in the graph in 1984, when the data becomes reliable. Public-sector unionization remained more robust for a number of years in the 1990s and 2000s because state and local governments were prohibited from engaging in the sort of aggressive union-busting campaigns that became increasingly common in the private sector, and it is generally implausible to threaten to move public-sector labor overseas to low-wage locales. However, since 2011 anti-labor state governors and legislatures in states such as Wisconsin, Michigan, and Indiana began to eliminate protections for public-sector workers, leading to the relatively new phenomenon of declining public-sector unionization rates.21 The quantitative decrease in overall unionization rates has become a qualitative change: organized labor now simply struggles to stop the continued erosion; even in many states where they were once strong, unions are no longer in a position to embark on major organizing drives and win outright victories in fights for higher wages and benefits. Chart 14 documents the number of strikes and work stoppages over the past six decades. Without the credible threat of a strike, management has little to fear. As Schmitt puts it, “The huge decline in unionization in the private sector has decimated the U.S. working class, which depends on the union wages and benefit premium to secure a middle-class standard of living.”22 Schmitt is talking about the broad working class, as even workers who do not belong to unions benefit when wages are higher across the economy.
CHART 13 Union Membership as a Percentage of Total Employed Workers, 1944–2014
These developments in the labor market both reflect and play a huge role in generating the massive increase in economic inequality that has been such a subject of conversation over the past few years.23 Most studies suggest that between one-fifth and one-third of the increase in economic inequality among men is due to the decline in unions, what one expert terms “the devastation in the labor movement.”24 Back in 1979 over half of American workers—often union workers—had pensions connected to their jobs; today it is around one-third.25 In a nutshell, income that once went to workers is now going to owners and bosses.26 Whereas the CEO of a large company made around twenty times more than the average worker in 1965, by 2013 the ratio had grown to nearly three hundred to one.27 To put it another way, if the United States had the same income distribution in 2015 that it had in 1979, $1 trillion in income going to the top 1 percent would instead go to the bottom 80 percent.28 A study by economists Michael Greenstone and Adam Looney concludes that “most men were earning substantially less in 2009 than men of similar ages and education did in 1969, adjusted for inflation.”29
Chart 15 shows, decade by decade, how the income shares have changed over the past eighty years. What this chart cannot convey is that this portends not only inequality but, in a stagnant economy, increasing poverty. By 2013 an Associated Press study concluded that four out of five American adults “struggle with joblessness, near-poverty or reliance on welfare for at least parts of their lives, a sign of deteriorating economic security and an elusive American dream.”30 The popular notion that poverty is something experienced largely by people of color—and that white people were “middle class”—has blown up like a trick cigar on April Fool’s Day. Although people of color remain disproportionately among the ranks of the poor, they are being joined by a wave of working-class and middle-class whites moving down the economic ladder.31 The flip side of this coin is that upward economic mobility—people’s ability to improve their lot compared to that of their parents—has all but disappeared.32 The United States that was once broadly viewed as “the land of opportunity” today ranks near the bottom of advanced economies for social mobility.33
CHART 14 Number of Work Stoppages Idling 1,000 Workers, 1947–2014
The Great Recession that began in 2008 aggravated problems in the labor market and shifted the matter to another level altogether.34 Chart 16 shows the increase in long-term unemployment, people who have been out of work for at least fifteen weeks. That is not necessarily the worst of it. The vast majority of the jobs lost in the recession were considered “mid-wage,” while the majority of the new jobs created in the recovery were “low-wage.”35 The stock market skyrocketed and fortunes were made on Wall Street, but as New York Times financial reporter Felix Salmon put it, “These days a healthy stock market doesn’t mean a healthy economy, as a glance at the high unemployment rate or low labor-market participation rate will show.”36 In fact, when corporations announce plant closings and layoffs in the United States, media outlets report that the news does “wonders” for stock prices.37
CHART 15 Average Change in Income Share for Selected Income Groups, 1935–2014
CHART 16 Unemployed for at Least 15 Weeks as a Percentage of the Labor Force, 1962–2014
ENTER TECHNOLOGY AND CREATIVE DESTRUCTION
Analysis of this phenomenon does not merely point to the traditional meme that fat cats have rigged the game to get even fatter while others starve. Job-killing technology, in many instances, provided an explanation for the lack of job growth during the recovery of 2009–2013. This contradicts some longstanding and deeply held conventional wisdom, especially among economists. The general notion has been an optimistic one; as the Economist put it, “Although innovation kills some jobs, it creates new and better ones, as a more productive society becomes richer and its wealthier inhabitants demand more goods and services.”38 But the link between increasing private investment and rising employment appears to be weakening. American capitalism seems to have turned a corner: increases in private investment and worker productivity no longer necessarily lead to commensurate increases in employment or real incomes.39
As business reporters Bernard Condon and Paul Wiseman explain, “Technology is used by companies to run leaner and smarter in good times and bad, but never more than in bad. In a recession, sales fall and companies cut jobs to save money. They turn to technology to do tasks people used to do. Then it hits them: They realize they don’t have to re-hire the humans when business improves, or at least not as many.”40 The Nobel Prize-winning economist Michael Spence put it this way in 2013:
Growth and employment are thus diverging in advanced countries. The key force driving this trend—technology—is playing multiple roles. The replacement of routine manual jobs by machines and robots is a powerful, continuing, and perhaps accelerating trend in manufacturing and logistics, while networks of computers are replacing routine white-collar jobs in information processing. Part of this is pure automation. Another important part is disintermediation—the elimination of intermediaries in banking, online retail, and a host of government services, to name just a few affected areas.41
Most Americans are aware that monopolized online sales and distribution operations (such as Amazon, for example) have decimated local bookstores and record shops, as well as chain operations such as Borders Books and Virgin Megastores. But disintermediation is also wiping out jobs at small, medium-sized, and large businesses that used to sell everything from computer hardware and software to airline tickets, toys, and contact lenses.42
As Chart 17 indicates, the one hundred largest US companies (in terms of total annual revenue) are able to generate more US revenues and earn more US profits with fewer American workers, and the process appears to be accelerating. These one hundred firms accounted for 43 percent of US GDP in 2013, up from 26 percent in 1950, so this trend is hardly on the periphery of the economy.43 There is the palpable sense that technology is destroying more jobs than it is creating, an issue we will take up in short order.
For young people it is arguable that the employment picture is as dismal as it has been at any time since the Great Depression of the 1930s, and for college graduates it may be worse. The Federal Reserve Bank of New York in 2014 determined that 46 percent of recent college graduates were working at jobs that did not require a BA.44 That is bad news not only for college graduates but for high school graduates, “who find themselves competing with college graduates for basic jobs in service businesses.”45 Even before the Great Recession of 2008, the Bureau of Labor Statistics forecast that two-thirds of the jobs available between 2008 and 2018 would not require any post-secondary education.46 As the journalist Derek Thompson concludes, “The job market appears to be requiring more and more preparation for a lower and lower starting wage.”47 The Economist announces that young people are experiencing an “epidemic of joblessness.”48 Newsweek characterizes young Americans as constituting “Generation Screwed.”49 There are nowhere near enough jobs, and the jobs that do exist, to employ the vernacular, suck.50
CHART 17 US Revenue and US Gross Profit per Employee of the Top 100 US Firms, 1953–2013
By 2015 the precariat is evolving into a cybertariat, as digital technologies become more central to organizing and even constituting labor.51 The Economist credits the ubiquity of the smartphone for moving freelancing from the margins to the center of capitalism.52 By 2025 experts anticipate that one of every three global labor “transactions” will be conducted online as part of the “on-demand” or “crowd labor” economy, with a few gigantic digital hiring hall corporations using their networks and apps to get temp labor for employers.53 Informal work, or freelancing, already accounts for around one-third of the US workforce, fully 53 million workers, according to an Edelman Berland report prepared for the Freelancers Union.54 A Christian Science Monitor report stated that up to 50 percent of the new jobs in the recovery were freelance positions.55 Economic Modeling Specialists Intl., a labor market analytics firm, calculated that by 2014 some 18 percent of all US jobs were performed by part-time freelancers or part-time independent contractors. There was a 60 percent increase in the number of these part-time gig jobs from 2001.56
Proponents of the informal or 1099 sector—taken from the “independent contractor” 1099 form used by the Internal Revenue Service—play up the freedom, power, romance, and adventure that come with being one’s own boss, and for some of those 53 million workers that is no doubt the case as they swashbuckle their way to fame and fortune, or at least personal satisfaction. But as one New York Times examination concludes, many of these workers are “less microentrepreneurs than microearners. They often work seven-day weeks, trying to assemble a living wage from a series of one-off gigs.”57 According to the Government Accountability Office, these freelance workers are twice as likely as traditional full-time employees to have an annual income under $15,000.58
The venture capital firm SherpaVentures argues that this new freelance sector is the wave of the future and a win-win for everyone. “Perpetual hourly employment is often deeply inefficient for all parties involved,” one of their reports states.59 While many freelancers may disagree and wish for something more stable and remunerative, American corporations are A-OK with the new world order. “Major corporations,” Lawrence Summers and British parliamentarian Ed Balls noted in 2015, “have opted to use subcontracting to perform basic functions, and many workers are now classified as independent contractors, eroding basic labor-law protection.” Accordingly, data reveals that the percentage of male workers who have worked with the same firm for at least ten years has dropped sharply over the past two decades, especially for younger workers.60 “What once was a relationship” between firms and their employees, one reporter explains, “is now a transaction.”61 Businesses “have found that having a large nontraditional workforce makes them more competitive.”62 While the Economist has no illusion that this new freelance-based “on-demand” economy is a good thing for workers, it nonetheless regards the process as unstoppable.63 Arguably the leading expert on the emergence of the cybertariat is Ursula Huws. “We are now living in a period,” she wrote in 2015, when there has been “a sea change in the character of work.”64
It is left to the acclaimed pro-market economist Tyler Cowen to capture the logic of where all of this is going: “We will move from a society based on the pretense that everyone is given an okay standard of living to a society in which people are expected to fend for themselves much more than they do now. I imagine a world where, say, 10 to 15 percent of the citizenry is extremely wealthy and has fantastically comfortable and stimulating lives, the equivalent of current-day millionaires, albeit with better health care.”65
The other 85 to 90 percent of us? Not so much.
Whatever the society Cowen imagines, with the economic foundation that he describes, it will not be a democracy. Nor will it enjoy the rule of law or even be much of a civilization. And this is before we add to the mix the digital technological revolution, which is redefining the economy and employment across the board. This is a process that in crucial respects is only just beginning, and it is coming at us at what seems like historical warp speed. How will that change things?
A VERY SHORT HISTORY OF AUTOMATION
It is anything but a coincidence that the modern explosion in technologies correlates closely with the rise of modern industrial capitalism. Technology “was a tool in the arsenal of capitalist competition, the purpose of which was to increase the efficiency of labor,” economist James Galbraith writes, noting that in core respects that remains every bit as true today. “The big function of the new technologies is to save labor costs.”66 Businesses also develop technologies to allow them greater control over the work process and to minimize the power of labor. “The logic of capitalism, when combined with the history of scientific and technological progress, would seem to be a recipe for the eventual removal of labor from the processes of production,” Nicholas Carr writes. “Machines, unlike workers, don’t demand a share of the returns on capitalists’ investments. They don’t get sick or expect paid vacations or demand yearly wages. For the capitalist, labor is a problem that progress solves.”67
For this reason, workers have always had skepticism toward labor-saving technology, and it has sometimes been the subject of, for lack of a better term, intense class struggle. Most literature understandably focuses on the Luddites in 1810s England, but they were hardly anomalous. The great historian of capitalism Sven Beckert notes that, at the very beginning of the industrial system in 1770s England, entrepreneurs’ very first “innovations sometimes even brought down the wrath of their neighbors, who dreaded the job losses the innovators caused.” “Fear of mob violence,” he adds, drove some fledgling capitalists to move their residences “away from the places they had made their inventions.”68
Despite technology’s role in eliminating and altering jobs, it has also been seen as a uniquely progressive force for raising living standards. In 1930 the visionary economist John Maynard Keynes introduced the specter of “technological unemployment”; he defined this as “unemployment due to the discovery of means of economizing the use of labor outrunning the pace at which we can find new uses for labor.” He considered it “only a temporary phase of maladjustment.”69 Once the Great Depression had passed and was mostly forgotten, Keynes’s concerns about technological unemployment did not find much currency in the economics profession.70 The conventional wisdom concerning technology, especially technology that affects jobs, is that it is a net positive. It is true that some people lose in the short run because they are displaced, but as the economy grows, new industries arise and new employment opportunities appear. The lousiest and most mind-numbing jobs are the ones that tend to get replaced by machines, and the new jobs tend to require more education and skill and be more rewarding, we are told. The huge percentage of the population who would have worked in agriculture in the nineteenth century or manufacturing in the twentieth century invariably move on to bigger and better things when the nation needs far fewer workers to grow the food or make the products. It is a sign of progress.
Our concern is not with routine technological innovation or even with major product invention. What concerns us is what economists call a “general purpose technology” (GPT). As business scholars Erik Brynjolfsson and Andrew McAfee write, this refers to “a small group of technological innovations so powerful they interrupt and accelerate the normal march of economic progress.”71 In the past, economists Joseph Schumpeter, Paul Baran, and Paul Sweezy characterized these as “epoch-making” innovations because they so radically altered the course of capitalist development.72 These GPTs tend to be energy, transportation, and communication technologies, because radical changes in those areas redefine all markets, not just a specific industry or sector. They tend to have enormous geographical effects, extending markets and increasing effective demands for products. The classic examples of GPTs are steam power, electricity, railroads, and the internal combustion engine. In some cases, like railroads and automobiles, GPTs can drive massive waves of investment that lead to enormous spin-off industries. In these cases, the new technologies can drive a capitalist economy to much higher rates of investment, growth, and employment than would exist otherwise.
Computers and digital communication, connected by networks, are the newest GPTs, and by most accounts they are equal if not superior to any that have preceded them. They not only expand the size and scope of markets, they also radically lower the costs of production. As the leading scholar on automation, David F. Noble, has emphasized, the emergence of computers and automation was driven to no small extent by the military coming out of World War II and into the Cold War. From that base, “it took hold within industry, especially within those industries tied closely with the military and the military-sponsored technical community.”73 In November 1946 Fortune magazine published a large color spread on the “Automatic Factory.” “The threat and promise of laborless machines is closer than ever . . . all the parts are here,” it reported.74
Even at first blush, it was clear that something rather different from previous mechanization, and ultimately revolutionary, was at hand, and this was more than a little disconcerting. It is striking how sober the first response to automation was, even, or dare we say especially, by the one person closest to the science. On the heels of the very first computers being built, MIT professor and cybernetics pioneer Norbert Wiener speculated in 1950 that it would probably take two decades for automation to overhaul and dominate the economy. He wrote that the process
will lead to an immediate transition period of disastrous confusion. We have a good deal of experience as to how the industrialists regard a new industrial potential. Their whole propaganda is to the effect that it must not be considered as the business of the government but must be left open to whatever entrepreneurs wish to invest money in it. We also know that they have very few inhibitions when it comes to taking all the profit out of an industry that is there to be taken, and then letting the public pick up the pieces.
“The automatic machine,” he concluded, “is the precise economic equivalent of slave labor. Any labor which competes with slave labor must accept the economic conditions of slave labor. It is perfectly clear that this will produce an unemployment situation, in comparison with which the present recession or even the depression of the thirties will seem a pleasant joke.”75
Some of the period’s sharpest minds took notice. Kurt Vonnegut’s first novel, 1952’s Player Piano, was a dystopian tale about a near-future society where most work has been replaced by automation. It drew from Vonnegut’s experience seeing the introduction of computer-operated machinery in the General Electric plant where he worked after the war.76 Bertrand Russell wrote an essay in 1951 that asked “Are Human Beings Necessary?” as a response to computing. He concluded that “we shall have to change some of the fundamental assumptions upon which the world has been run ever since the beginning of civilization.”77 Critical theorist Erich Fromm addressed automation in his 1955 book, The Sane Society, and he was not immediately enthusiastic: “Is not the mode of work in itself an essential element in forming a person’s character? Does completely automatized work not lead to a completely automatized life?”78 That same year, University of Cambridge engineer R. H. MacMillan asked whether humans might be “in danger of being destroyed by our own creations.” MacMillan argued that “the rapidly increasing part that automatic devices are playing in the peace-time industrial life of all civilized countries” might in time pose the same peril for humanity as nuclear weapons.79 This point was not lost on Wiener, who mused, “Those of us who have contributed to the new science of cybernetics thus stand in a moral position which is, to say the least, not very comfortable.”80
By the mid-1950s governments and policymakers turned their attention to automation. In 1955 a subcommittee of the Joint Committee on the Economic Report of the United States Congress held two weeks of hearings on “Automation and Technological Change.” It generated numerous examples of what automation had accomplished in only a few years. For example, two workers now assembled one thousand radios a day, whereas that had required two hundred workers previously. One man in a Ford plant ran a single machine that did the work once done by between thirty-five and seventy workers, and forty-eight workers could now do what had taken four hundred workers twice as long to accomplish.81 In 1958 UNESCO devoted a full volume of its International Social Science Bulletin to a series of papers on the “Social Consequences of Automation.” “Although it may be wise to remain skeptical toward the illusory hope of a golden age within our reach and toward the fear of unemployment, which the same people usually experience simultaneously,” the Bulletin’s editors wrote in their foreword, it is possible “to detect a vital turning point in the history of our societies.”82
The one group with its head most directly on the chopping block was labor. Carr’s review of automation concludes that industrial planners saw it as a way to lessen the role and importance of labor unions. “The lesson would prove important: in an automated system, power concentrates with those who control the programming.”83 In a 1949 letter to United Auto Workers (UAW) president Walter Reuther, Wiener reported that he had, without success, “made repeated attempts to get in touch with the Labor Union movement, and try to acquaint them with what may be expected of automatic machinery in the near future.” Reuther telegrammed his enthusiasm for such a meeting, though it appears they simply maintained a correspondence for the next three years.84
It is no surprise that the trade union movement demonstrated a concern with what was happening. The International Labour Office held a conference on automation and employment in 1957; it was optimistic that “the long-run outlook is good,” while acknowledging severe “short-run problems” with displaced labor.85 Reuther’s public statements reflected both sentiments. “We know that you cannot hide from technological progress,” he told a Congress of Industrial Organizations–sponsored “National Conference on Automation” in 1955. “We know, too, that the labor movement, which is itself a progressive movement, must not stand in the way of scientific improvements.”86 “We cannot afford to hypnotize ourselves into passivity,” Reuther told the UAW convention that same year, with “the comforting thought, that in the long run, the economy will adjust to labor displacement and disruption which could result from the Second Industrial Revolution as it did from the First.”87
Perhaps the most sophisticated and prescient assessment of automation and its relationship to US capitalism came in a 1957 private report by the renowned Marxist economist Paul M. Sweezy.88 Sweezy examined the engineering research on leading technological developments, including automation and computerization, and came away impressed. “These new technological developments are comparable in importance to the steam engine in its day, and in due course will have effects of a no less revolutionary character.”89 Because automation was capable of creating “closed-loop” processes, it could and would eliminate the worker.
The purpose of automation is to cut costs. In all cases it does this by saving labor. In some cases, it saves capital too. . . . Whether displaced workers will find other employment depends upon whether new jobs are being created as rapidly as the rate of displacement plus the rate at which new workers are coming on the labor market. And this in turn depends on a variety of forces most of which are only related indirectly, if at all, to the processes of automation.
A “probable long-run effect of automation,” Sweezy wrote, “is what some economists have called a ‘shift to profits,’ that is to say, an improvement in the share of national income accruing to the owners of capital, or at least a slowing down of any tendency that may exist for the share of capital in the national income to decline.” Unless there were significant public-policy interventions, especially in education, “we could conceivably be faced with the problem of what to do with millions of ‘misfits,’ people who would not be employable in the more advanced industries and would therefore have no way of sharing in the benefits of increased productivity. Some of these people might become and remain totally unemployed, but it seems more likely,” Sweezy observed, that “the bulk of them would provide a low-wage labor force for a sector of marginal, substandard, exploitative industries.”
Sweezy outlined crucial problems in what later came to be known as software and hardware that needed to be resolved before automation could truly explode, but he had no doubt that those problems would eventually be solved. “The present period of preparation and gestation will be followed by periods of very rapid introduction of the new automatic techniques. . . . What this may mean had better be left for the present to the writers of science fiction.” To those who dismissed computerization and automation as no big deal, Sweezy responded, “Come back in another thirty years. The transformation of society implicit in the new technologies will then be in full swing and you will be able to see signs of it on every hand.” What Sweezy grasped well before all others was that automation, unlike most other innovations, ultimately saved on capital in a manner almost as striking as how it saved on labor. This meant that it propelled the productive capacity of society to incredible heights, and therefore exacerbated a central problem under modern capitalism of firms being able to sell at a profitable price all that they are capable of producing.90
By the early 1960s automation had burst into the popular consciousness, and caution was thrown to the wind.91 One writer caustically labeled it the period of “automation hysteria.”92 RAND economist Richard Bellman predicted that in short order a mere 2 percent of the population would be able to produce all of society’s material goods. One writer in 1964 noted that “Fortune, Newsweek, the Advanced Management Journal, and many other periodical and professional organs, indicate the potential of cybernation for wiping out not only most blue-collar jobs but also most office and ‘middle management’ positions.” Construction work, too, was projected as soon to be extinct, or at least greatly reduced.93 “It is often claimed that automation is nothing more than the latest stage in the evolution of technological means for removing the toil from work,” Newsweek editor Robert E. Cubbedge wrote in his popular 1963 book Who Needs People? Automation and Your Future. “The assertion is misleading. There is a very good possibility that automation is so different in degree as to be profoundly different in kind; that it poses unique problems for society, by challenging patterns of work, education, manufacturing, and distribution.”94
It also became the stuff of politics. The 1962 Port Huron Statement—the visionary manifesto of the newly created Students for a Democratic Society (SDS) and the 1960s New Left, written primarily by the young activist Tom Hayden—gave considerable attention to automation, which, he wrote, “is transforming society in ways that are scarcely comprehensible.” Automation “is destroying whole categories of work” while “it paradoxically is imparting the opportunity for men the world around to rise in dignity from their knees.” But such promise—an “economic utopia”—was impossible in a system with “elitist control,” where “automation is initiated according to its profitability.”95
On March 22, 1964, the “Ad Hoc Committee on the Triple Revolution” submitted a fourteen-page memorandum to President Lyndon Johnson, in which the “cybernation revolution” was positioned alongside human rights and militarism as the main challenges to modern societies. The memo, which was signed by current and future Nobel Prize winners Linus Pauling and Gunnar Myrdal as well as the publisher of Scientific American, warned the president that
as machines take over production from men, they absorb an increasing proportion of resources while the men who are displaced become dependent on minimal and unrelated government measures—unemployment insurance, social security, welfare payments. These measures are less and less able to disguise a historic paradox: that a growing proportion of the population is subsisting on minimal incomes, often below the poverty line, at a time when sufficient productive potential is available to supply the needs of everyone in the United States.96
The memo called for a guaranteed basic income—not based upon one’s labor—for all Americans to solve the problem.
Although this memorandum has largely been forgotten, it had considerable influence at the time. Indeed, in his final sermon, delivered on March 31, 1968, before an audience in the thousands at Washington DC’s National Cathedral, Dr. Martin Luther King Jr. invoked the “triple revolution” and the importance of automation and cybernation at the beginning of his presentation. “Through our scientific and technological genius, we have made of this world a neighborhood and yet we have not had the ethical commitment to make of it a brotherhood,” King observed in words few others could muster. “But somehow, and in some way, we have got to do this. We must all learn to live together as brothers or we will all perish together as fools.”97
Automation generated a provocative set of arguments on the political left. Marxist theorist Herbert Marcuse captured the dilemma in his influential One-Dimensional Man, published in 1964, using Marx’s nomenclature of “dead labor” to refer to capital goods. “Now automation seems to alter qualitatively the relationship between dead and living labor; it tends to the point where productivity is determined ‘by the machines and not by the individual output.’ Moreover, the very measurement of individual output becomes impossible.”98 Two leftist activists argued that same year in Monthly Review that “the cybernation revolution poses an impasse for socialists also: it presents us with nothing less than the liquidation of the working class as a significant component of society. When human industrial labor is obsolescent, to project a worker’s state becomes an anachronism.” They stated that the memo on “The Triple Revolution” was correct, and a basic income should be guaranteed to all Americans independent of their labor.99 Sweezy and his Monthly Review co-editor Leo Huberman disagreed: “Our conclusion can only be that the idea of unconditionally guaranteed incomes is not the great revolutionary principle which the authors of ‘The Triple Revolution’ evidently believe it to be. If applied under our present system, it would be, like religion, an opiate of the people tending to strengthen the status quo [and] far from inaugurating an era of regeneration, it would merely tend to dull the sense of anger and outrage which is the natural human reaction to a society as corrupt and shameful as ours.” Instead, they called for socializing the economy so that the surplus generated by automation was controlled by society as a whole, not by the owners of a handful of large corporations.100
Organized labor, having suffered through relatively high levels of unemployment in the late 1950s and early 1960s, no longer saw, nor welcomed, the promise of automation. The Department of Labor estimated that two hundred thousand jobs were being lost to automation each year in the early 1960s; and in industry after industry output was up while employment levels were down.101 AFL-CIO president George Meany said that automation was “rapidly becoming a curse to this society . . . in a mad race to produce more and more with less and less labor and without any feeling [as to] what it may mean to the whole economy.” The business trade publication Printers’ Ink concluded in 1964 that the American workforce was “frightened and uncertain of its future.”102
It was not only student activists, labor, and people on the left who were concerned about automation, though they were at the forefront. The eventual 1978 Nobel Prize winner in economics, Herbert Simon, published his book The Shape of Automation for Men and Management in 1965. A computer scientist and a pioneer in artificial intelligence as well as an economist, Simon wrote that “in our time, computers will be able to do anything a man can do.”103 President John F. Kennedy referred to unemployment caused by automation as “the major domestic challenge of the 1960s.”104 “If men have the talent to invent new machines that put men out of work,” he stated in 1962, “they have the talent to put those men back to work.”105
Yet there was considerable anxiety over the prospect that new jobs might not appear to replace those being lost. In August 1964 President Johnson formally created the National Commission on Technology, Automation, and Economic Progress to examine the issues and file a report, first and foremost “on whether technological change is a major source of unemployment.” The ultimate report, published in 1966, extended its mandate to consider “the fear” that eventually technology “would eliminate all but a few jobs, with the major portion of what we now call work being performed automatically by machine.” It was a prestigious fifteen-member commission, including UAW head Reuther, IBM chair Thomas Watson, five other corporate leaders, and the intellectuals Daniel Bell and Robert Solow. The 1966 report concluded optimistically that government policies could successfully address unemployment arising from automation. It asserted that automation was a progressive development, and that “the vast majority of people quite rightly have accepted technological change as beneficial.”106
What is perhaps most striking for our purposes is what the commission did end up recommending in its report. It said that the technological threat to employment only underscored the crucial need for the government to “fulfill the promise of the Employment Act of 1946: ‘a job for all those able, willing, and seeking to work.’” The report called for the federal government to “be an employer of last resort, providing work for the ‘hard-core unemployed’ in useful community enterprises.” It specifically mentioned the sort of “unmet human and community needs” where this labor, and the new technologies, could be deployed: improving healthcare, transportation, and housing, and battling air pollution and water pollution—in short, a massive expansion of spending on vital infrastructure and cleaning up the environment. Moreover, to ensure that everyone benefited from “the abundance” generated by technological advances, the report called for a guaranteed annual income for all Americans, which would effectively end poverty. The report also specified that it was imperative that traditionally disadvantaged communities receive “compensatory” resources such that their public education gave them the capacity to participate alongside those from more privileged sectors. And it called for a commitment to “improvements in public education” overall, with free schooling for all Americans through grade fourteen.107
These recommendations are breathtaking from the present vantage point because they are so radical, and they were agreed to by some of the leading CEOs in the nation. Indeed, the report even went so far as to urge firms to use automation to “humanize” the workplace and develop the new technologies in such a manner as to make the work experience more rewarding for the worker.108 By the late 1970s or 1980s, with the changing political currents, one can only imagine how a subsequent commission on automation might have considered these issues. Indeed, one can “only imagine,” because no such independent body ever came into existence. This was the one and only time in American history that automation and employment were formally studied and considered by an official government commission.
The early to middle 1960s proved to be the high-water mark for popular recognition of automation as an important social and economic issue, and a problem demanding political attention. What is striking is that these writers posed almost the exact same concerns, questions, framing, and even solutions that are being raised today; they were simply fifty years ahead of their time. Why did the issue of automation drift into the background? The easy answer is that the alarmist concerns of the early 1960s did not materialize, as the capacity for firms to deploy automation to replace most human jobs was greatly exaggerated. When we now see how laughably primitive computers and digital technology were fifty years ago, the predictions made at the time seem preposterous. This experience has probably been a factor in making economists and observers of all stripes gun-shy about predicting automation’s elimination of most jobs, lest they be confused with the tinfoil-hat UFO crowd from the same time period. Whatever the precise reason for the shift, automation and displacement were no longer “news” after the mid-1960s. Chart 18 documents the decline in stories mentioning automation in the New York Times from 1955 through February 2015.
But the disappearance of automation as a political issue owes to more than the exaggerated claims of the early 1960s. To a large extent it reflected the fact that organized labor, aside from a handful of progressive unions like the United Electrical Workers (UE), the International Association of Machinists, and more recently National Nurses United, threw in the towel. This shift in focus was encouraged in the late 1960s by the virtual disappearance of unemployment with the booming economy that accompanied the Vietnam War. It was also encouraged by the persistent management stratagem to label any critic of automation a “Luddite,” as if asking questions about whether all automation was always good was tantamount to saying that society should abandon cooked food, electricity, and indoor plumbing.109
In the decades that followed, when unemployment became a political issue, the major public concern regarding job losses was with the many millions shifted to overseas low-wage locales. This aspect of globalization was not unrelated to computerization. Martin Wolf of the Financial Times writes that “information technology has turbo-charged globalization by making it vastly easier to organize global supply chains, run 24-hour global financial markets, and spread technological know-how.”110 As leading economists have noted, computer technology and the Internet have made it “easier for businesses to outsource or relocate all or part of their operations to countries where wages, labor, and environmental standards are low.”111 Moreover, globalization made automation appear necessary. “If America hopes to match foreign competition,” Time magazine wrote in 1983, “it may have to rely more heavily on automation.”112 That same year, Fortune wrote of “the race to the automatic factory,” with none of the alarmism of a generation earlier. It was now a very good thing, a competitive requirement to keep up with the Japanese.113 This argument won the day, at least to some extent because, until the latest recession, it seemed that the system could generate enough new jobs—albeit not especially good jobs in many cases—to prevent a full-throttle jobs crisis.114
CHART 18 Number of New York Times Stories Mentioning Automation (through February 2015)
By the 1990s only a few iconoclastic and heterodox economists and writers continued to study the issue and sound alarm bells about radical changes in the labor market. In 1995 economist Jeremy Rifkin argued that impending improvements in “software technologies are going to bring civilization ever closer to a near-workerless world.” The great historian of economic thought Robert Heilbroner said the effect of automation on the economy was “a problem that we will be living with for the rest of our own and our children’s lives,” and that it “ought to become the center of a long-lasting and deep-probing conversation for the nation.”115
It didn’t. Instead, beginning in the 1990s, the Internet moved from the margins of our social and economic lives to become a nearly ubiquitous medium in the space of a decade. It quickly came to be regarded as the great new communication medium. When business pages and cable news shows considered computers and digital communication, they focused on the bounty they provided for web surfers and consumers, the challenge for existing businesses, and the magnificent new opportunity for investors and entrepreneurs. By the second decade of this century, the digital revolution had redefined the era in its image, and had entered the bone marrow not only of media and communication but of every aspect of the economy. Indeed, the three most valuable corporations in the US economy in the autumn of 2015 were Internet/computer firms—as well as five of the top ten and twelve of the top thirty-one, and that does not include related firms like General Electric and Disney—and only a few of them even existed in the 1960s.116 The critics of the 1960s were not wrong that an explosion was coming. They were just a little off on the sequence, scope, and timing.
Nor does this mean that automation, or the powerful application of computers and networks to the economy, slowed down or became passé. To the contrary, it continued its persistent increase, but it simply became part of the woodwork, a necessary option for businesses to be competitive.117 In this context, Nobel Prize–winning economist Wassily Leontief provided a more optimistic vision of automation’s effects in 1983 at a Paris conference. “How will working people adjust themselves to being on the job only a few hours a day?” he asked.118 Three years later, Leontief and economist Faye Duchin released their detailed examination of automation, which projected that it was likely to replace many millions more jobs by 2000. At the same time, they argued that “the computer revolution” by the year 2000 “will be no more advanced than the mechanization of European economies had advanced by, let us say, the year 1820.”119 In recent times, near-workerless factories run by computer programs have become common. Thanks to computerized programs and robotics, for example, US steel industry production rose from 75 million tons to 120 million tons between 1982 and 2002, while the number of steelworkers fell from 289,000 to 74,000.120 In the 1960s, for another example, a single textile worker operated five machines, each able to run a thread through the loom at one hundred times per minute. By 2014 machines ran at six times that speed and a single operator supervised one hundred looms.121 Office work increasingly became the target of automation and computerization.122
To some extent, this process was so comprehensive and overwhelming—and part of a broader digitalization of all aspects of social life—that it eluded sustained analysis, as water escapes the comprehension of the proverbial fish. It certainly paved the way for what was and is about to come. American jobs were being radically changed by technology, and more than a few were being lost to technology, but until the Great Recession it did not seem to be much of a loss. And even then, as Galbraith put it, “you can’t distinguish a job lost to technology from a job lost to a business slump. The two are, actually, the same thing.”123
To some economists, who believe they can see what is in front of them as long as they keep their eyes glued to the rear-view mirror, this settles the matter. The future will look like the past. Technology will have no fundamental effect upon employment levels and need not concern policymakers when they address unemployment. But to an increasing number of engineers, computer scientists, investors, business leaders, business and economic reporters, and scholars—and more than a few eminent economists—the economy stands at a precipice, and society is facing the type of revolutionary GPT that occurs maybe once a century, if that. “There is a wave of what certainly appears to be labor substitutive innovation. . . . It appears technology is permitting very large-scale substitutions,” Summers observed in 2015. “Probably, we are only in the early innings of such a wave.”124
THE SECOND HALF OF THE CHESSBOARD
The question that jumps out is, why now? Why isn’t this a warmed-over version of the early 1960s automation hysteria that proved to be bogus? The answer begins, appropriately enough, in the 1960s with Gordon Moore, a computer engineer and a founder of Intel. Moore wrote an article in 1965 in which he projected, in effect, that due to continuous technological improvements, the computing power one could buy for a dollar would double every year for a good ten years. This became Moore’s Law. He later suggested that it would double every two years, and most observers have come to use the notion that it would double every eighteen months. People once anticipated that Moore’s Law would peter out or at least slow down over time, but it has proven resilient and astonishingly accurate. “Over and over again,” Brynjolfsson and McAfee write, “brilliant tinkering has found ways to skirt the limitations imposed by physics.”125
What does this mean? By the early 1980s, for example, the cost of computing relative to manual calculation had fallen to one eight-thousandth of what it had been thirty years earlier.126 It took scientists a decade of intensive work to sequence the three billion base pairs in the human genome by 2003. By 2013, a single computer facility could sequence that much DNA in a day.127 More recently, the Economist reports, “the new iPhones sold over the weekend of their release in September 2014 contained 25 times more computing power than the whole world had at its disposal in 1995.”128
What becomes clear is that if Moore’s Law is extended for an appreciable period of time, the growth becomes mind-boggling. The futurist Ray Kurzweil is famous for explaining this with the parable of the person who invented the game of chess in sixth-century CE India. The inventor presented the new game to the local emperor, who was so impressed that he invited the inventor to name his reward. The inventor asked for a single grain of rice on the first day—then two grains on the second day, four grains on the third day, eight grains on the fourth day, and so on, with the number of grains to continue doubling every day for sixty-four days, to account for every square on the chessboard. The emperor instantly agreed, surprised by such a modest request. After ten days, the gift was around a thousand grains. After twenty days it was a million grains. After thirty days it was a billion grains, and then after thirty-two days, or halfway through the chessboard, the total was four billion grains, about one large field’s worth. But thereafter the doubling became astronomical. By the time one got to the sixty-fourth square, the number of grains would be eighteen quintillion, vastly more rice than has ever been produced in history. The emperor obviously could not comply. (In some versions of the story, the inventor is beheaded by the angry emperor.) “Kurzweil’s point,” Brynjolfsson and McAfee write, “is that constant doubling, reflecting exponential growth, is deceptive because it is initially unremarkable.”129
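The parable’s arithmetic is easy to verify. A minimal Python sketch (the function name is ours) reproduces the totals in the story:

```python
# Cumulative grains of rice after n squares of doubling:
# 1 + 2 + 4 + ... + 2^(n-1) = 2^n - 1
def grains_after(squares: int) -> int:
    """Total grains accumulated after `squares` doublings."""
    return 2**squares - 1

# Square 10 yields about a thousand grains, square 20 about a million,
# square 30 about a billion, square 32 about four billion -- and square 64
# over eighteen quintillion, just as the story has it.
for n in (10, 20, 30, 32, 64):
    print(f"after square {n:2d}: {grains_after(n):,} grains")
```

The first half of the board looks modest because each doubling of a small number is still a small number; the explosion comes only when the base has already grown large.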
If we return to Moore’s Law, and begin in the 1960s, we are now entering the second half of the chessboard, where “exponential growth eventually leads to staggeringly big numbers, ones that leave our intuition and experience behind.” Chart 19 provides a graph of what doubling looks like, beginning in 1960 at a rate of fifty computer “instructions per second” and doubling every eighteen months. To fit this chart on the page, all the spectacular growth in the “first half” of the chessboard barely registers before 2008. We are now at the part of the curve that is shooting straight up like an oil-well gusher. Even if the rate of growth eventually does slow down, we are deep into uncharted terrain, as though we have traveled through a wormhole to some distant galaxy.130 As Brynjolfsson and McAfee note, “Things get weird in the second half of the chessboard.”131 In their view, the world is at an inflection point, where all sorts of operations that only recently were thought impossible for computers and uniquely the province of humans—driverless cars, anyone? robot “nurses”?—will be easily done by computers, and soon other tasks that presently are considered unthinkable for computers will become standard fare. The most striking feature may well be how very quickly this will take place in historical terms.
To put the moment we are entering in perspective, consider the analysis of Gill A. Pratt. Until 2015 Pratt served as Program Director at the Pentagon’s Defense Advanced Research Projects Agency (DARPA), where he oversaw work on robotics. This is important because DARPA has been at the center of technological innovation throughout the digital era. Pratt argues that humanity may be on the verge of experiencing something comparable in impact to the “Cambrian Explosion,” referring to the relatively brief period 540 million years ago when life underwent an astonishingly rapid diversification, including arguably the evolution of vision. It was crucial for the subsequent development of complex and intelligent life. Pratt outlined a series of related and complementary breakthroughs in robotics and computing that will make it possible for machines “to replicate the performance of many of the perceptual parts of the brain,” including, ironically enough, vision itself. At the very least, Pratt observes, “the effects on economic output and human workers are certain to be profound.” He refuses to predict when exactly this will occur, “as the timing of tipping points is hard to predict,” but it is on its way.132
CHART 19. Theoretical Growth in Computing Power
In this context it is almost banal to discuss Deep Blue, the IBM computer that defeated world champion Garry Kasparov in a chess match in what seems like the computer dark ages of 1997. Or even IBM’s Watson computer, which in 2011 defeated all human competition on the television quiz show Jeopardy! This was done in real time and required the computer to interpret spoken questions filled with wordplay and irony.133 It was not just a gimmick. As Martin Ford notes, “IBM is already positioning Watson to play a significant role in fields like medicine and customer service.”134 “Artificial intelligence has become vastly more sophisticated in a short time,” the New York Times reports, “with machines now able to learn, not just follow programmed instructions, and to respond to human language and movement.”135
Computers can now access an unimaginably large body of stored information that is growing by leaps and bounds and process that information almost instantaneously with ever more sophisticated algorithms. This is what is referred to as “big data.”136 Computers, as Nicholas Carr explains, may never be able to replicate “tacit” or “procedural” knowledge, which refers to the stuff we do without thinking about it, like riding a bike or driving a car. Instead, computers are very good at “explicit” or “declarative” knowledge, which is the stuff we do that we can write down instructions for, like how to change a flat tire or solve a quadratic equation. “The superhuman speed with which computers can follow instructions, calculate probabilities, and receive and send data,” Carr notes, “means that they can use explicit knowledge to perform many of the complicated tasks we do with tacit knowledge.” Driverless cars are just the tip of that iceberg. The implications for automation are striking, if not revolutionary. “Even highly trained analysts and other so-called knowledge workers are seeing their work circumscribed by decision-support systems that turn the making of judgments into a data-processing routine.”137
Much of this “big data” is accumulated in the “cloud,” a group of enormous “server farms” controlled by a handful of massive corporations like Google, Apple, Amazon, and Microsoft. The cloud becomes the rational and most cost-effective way for businesses to store and analyze their data. One of the great benefits and therefore consequences of cloud computing, according to Vincent Mosco, the leading scholar on the subject, is that it “essentially deepens and extends opportunities to eliminate jobs and restructure the workforce.” This is, in fact, a primary selling point that cloud computing firms use to drum up business.138 (That seems fitting, as these vast corporate server farms “virtually run themselves,” Carr writes.139) Ford observes that “the migration of much of the intelligence that animates mobile robots” into the cloud makes it possible “to build less expensive robots, since less onboard computational power and memory are required, and allows for instant software upgrade across multiple machines.”140 In the meantime, for the same reason, as the Economist notes, cloud computing is also ideal for harnessing freelance workers to replace higher-paid labor.141
Another possibility opened up by being in the second half of the chessboard is the “Internet of Things,” a term for the billions of human-made devices that are connected to each other on a universal computing infrastructure. Each of these devices has its own Internet address, and will communicate with other devices more than with people. “That’s the whole point of the thing,” technology writer Michael Miller enthuses, “to connect just about everything in the aptly named Internet of Things.” It promises “more automatic, and more intelligent services provided by interconnected smart devices—with a minimal amount of human interaction.”142 “Make no mistake,” author Samuel Greengard writes, “we are entering a brave new world of immersive and embedded technology. . . . It’s entirely clear that a more technology-centric world is in the cards.”143
Depending upon the source, by 2020 or very soon thereafter, it is expected that there will be as many as fifty billion such devices, and only a small fraction of them will be personal computers, tablets, or smartphones controlled by individual humans. “Engineers expect so many of these connected devices,” Philip Howard writes in his book Pax Technica, “that they have reconfigured the addressing system to allow for 2 to the 128th power addresses—enough for each atom on the face of the earth to have 100 addresses.”144 Much of the economy will run through the Internet of Things. As Carr notes, “Manufacturers are spending billions of dollars to outfit factories with network-connected sensors, and technology giants like GE, IBM, and Cisco, hoping to spearhead the creation of an ‘Internet of Things,’ are rushing to develop standards for sharing the resulting data.”145 Soon, one-half of German manufacturing investment will go to building out the Internet of Things, and PricewaterhouseCoopers expects global investment in the industrial Internet to top $500 billion by 2020.146 That does not seem like so much when one considers that Cisco Systems forecasts that by 2022 the Internet of Things will generate $14.4 trillion in cost savings and revenue.147 A large share of these savings will come by eliminating jobs. “Business processes that once took place among human beings are now being executed electronically,” as economist and technology theorist W. Brian Arthur puts it. Commerce is increasingly managed through “a huge conversation conducted entirely among machines.”148
In conjunction with all this, an open source Robot Operating System (ROS) “is rapidly becoming the standard software platform for robotics development,” according to Ford. “The history of computing shows pretty clearly that once a standard operating system, together with inexpensive and easy-to-use programming tools, becomes available, an explosion of application software is likely to follow.” What does this mean? “It’s a good bet,” Ford says, that “we are, in all likelihood at the leading edge of an explosive wave of innovation that will ultimately produce robots geared to nearly every conceivable commercial, industrial, and consumer task.”149
Robotics expert Ben Way anticipates that the next decade will see a qualitative leap in what robots can do, radically increasing their range and efficiency.150 These “robots” are far more sophisticated in their abilities and applications than the clunky machines of old movies. They will be not only in factories but everywhere.
Then there is 3D printing, which Jeremy Rifkin describes as the “manufacturing” model that accompanies an Internet of Things economy.151 This is the until-very-recently-unimaginable process of having a “printer” stamp out a three-dimensional product that one designs on one’s computer. The possibilities are endless—printers already work in both plastics and metals—and they look to revolutionize manufacturing, such that whole categories of workers can cease to be a factor. Forbes magazine compares it to the emergence of the industrial revolution and the assembly line: “3D Printing will be a game changer.”152 And this is not on some dude’s drawing board in Palo Alto. It is already in existence and widely used, as Brynjolfsson and McAfee note: “It’s used by countless companies every day to make prototypes and model parts.” Then, to put an exclamation point on their analysis, they say that 3D printing, robotics, driverless cars, and computers like Watson “are not the crowning achievements of the computer era. They’re the warm-up act.”153
WHAT DOES THIS MEAN FOR JOBS?
In short, as we get to the second half of the chessboard, the ability of employers to cost-effectively deploy their new tools and tactics of organization increases, dare we say it, exponentially. Any way you slice it, the outlook is bad, very bad, for workers. The Economist has been at the forefront of studying and writing about the issue.154 “Until now,” it wrote in 2014, “the jobs most vulnerable to machines were those that involved routine, repetitive tasks. But thanks to the exponential rise in processing power and the ubiquity of digitized information (‘big data’), computers are increasingly able to perform complicated tasks more cheaply and effectively than people.”155 As computer science reporter Federico Pistono puts it, “Millions of algorithms created by computer scientists are frantically running on servers all over the world, with one sole purpose: do whatever humans can do, but better.”156 What does this mean? “The combination of big data and smart machines will take over some occupations wholesale; in others it will allow firms to do more with fewer workers.”157
In earlier stages of automation, Brynjolfsson explains, firms automated the physical work but required humans to be the control system. Now the control system can be automated, and when it is, “then it is less clear what the role for humans is.”158
The Economist notes that new technologies also make it possible for firms to “reshape” those jobs that remain, so that they can “be done by less skilled contract workers.”159 “In case after case,” Carr writes, “we’ve seen that as machines become more sophisticated, the work left to people becomes less so.”160 This was anticipated first by Harvard Business School professor James R. Bright in his 1958 book Automation and Management. “It seems that the more automatic the machine, the less the operator has to do,” Bright wrote. “The progressive effect of automation is first to relieve the operator of manual effort and then to relieve him of the need to apply continuous mental effort.”161
In 1966 Bright filed a report for President Johnson’s National Commission on Technology, Automation, and Economic Progress: “The lesson should be increasingly clear; it is not necessarily true that highly complex equipment requires skilled operators. The ‘skill’ can be built into the machine.” With his orientation toward management, Bright was the first dissenting voice regarding the notion that automation required workers to have better education and training: “I suggest that excessive educational and skill specification is a serious mistake and potential hazard to our educational and social system. We will hurt individuals, raise labor costs improperly, create disillusion and resentment, and destroy valid job standards by setting standards that are not truly needed for a given task.”162 He was decades ahead of his time.163 As Tyler Cowen puts it, most of these new jobs that interact with sophisticated machines “won’t be much harder than, in today’s world, operating a tollbooth on the New Jersey Garden State Parkway, a job performed by both man and machines.”164 Thompson’s examination of labor in the Atlantic concludes that “most jobs are still boring, repetitive, and easily learned.”165
Indeed, as the Economist notes, the de-skilling of the remaining jobs can be seen as providing a way station to their eventual elimination. “Gobbetising jobs with the aim of parceling them out to people who don’t see or need to see the big picture is not that different from gobbetising them in a way that allows automation. Often the first activity may prove a prelude to the second.”166 The magazine offers up Uber as an example of a business that may well be “a forerunner to an eventual system that has no drivers at all.”167
Martin Ford points to a New York–based start-up, WorkFusion, which sells software to firms to automate big projects formerly done by office workers. Where people are still needed, the software recruits freelance workers online to do the temp work, and then the software monitors what the workers do to learn from them so that their jobs, too, can be automated. “As the freelance workers do their jobs they are, in effect, training the system to replace them. That’s a pretty good preview of what the future looks like.”168 “The combination of advanced sensors, voice recognition, artificial intelligence, big data, text-mining, and pattern-recognition algorithms, is generating smart robots capable of quickly learning human actions, and even learning from one another,” writes former US Labor Secretary Robert Reich. “If you think being a ‘professional’ makes your job safe, think again.”169
So exactly which jobs are on the chopping block? “Accountants may follow travel agents and bank tellers into the unemployment line as tax software improves. Machines are already turning basic sports results and financial data into ‘good-enough’ news stories,” the Economist writes. “A taxi driver will be a rarity in many places by the 2030s or 2040s. That sounds like bad news for journalists who rely on that most reliable source of local knowledge and prejudice—but will there be any journalists left to care? Will there be airline pilots? Or traffic cops? Or soldiers?”170 A report in the New York Times adds “counselors, salespeople, chefs, paralegals and researchers” to the list.171 Or consider utility meter readers. In 2001, 56,000 American workers held that job. By 2010 the number was down to 36,000. By 2023 the number is expected to be zero.172
Consider that the four most common occupations in the United States are retail salesperson, cashier, food and beverage server, and office clerk. Nearly 10 percent of the labor force, over fifteen million workers—more workers than there are in Texas and Massachusetts combined—are so employed. Thompson notes that these jobs are highly susceptible to automation.173 Ford sees 50 percent of fast-food jobs disappearing, and argues it is likely there will be “explosive growth of the fully automated self-service retail sector—or, in other words, intelligent vending machines and kiosks.”174 Or consider driverless cars. Robotics scientists like MIT’s Daniela Rus make a powerful and convincing case that the impending shift to a driverless world—the technology is in its final stages—will be much more efficient, vastly improve the transportation system, and do wonders for the environment and the quality of life.175 One problem: the most common occupation for American men is driving some sort of vehicle, be it automobile, bus, or truck. What happens to them?176
Then there are the two sectors of the economy harboring the most professionals—health care and education. They “are under increasing pressure to cut costs,” Reich notes. “And expert machines are poised to take over.”177 A 2014 article asked: “Robot Replacing Nurses: Is It Really That Far-Fetched?” The answer:
Dr. Rosalind Picard, professor at the Massachusetts Institute of Technology (MIT), recently told the British Broadcasting Corporation (BBC) that robots should be made available to healthcare providers (nurses and physicians) in order to enhance healthcare delivery. However, when pressed by the interviewer to guarantee that robots will not fully replace nurses as a way for hospitals to save money, she answered: “You know, when people are in charge all kinds of things can happen . . . right?”178
For education “entrepreneur” John Katzman, the great question is, “How do we use technology so that we require fewer highly qualified teachers?”179
The better question may be: What jobs aren’t susceptible to elimination or radical de-skilling and downsizing by automation? Computer entrepreneur Peter H. Diamandis and technology reporter Steven Kotler concur. Within a decade, they write, robots “will make up the majority of the blue-collar workforce.” They will be doing everything from “shelf-stocking” inventory at Costco to “burger-slinging” at McDonald’s.180 That’s not all. Ford argues that the last remaining labor-intensive areas in agriculture—primarily picking—are soon to be susceptible to automation.181
In manufacturing, the original target of automation, the list is endless. “Being newly able to do brain work will not stop computers from doing ever more formerly manual labour,” the Economist reports. “It will make them better at it.”182 This is probably where the invasion of the revolutionary new robots will be noticed first. “Robots deployed in manufacturing today,” the Wall Street Journal reported in 2015,
tend to be large, dangerous to anyone who strays too close to their whirling arms, and limited to one task, like welding, painting or hoisting heavy parts. The latest models entering factories and being developed in labs are a different breed. They can work alongside humans without endangering them and help assemble all sorts of objects, as large as aircraft engines and as small and delicate as smartphones. Soon, some should be easy enough to program and deploy that they no longer will need expert overseers.
Robots are getting much lighter; they can be repurposed easily and can do delicate work that humans find very difficult and that was once regarded as impossible for machines. “One company promises its robots eventually will be sewing garments in the U.S., taking over one of the ultimate sweatshop tasks.”183
The one proviso historically offered by economists was that low wages slow down the incentive for businesses to turn to automation. The “good news” for manufacturing workers was that, as long as they did not press management for higher wages or better working conditions, they might keep some of their jobs. How has this worked in practice? A firm like Nissan relies upon robots for its factories in Japan, but its factories in India rely on cheap local labor.184 Indeed, a good deal of economic analysis of the speed and intensity of technological innovation and diffusion is based upon the cost of labor. When wages are relatively low, innovation slows down, and when wages are high, firms have a greater incentive to turn to automation. The logic is that as American labor costs continue to decline, firms will be more likely to hire real workers and less inclined to turn to automation, or to move manufacturing jobs abroad to low-wage locales.
As we enter the second half of the chessboard, that economic thinking can be filed next to the discredited notion that market economies always gravitate toward full employment, or that market economies tend to reduce inequality.185 “China, India, Mexico, and other emerging nations are learning quickly,” Rifkin writes, “that the cheapest workers in the world are not as cheap, efficient, and productive as the information technology, robotics, and artificial intelligence that replaces them.”186 A recent study by University of Chicago economists Loukas Karabarbounis and Brent Neiman found that labor’s share of GDP has been declining in those three nations as well as in most of the other nations they examined. Their explanation? Advances in information technology caused the price of plant, machinery, and equipment to drop, so companies have shifted investment away from labor and toward capital. They determine that in the United States almost one-half of the decline in labor’s share of national income can be attributed to businesses’ replacing workers with computers and software.187
By 2012 global sales of industrial robots had reached $28 billion annually, and the fastest-growing market is China, where robot installations have been increasing at a 25 percent annual rate since 2005.188 China still has a long way to go: it has just thirty robots per 10,000 manufacturing employees, compared to South Korea (437), Japan (323), Germany (282), and the United States (152), according to the International Federation of Robotics, a trade group. The research firm IHS Technology projects that robot sales in China will increase from 55,000 units in 2014 to 211,000 units in 2019.189
Consider Foxconn, the largest maker of electronic components in the world and the largest exporter in Greater China. Foxconn is single-handedly responsible for manufacturing nearly half of the consumer technology in the world, and much, if not most, of what Americans own in terms of smartphones and tablet computers. It has annual revenues of $135 billion and is the third-largest employer in the world, with 1.2 million workers. Foxconn grabbed its market share by providing a low-paid and heavily exploited workforce for Western firms, working in conditions right out of a Charles Dickens novel. In 2010, world attention shifted to Foxconn’s factories that produced Apple products following a string of suicides by its workers. Soon thereafter, Foxconn began an aggressive program to eventually replace many, or most, of its workers with one million robots.190 Foxconn CEO Terry Gou said in 2015 that the firm has been adding thirty thousand industrial robots annually since then, and the process is being accelerated to the point where he expects robots and automation to complete 70 percent of its assembly-line work by 2018. Gou eventually foresees a “robot army”—Foxconn has invested heavily in robotics research—as a way to offset labor costs. “I think in the future young people won’t do this kind of work, and won’t enter the factories,” Gou says.191 Foxconn is not an outlier or some kind of “futurist” firm.192 It is part of a trend. The headline of a 2015 New York Times report from China said it all: “Cheaper Robots, Fewer Workers.” It explained that
a few low-tech industries, like garment manufacturing, are moving from China to places that still have very low wages, like Bangladesh. But many industries, particularly electronics, are still moving factories to China. That is because so many of the parts suppliers are now in China that it is often more costly to do assembly elsewhere. So although building robots to replace workers is seldom cheap, a growing number of companies are finding it less costly than either paying ever-higher wages in China or moving to another country.193
We doubt that automation will replace most labor in China, India, or the global South in the near term; there is still more than enough cheap labor.194 We also doubt that firms such as Foxconn anticipate a workerless world in the visible future—although Gou says the firm already has a fully automated plant in Chengdu that works 24/7 with the lights off. But there is not much doubt that Foxconn managers intend to use the company’s vast resources and immense market power to create a permanent low-cost position worldwide, combining low wages with the most advanced technology.195 For the Foxconn business model to survive, it must ensure that the dominant and unrivaled manufacturing infrastructure and networks built in China remain attractive as automation increases in prominence and lessens the importance of a cheap, non-unionized labor force. But the historical clock is ticking. One industry analyst says that only a few million manufacturing jobs will be left in 2040. In 2003 there were 163 million manufacturing jobs worldwide.196
All of this is bad news for a capitalist economy, which needs workers with decent incomes so they can become consumers who purchase products. That is when capitalists make profits and have incentives to invest and expand the economy. The loss of jobs and therefore the shrinking of a consumer base with disposable income is a recurring problem in capitalist economies, and it provides a good shorthand description of the Great Depression. It looks to be our fate again in one form or another. As Galbraith puts it, the United States is experiencing “a permanent move toward lower rates of employment in the private, for-profit sector.”197
But can the same computer technology that seems to be aggravating the problems of capitalism also provide a solution? One of the attributes of earlier general-purpose technologies (GPTs), like railroads and automobiles, is that they stimulated a massive body of investment (and employment) in related industries that became powerhouses in their own right. Consider automobiles. We both grew up in the industrial Midwest—in Cleveland and southeastern Wisconsin, respectively—during the heyday of the Rust Belt, back before there was much rust. From Buffalo and Pittsburgh in the east to Cleveland, Akron, Toledo, and Detroit in the middle, and on to Gary, Chicago, and Milwaukee in the west, gigantic factories producing steel, glass, rubber, machine tools, and the like were ever-present, in addition to the iconic auto plants. Millions of people earned good wages, the economies were strong, and at the center of it all was the automobile. That doesn’t even begin to factor in all the construction and real estate development—that is, suburbanization—and other ancillary industries that resulted as well. One can make the case that automobilization was a central factor in the health of US capitalism for much of the twentieth century. It more than offset the losses to employment that the automobile had created by ending the “ecology of horse and plow and the semimodern technology of the railroad and the streetcar.”198
Is anything like this occurring or on the horizon due to computerization and the Internet? To our knowledge, Galbraith has studied that question as much as anyone, and his answer is an emphatic “no.” “With computers and the Internet, this scope for secondary employment is far less.” Indeed, the evidence is that the opposite is the case. “The ratio of jobs killed to jobs created in this process is high,” Galbraith writes. “Moreover, many of those displaced are not only unemployed but also obsolete.” One of the virtues of computerization and the Internet proves to be a great problem for a capitalist economy: it not only saves on labor, but it also saves on capital, as it becomes so much more efficient.199 Ironically, perhaps the only tangible new sector of jobs for humans is one identified by Rifkin, who states that “there is one last surge of work: in the next 35 years we will have to put the infrastructure of the automated economy in place—robots are not going to do that.” Exactly how many jobs that will require is unclear, but Rifkin notes that “this transformation will keep two more generations busy but the downside is of course that the smarter technology gets, the less workers it needs to run it properly.” And this “surge of work” is paving the way for the end of work by mid-century.200
Those who study the list of the most common occupations find it dominated by the types of jobs that predated the computer and are now in its crosshairs. “Nine out of 10 workers today are in occupations that existed 100 years ago,” Thompson writes, “and just 5 percent of the jobs generated between 1993 and 2013 came from ‘high tech’ sectors like computing, software, and telecommunications. Our newest industries tend to be the most labor-efficient: they just don’t require many people.”201 Apple directly employs fewer than 10 percent of the one million workers around the globe involved in producing and selling its products; many of those nine hundred thousand gig and contracted jobs seem like good candidates for the digital chopping block in the future.202 Amazon, for another example, already uses some fifteen thousand robots in its warehouses.203 Moreover, those humans who remain on the payroll tend to experience exactly what one would expect in any other sector of the economy: in a labor market marked by a massive surplus of desperate workers, working conditions can revert to levels once regarded as barbaric.204
It is difficult to see where new jobs—certainly the tens of millions of new jobs that will be needed—are supposed to come from. As economic historian Robert Skidelsky puts it, the stock response from optimists is that society needs to “simply train people for better jobs.” The problem here is that “technological progress is now eating up the better jobs, too.”205 There are not enough “specialized new digital jobs, like people who create apps,” a business reporter writes, “no matter how we’re educating people. Our new industries simply aren’t labor-intensive.”206 When Skidelsky pressed the optimists to describe some of the “many new types of jobs” that will be created, they came up with “lead drivers of multi-car road trains” in the coming era of driverless cars, “big data analysts, or robot mechanics. That does not sound like too many new jobs to me.”207
In 2014 Pew conducted a “Future of the Internet” survey of nearly two thousand technology experts on how advances in robotics and artificial intelligence might affect the economy and employment in the coming decade. Half of these experts expected no loss in employment, even though “this group anticipates that many jobs currently performed by humans will be substantially taken over by robots or digital agents by 2025.” What evidence provided the basis for this optimism about job creation? They offered “faith that human ingenuity will create new jobs, industries, and ways to make a living, just as it has been doing since the dawn of the Industrial Revolution.”208 So that’s it? No need for evidence, just have faith? From scientists?
Pistono had an experience like ours when he attempted to locate tangible examples or a credible explanation of what the new industries might be and where new jobs might come from to replace the ones being lost. “I have read several books, watched hundreds of debates and interviews on the subject, and I have not so far heard a single argument to support the idea that we can make this work, or how.”209 By the end of 2014, former Treasury Secretary Lawrence Summers stated that he no longer believed that the automation process would create new jobs to replace the ones it was eliminating. “This isn’t some hypothetical future possibility,” he said. “This is something that’s emerging before us right now.”210 Due to automation, “there is no reason to believe there will be jobs for all people at socially acceptable wages,” the commission on the state of the US economy headed by Summers and Balls concluded in 2015. “The rapid pace in computer innovation of routine tasks has rightfully worried policymakers, as this scale of automation has little precedent in industrialized economies.”211
In 2013 two Oxford University scholars published a detailed research paper that concluded that 47 percent of existing US jobs—including many “middle-class” service jobs—were at “high risk” of being eliminated due to automation.212 This has led some respected observers to predict unemployment rates in the coming decades in the 50 percent range. We are dubious about the value of making predictions of that nature, because there are so many factors no one can anticipate. It is possible that unemployment will be buffered by scads of presently unforeseeable new jobs, though even then it is hard to see how they will be particularly good jobs. We need to be mindful of Amara’s Law, named after systems engineer Roy Amara: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”213
What we are comfortable saying—and what we believe must be said loudly and emphatically—is that the present course is taking all the trends toward increased inequality and poverty already in existence and making them worse. Technological displacement of workers, Summers correctly concludes, “is likely to be a substantial factor pushing toward more inequality in the future.”214 No evidence provided by anyone suggests otherwise. And that alone, not a prospective frightening rate of unemployment decades down the road, should be more than enough to get everyone’s attention.
This conclusion comes as no surprise to labor unions and progressive economists. Paul Krugman writes that “we could be looking at a society that grows ever richer, but in which all the gains in wealth accrue to whoever owns the robots.”215 But the exact same conclusion is being reached by many of those who are enthralled with capitalism and enamored with the new technologies, and who benefit materially by what is taking place. Brynjolfsson and McAfee stand as arguably the world’s greatest cheerleaders for automation and what they refer to as “the second machine age.” But they acknowledge that “the gains, however large, have been concentrated among a relatively small group of winners, leaving the majority of people worse off than before.”216 The Economist writes that “the prosperity unleashed by the digital revolution has gone overwhelmingly to the owners of capital and the highest-skilled workers.”217 It will continue into the future and “will contribute to pressure to reduce labour rights in all sorts of situations.”218 The Economist also notes there is a “squeezing out” of the middle class, whose emergence in the twentieth century “was a hugely important political and social development across the world.”219
There are crucial existential questions that the new era of artificial intelligence, robotics, and computerization brings to the forefront. “It’s apparent,” Greengard notes, “that society is hitting a tipping point where humans are engineering our own obsolescence.”220 What is the relationship of humans to their machines?221 At what point are they no longer “our” machines? What does it mean to be human? What makes us happy? Or, the question that technology historian George Dyson posed: “What if the cost of machines that think is people who don’t?”222 Organizations like the Future of Life Institute (funded in part by Tesla founder Elon Musk), the Lifeboat Foundation, and the recently created Centre for the Study of Existential Risk at Cambridge University all address the “existential risks” that genetic engineering, nanotechnology, and artificial intelligence pose for humanity, particularly as we approach the so-called singularity, the hypothetical moment when artificial intelligence surpasses the human intellect. As renowned Cambridge astrophysicist Sir Martin Rees puts it, the risk is exponentially greater because of “the ease with which a single person or company can cause catastrophic harm.”223 In July 2015 the Future of Life Institute released a letter signed by some three thousand artificial intelligence researchers and sixteen thousand other noted scholars calling for a global ban on offensive autonomous military weapons. “Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is—practically if not legally—feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”224
We share these concerns. For our purposes, this technological revolution is of singular importance because it is contributing to an unsustainable future, and the crises of unemployment, inequality, and poverty are the direct and visible manifestations. Martin Ford is spot-on when he writes that “the problem is not with technology; it is with our economic system, and it lies specifically in that system’s inability to continue thriving in the new reality that is being created.”225 We would only add this: it is not even an economic problem as much as it is a political one, because the only plausible way to solve the great structural problems facing the economy will be through politics.
THE GREAT PARADOX(ES)
Whatever the virtues of capitalism, computerization and automation substantially elevate two related core paradoxes that are intrinsic to the profit system.
The first core paradox is that capitalism rewards businesses and investors for engaging in certain types of behavior and punishes them for not engaging in such behavior. When all businesses respond the same way, however, it creates major problems no one wanted, or it makes existing small problems much larger. The classic example is how businesses respond to an economic slowdown. The prudent course for an individual firm is to cut back on prospective investment and lay off workers and marshal resources—minimize losses—until a turnaround appears on the horizon. But when all firms follow the same course, and stop investing while laying off millions of workers, the recession grows much worse and proves far more intractable. All businesses suffer as a result, not to mention workers and everyone else. What is rational for the individual capitalist produces utterly irrational results when it is done by capitalists as a whole.
Likewise, for individuals it is regarded as commendable financial management to save money rather than spend their entire income. The individual who is willing to forgo immediate consumption will become wealthier in the long run and have far more money to use for consumption. It is also good for the economy, since increased savings means there is more money for investment, and the overall economy can grow at a faster rate, with the result being higher incomes for everyone. But if everyone follows that course and increases their savings, overall consumption plummets and the economy can enter a recession. If the economy is already in a recession, a high rate of savings can slow any recovery and contribute to a depression. Then incomes stagnate or fall, and the benefits of saving are lost. Economists call this phenomenon the “paradox of thrift”: what is rational behavior for the individual produces disastrous consequences when everyone does it.
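The paradox of thrift can be made concrete with a back-of-the-envelope calculation. The sketch below is our illustration, not drawn from any source cited here, and every figure in it is an assumption; it uses the textbook Keynesian multiplier, in which equilibrium income Y satisfies Y = C + I with consumption C = (1 − s)Y, so Y = I/s. When households try to save a larger share s of their income while investment I stays fixed, income falls, and total saving ends up no higher than before.

```python
# Toy Keynesian multiplier model illustrating the "paradox of thrift."
# All numbers are illustrative assumptions, not data from the text.

def equilibrium_income(investment, savings_rate):
    """Equilibrium income Y solves Y = C + I with C = (1 - s) * Y,
    which gives Y = I / s (the simple multiplier)."""
    return investment / savings_rate

investment = 100.0  # assumed fixed autonomous investment

# Households attempt to save a progressively larger share of income.
for s in (0.10, 0.20, 0.40):
    y = equilibrium_income(investment, s)
    total_saving = s * y  # saving out of the (now lower) income
    print(f"savings rate {s:.0%}: income = {y:6.1f}, total saving = {total_saving:.1f}")
```

In this toy economy, doubling the savings rate halves equilibrium income while total saving stays stuck at the level of investment: thriftier behavior by everyone makes everyone poorer without producing any additional saving.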
So it is with automation. The profit system pushes firms to automate as much as possible, and to de-skill remaining jobs as well. Firms that do not compete on these terms will be defeated in the marketplace, their profits will be lower, and management heads will roll. If a firm does not get its act together, the business will go under. For the individual firm this makes perfect sense, and there really isn’t any choice. But when all firms in the economy automate and de-skill as much as possible, it means that there is substantially less demand for the products many of them produce, and the economy stagnates. When what seems utterly progressive, dynamic, and rational for an individual firm is done by all of them, it produces an economy that is marked by low growth and mounting inequality and poverty. In an extreme case, the fruits of automation may then be denied to all.
The second core paradox of the profit system is that computerization and automation give new force to a criticism of capitalism that goes back to the time of Karl Marx, who with Friedrich Engels wrote in The Communist Manifesto that capitalism “cannot exist without constantly revolutionising the instruments of production.”226 This was the basis for arguably Marx’s strongest and most persistent critique—what David Harvey terms the “central contradiction” of capitalism in the Marxist tradition—and the one that never goes away: the system inexorably generates an increasing gap between what the economy is capable of producing—both quantitatively and qualitatively—and what it actually produces.227 In the present time, capitalism’s inability to sell at a profit all that it can produce leads to stagnation, underinvestment, unemployment, and never-ending demands for austerity. But there is a larger, existential conflict between the increasingly advanced capacity to produce and what the economy actually produces as it awaits the green light from capitalists confident in their ability to maximize profits.
This insight was not confined to those hostile to the capitalist system. The great nineteenth-century liberal economist John Stuart Mill, a contemporary of Marx, regarded capitalism as ideally suited for developing technology and expanding the productive capacity of a society, but he was dubious about its long-term suitability for a decent society.
Hitherto it is questionable if all the mechanical inventions yet made have lightened the day’s toil of any human being. They have enabled a greater population to live the same life of drudgery and imprisonment, and an increased number of manufacturers and others to make fortunes. They have increased the comforts of the middle classes. But they have not yet begun to effect those great changes in human destiny, which it is in their nature and in their futurity to accomplish.
In Mill’s mind, it was “only in the backward countries of the world that increased production is still an important object: in those most advanced, what is economically needed is a better distribution, of which one indispensable means is a stricter restraint on population.” He chastised his fellow economists for their overriding belief that “the test of prosperity is high profits,” and that the existing economic system was the optimal human condition for the rest of time. “The increase of wealth is not boundless,” he wrote. “I know not why it should be [a] matter of congratulation that persons who are already richer than any one needs to be, should have doubled their means of consuming things which give little or no pleasure except as representative of wealth.”
In Mill’s view, capitalist development would rightly lead to what he termed a “stationary state,” where the endless pursuit of ever more profit would no longer exist. Such a
society would exhibit these leading features: a well-paid and affluent body of labourers; no enormous fortunes, except what were earned and accumulated during a single lifetime; but a much larger body of persons than at present, not only exempt from the coarser toils, but with sufficient leisure, both physical and mental, from mechanical details, to cultivate freely the graces of life, and afford examples of them to the classes less favourably circumstanced for their growth. . . . The best state for human nature is that in which, while no one is poor, no one desires to be richer, nor has any reason to fear being thrust back by the efforts of others to push themselves forward.228
By the end of the second decade of the twentieth century, capitalism had undergone extraordinary growth in productive capacity—the modern US industrial economy exploded in size in the five decades following the Civil War—and the greatest and most original-thinking economists of the time returned to the paradox of spectacular productivity alongside spectacular deprivation. The visionary economist Thorstein Veblen regarded this as an exact description of the American economy. “The mechanical industry of the new order is inordinately productive,” he wrote in 1919.
So the rate and volume of output have to be regulated with a view to what the traffic will bear—that is to say, what will yield the largest net return in terms of price to the business men who manage the country’s industrial system. Otherwise, there will be “overproduction,” business depression, and consequent hard times all around. . . . That is to say, in no such community can the industrial system be allowed to work at full capacity for any appreciable interval of time, on pain of business stagnation and consequent privation for all classes and conditions of men. The requirements of profitable business will not tolerate it. So the rate and volume of output must be adjusted to the needs of the market, not to the working capacity of the available resources, equipment and man power, nor to the community’s need of consumable goods.
To Veblen this was a tragic state of affairs, as the technology responsible for it “is in an eminent sense a joint stock of knowledge and experience held in common by the civilized peoples.” Were it not “being manhandled by ignorant business men with an eye single to maximum profits, the resulting output of goods and services would doubtless exceed the current output by several hundred percent.”229
In 1930, as capitalism entered the worst depression of the twentieth century, and as the world was in the midst of “a bad attack of economic pessimism,” Keynes wrote a short piece to remind people that the problems of the economy were due not to its weakness, but rather to its extraordinary productivity. He noted that US factory output per worker was 40 percent greater in 1925 than in 1919. He projected that within readers’ lifetimes, the number of workers needed to “perform all the operations of agriculture, mining, and manufacture” would be reduced by 75 percent. Keynes hypothesized that in a century’s time, the “economic problem” would be solved, and very little human labor would be required to provide all people with living standards at least eight times greater than those of 1930.230 He thought it would possibly lead to more than a little angst as people struggled, for the first time in human existence, with the prospect that the “economic problem” was no longer all-consuming. Keynes wrote of reaching “our destination of economic bliss,” and with that a shift in human nature.231
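As a rough check on the arithmetic behind Keynes’s projection (our calculation, not something in the text): an eightfold rise in living standards over one hundred years implies a compound annual growth rate of just over 2 percent.

```python
# Compound annual growth rate implied by an eightfold increase
# in living standards over one hundred years: solve (1 + g)^100 = 8.
implied_growth = 8 ** (1 / 100) - 1
print(f"{implied_growth:.2%}")  # roughly 2.1% per year
```

A little over 2 percent real growth per capita per year, sustained for a century, was all Keynes needed for his forecast of “economic bliss.”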
Keynes’s insight regarding human nature is of particular importance. “Industriousness has served as America’s unofficial religion since its founding,” Thompson writes in the Atlantic. “The sanctity and preeminence of work lie at the heart of the country’s politics, economics, and social interactions. What might happen if work goes away?” Keynes’s pessimism at least for the short term is well founded. “The paradox of work is that many people hate their jobs,” Thompson notes, “but they are considerably more miserable doing nothing.” This leads Thompson to a provocative conclusion: “Most people do need to achieve things through, yes, work to feel a lasting sense of purpose. To envision a future that offers more than minute-to-minute satisfaction, we have to imagine how millions of people might find meaningful work without formal wages.”232
Keynes wrote before anyone anticipated how computerization and digital communication networks would turn everything upside down. “The development of automation and cybernation in the last two decades signals the end of the long, long era in which the inevitability of scarcity constituted the central fact of human existence,” Baran and Sweezy wrote in their seminal 1966 work Monopoly Capital.233 Herbert Marcuse, a close confidant and frequent correspondent of Baran, grasped the implications, much like Keynes: “Complete automation in the realm of necessity would open the dimension of free time as the one in which man’s private and societal existence would constitute itself. This would be the historical transcendence toward a new civilization.”234 But the outcome was not inexorable. “The central question is whether the prevailing relations of production promote or block, encourage or discourage the translation of these potentialities into practice,” Baran and Sweezy added. “The appearance and the widening of the gap between what is and what could be, demonstrate thus that the existing property relations and the economic, social and political institutions resting upon them have turned into an effective obstacle to the achievement of what has become possible.”235
The growth in the economy’s capacity to produce since the 1930s, or even the 1960s, has been extraordinary, much as these economists anticipated. If the experts we used as counsel for this chapter are anywhere near accurate, the next four or five decades could make the twentieth century look like the twelfth century.
In popular economic theory, such revolutionary increases in productive capacity are supposed to translate into higher living standards, much shorter workweeks, richer public infrastructure, and greater overall social security. Society should have the resources to tackle vexing environmental problems with the least amount of pain possible. In fact, however, nothing on the horizon suggests that this is in the offing. As automation and computerization take productive capacity to undreamed-of heights, jobs grow scarcer and are de-skilled, many people are poorer, and all the talk is of austerity and seemingly endless cutbacks in social services. There is growing wealth for the few combined with greater insecurity for the many. Washington, we’ve got a problem.
The false assumptions, of course, are that the benefits of the technology accrue to more than the owners of the firms deploying it, and that capitalists have an incentive to produce far more than they do in order to satisfy the needs of people worldwide. In fact, Veblen had it right: capitalists produce as much as they do only as long as it remains profitable to do so. Producing more than that lowers prices and lessens profits. In short, to follow Keynes’s logic to a place he did not go, capitalism would seem to have little or no reason to exist if the “economic problem” is solved, so it is imperative that the economic problem remain. For business and wealthy investors to continue to win, everyone else has to lose.
In our view, the evidence points in one direction: the economy needs to be fundamentally reformed, if not replaced. Capitalism as we know it is the wrong economic system for the material world that is emerging. This is a radical conclusion, but it is not made merely by radicals. The number of true believers who think leaving firms and wealthy investors alone to do as they wish will ultimately solve the employment problem and give us a great economy that can be the foundation for a vibrant democracy is shrinking, primarily because it is a faith-based position. There are also some who have a similar faith that technology is innately progressive and all-powerful, so it can and will solve capitalism’s problems for us. They tell us that all we have to do is get out of the way, make some fresh popcorn, and grab a front-row seat as the future unfolds.
But researching this book, what has been striking to us is that many, perhaps most, of the people who have studied these matters—from across the political spectrum—recognize that if the system is left alone, it will not right itself. Instead, structural changes are needed, and government will have to play the central role in determining and instituting these changes. Even those who believe that the existing capitalist system provides benefits that make it worth saving realize that significant reforms and government policy interventions are necessary to prevent intolerable outcomes. “It’s time to start discussing what kind of society we should construct around a labor-light economy,” Brynjolfsson and McAfee conclude. “How should the abundance of such an economy be shared? How can the tendency of modern capitalism to produce high levels of inequality be muted while preserving its ability to allocate resources efficiently and reward initiative and effort? What do fulfilling lives look like when they no longer center on industrial-era conceptions of work? How should education, the social safety net, taxation, and other important elements of civic society be rethought?”236
Where markets and business and private investment figure into the new economy is a matter to be studied, debated, and resolved; we only know that it cannot be the same as what we have had for generations. The solutions to the employment and economic crises in the United States are political. The great debate is over what types of reforms there should be, and what type of system we should end up with. A core responsibility of the democratic state is to provide the ground rules and basis for an economy that will best serve the democratically determined needs of the people. An unavoidable part of this debate is to take up the issues last taken seriously in the 1960s: How should technology best be deployed to serve human needs? Never has the need for such a democratic debate and policymaking been greater than it is today.
* This growing inequality and poverty make liberal and progressive economists apoplectic. Lower incomes mean less and less consumer demand for products and services, so businesses have less incentive to make new investments and hire more workers; without new hiring, incomes fall further still, which reinforces stagnation and makes significant long-term growth all but impossible. It makes the overall economy weaker and more susceptible to panics and crises. Despite the existence of proven policies to counter stagnation, elite policymakers refuse to countenance policies that would forcefully address inequality or strengthen the power of workers to gain better wages and working conditions, not to mention secure, long-term employment. The business-as-usual approach means that nothing in this chapter is likely to change for the better going forward. For that to happen, the narrow business-as-usual range of debate has to be obliterated.
* As the statistical appendix demonstrates (see notes to Chart 17), if the data is expanded to the top one hundred global firms—that is, all workers (foreign and domestic) and all revenues (foreign and domestic)—it does not alter the pattern of steadily increasing returns per worker. If anything, it appears that the ratio of revenues (or sales) to workers actually increases for these firms when foreign sales and employees are included. We stick to the United States because the data is more comprehensive for illustrating historical trends.
* This insight is hardly new; it was made in 1776, at the beginning of the Industrial Revolution, by Adam Smith in The Wealth of Nations, where he observed that simplifying labor processes made it easier to replace workers with machines. See Adam Smith, The Wealth of Nations (New York: Modern Library, 2000), p. 4.