CHAPTER TWO

THE ROBOT AT THE GATE

IN THE EARLY 1950s, Leslie Illingworth, a much-admired political cartoonist at the British satirical magazine Punch, drew a dark and foreboding sketch. Set at dusk on what appears to be a stormy autumn day, it shows a worker peering anxiously from the doorway of an anonymous manufacturing plant. One of his hands grips a small tool; the other is balled into a fist. He looks out across the muddy factory yard to the plant’s main gate. There, standing beside a sign reading “Hands Wanted,” looms a giant, broad-shouldered robot. Across its chest, emblazoned in block letters, is the word “Automation.”

The illustration was a sign of its times, a reflection of a new anxiety seeping through Western society. In 1956, it was reprinted as the frontispiece of a slender but influential book called Automation: Friend or Foe? by Robert Hugh Macmillan, an engineering professor at Cambridge University. On the first page, Macmillan posed an unsettling question: “Are we in danger of being destroyed by our own creations?” He was not, he explained, referring to the well-known “perils of unrestricted ‘push-button’ warfare.” He was talking about a less discussed but more insidious threat: “the rapidly increasing part that automatic devices are playing in the peace-time industrial life of all civilized countries.”1 Just as earlier machines “had replaced man’s muscles,” these new devices seemed likely to “replace his brains.” By taking over many good, well-paying jobs, they threatened to create widespread unemployment, leading to social strife and upheaval—of just the sort Karl Marx had foreseen a century earlier.2

But, Macmillan continued, it didn’t have to be that way. If “rightly applied,” automation could bring economic stability, spread prosperity, and relieve the human race of its toils. “My hope is that this new branch of technology may eventually enable us to lift the curse of Adam from the shoulders of man, for machines could indeed become men’s slaves rather than their masters, now that practical techniques have been devised for controlling them automatically.”3 Whether technologies of automation ultimately proved boon or bane, Macmillan warned, one thing was certain: they would play an ever greater role in industry and society. The economic imperatives of “a highly competitive world” made that inevitable.4 If a robot could work faster, cheaper, or better than its human counterpart, the robot would get the job.


“WE ARE brothers and sisters of our machines,” the technology historian George Dyson once remarked.5 Sibling relations are notoriously fraught, and so it is with our technological kin. We love our machines—not just because they’re useful to us, but because we find them companionable and even beautiful. In a well-built machine, we see some of our deepest aspirations take form: the desire to understand the world and its workings, the desire to turn nature’s power to our own purposes, the desire to add something new and of our own fashioning to the cosmos, the desire to be awed and amazed. An ingenious machine is a source of wonder and of pride.

But machines are ugly too, and we sense in them a threat to things we hold dear. Machines may be a conduit of human power, but that power has usually been wielded by the industrialists and financiers who own the contraptions, not the people paid to operate them. Machines are cold and mindless, and in their obedience to scripted routines we see an image of society’s darker possibilities. If machines bring something human to the alien cosmos, they also bring something alien to the human world. The mathematician and philosopher Bertrand Russell put it succinctly in a 1924 essay: “Machines are worshipped because they are beautiful and valued because they confer power; they are hated because they are hideous and loathed because they impose slavery.”6

As Russell’s comment suggests, the tension in Macmillan’s view of automated machines—they’d either destroy us or redeem us, liberate us or enslave us—has a long history. The same tension has run through popular reactions to factory machinery since the start of the Industrial Revolution more than two centuries ago. While many of our forebears celebrated the arrival of mechanized production, seeing it as a symbol of progress and a guarantor of prosperity, others worried that machines would steal their jobs and even their souls. Ever since, the story of technology has been one of rapid, often disorienting change. Thanks to the ingenuity of our inventors and entrepreneurs, hardly a decade has passed without the arrival of new, more elaborate, and more capable machinery. Yet our ambivalence toward these fabulous creations, creations of our own hands and minds, has remained a constant. It’s almost as if in looking at a machine we see, if only dimly, something about ourselves that we don’t quite trust.

In his 1776 masterwork The Wealth of Nations, the foundational text of free enterprise, Adam Smith praised the great variety of “very pretty machines” that manufacturers were installing to “facilitate and abridge labour.” By enabling “one man to do the work of many,” he predicted, mechanization would provide a great boost to industrial productivity.7 Factory owners would earn more profits, which they would then invest in expanding their operations—building more plants, buying more machines, hiring more employees. Each individual machine’s abridgment of labor, far from being bad for workers, would actually stimulate demand for labor in the long run.

Other thinkers embraced and extended Smith’s assessment. Thanks to the higher productivity made possible by labor-saving equipment, they predicted, jobs would multiply, wages would go up, and prices of goods would come down. Workers would have some extra cash in their pockets, which they would use to purchase products from the manufacturers that employed them. That would provide yet more capital for industrial expansion. In this way, mechanization would help set in motion a virtuous cycle, accelerating a society’s economic growth, expanding and spreading its wealth, and bringing to its people what Smith had termed “convenience and luxury.”8 This view of technology as an economic elixir seemed, happily, to be borne out by the early history of industrialization, and it became a fixture of economic theory. The idea wasn’t compelling only to early capitalists and their scholarly brethren. Many social reformers applauded mechanization, viewing it as the best hope for raising the urban masses out of poverty and servitude.

Economists, capitalists, and reformers could afford to take the long view. With the workers themselves, that wasn’t the case. Even a temporary abridgment of labor could pose a real and immediate threat to their livelihoods. The installation of new factory machines put plenty of people out of jobs, and it forced others to exchange interesting, skilled work for the tedium of pulling levers and pressing foot-pedals. In many parts of Britain during the eighteenth and the early nineteenth century, skilled workers took to sabotaging the new machinery as a way to defend their jobs, their trades, and their communities. “Machine-breaking,” as the movement came to be called, was not simply an attack on technological progress. It was a concerted attempt by tradesmen to protect their ways of life, which were very much bound up in the crafts they practiced, and to secure their economic and civic autonomy. “If the workmen disliked certain machines,” writes the historian Malcolm Thomis, drawing on contemporary accounts of the uprisings, “it was because of the use to which they were being put, not because they were machines or because they were new.”9

Machine-breaking culminated in the Luddite rebellion that raged through the industrial counties of the English Midlands from 1811 to 1816. Weavers and knitters, fearing the destruction of their small-scale, locally organized cottage industry, formed guerrilla bands with the intent of stopping big textile mills and factories from installing mechanized looms and stocking frames. The Luddites—the rebels took their now-notorious name from a legendary Leicestershire machine-breaker known as Ned Ludlam—launched nighttime raids against the plants, often wrecking the new equipment. Thousands of British troops had to be called in to battle the rebels, and the soldiers put down the revolt with brutal force, killing many and incarcerating others.

Although the Luddites and other machine-breakers had some scattered success in slowing the pace of mechanization, they certainly didn’t stop it. Machines were soon so commonplace in factories, so essential to industrial production and competition, that resisting their use came to be seen as an exercise in futility. Workers acquiesced to the new technological regime, though their distrust of machinery persisted.


IT WAS Marx who, a few decades after the Luddites lost their fight, gave the deep divide in society’s view of mechanization its most powerful and influential expression. Frequently in his writings, Marx invests factory machinery with a demonic, parasitic will, portraying it as “dead labour” that “dominates, and pumps dry, living labour power.” The workman becomes a “mere living appendage” of the “lifeless mechanism.”10 In a darkly prophetic remark during an 1856 speech, he said, “All our invention and progress seem to result in endowing material forces with intellectual life, and stultifying human life into a material force.”11 But Marx didn’t just talk about the “infernal effects” of machines. As the media scholar Nick Dyer-Witheford has explained, he also saw and lauded “their emancipatory promise.”12 Modern machinery, Marx observed in that same speech, has “the wonderful power of shortening and fructifying human labour.”13 By freeing workers from the narrow specializations of their trades, machines might allow them to fulfill their potential as “totally developed” individuals, able to shift between “different modes of activity” and hence “different social functions.”14 In the right hands—those of the workers rather than the capitalists—technology would no longer be the yoke of oppression. It would become the uplifting block and tackle of self-fulfillment.

The idea of machines as emancipators took stronger hold in Western culture as the twentieth century approached. In an 1897 article praising the mechanization of American industry, the French economist Émile Levasseur ticked off the benefits that new technology had brought to “the laboring classes.” It had raised workers’ wages and pushed down the prices they paid for goods, providing them with greater material comfort. It had spurred a redesign of factories, making workplaces cleaner, better lit, and generally more hospitable than the dark satanic mills that characterized the early years of the Industrial Revolution. Most important of all, it had elevated the kind of work that factory hands performed. “Their task has become less onerous, the machine doing everything which requires great strength; the workman, instead of bringing his muscles into play, has become an inspector, using his intelligence.” Levasseur acknowledged that laborers still grumbled about having to operate machinery. “They reproach [the machine] with demanding such continued attention that it enervates,” he wrote, and they accuse it of “degrading man by transforming him into a machine, which knows how to make but one movement, and that always the same.” Yet he dismissed such complaints as blinkered. The workers simply didn’t understand how good they had it.15

Some artists and intellectuals, believing the imaginative work of the mind to be inherently superior to the productive labor of the body, saw a technological utopia in the making. Oscar Wilde, in an essay published at about the same time as Levasseur’s, though aimed at a very different audience, foresaw a day when machines would not just alleviate toil but eliminate it. “All unintellectual labour, all monotonous, dull labour, all labour that deals with dreadful things, and involves unpleasant conditions, must be done by machinery,” he wrote. “On mechanical slavery, on the slavery of the machine, the future of the world depends.” That machines would assume the role of slaves seemed to Wilde a foregone conclusion: “There is no doubt at all that this is the future of machinery, and just as trees grow while the country gentleman is asleep, so while Humanity will be amusing itself, or enjoying cultivated leisure—which, and not labour, is the aim of man—or making beautiful things, or reading beautiful things, or simply contemplating the world with admiration and delight, machinery will be doing all the necessary and unpleasant work.”16

The Great Depression of the 1930s curbed such enthusiasm. The economic collapse prompted a bitter outcry against what had, in the Roaring Twenties, come to be known and celebrated as the Machine Age. Labor unions and religious groups, crusading editorial writers and despairing citizens—all railed against the job-destroying machines and the greedy businessmen who owned them. “Machinery did not inaugurate the phenomenon of unemployment,” wrote the author of a best-selling book called Men and Machines, “but promoted it from a minor irritation to one of the chief plagues of mankind.” It appeared, he went on, that “from now on, the better able we are to produce, the worse we shall be off.”17 The mayor of Palo Alto, California, wrote a letter to President Herbert Hoover imploring him to take action against the “Frankenstein monster” of industrial technology, a scourge that was “devouring our civilization.”18 At times the government itself inflamed the public’s fears. One report issued by a federal agency called the factory machine “as dangerous as a wild animal.” The uncontrolled acceleration of progress, its author wrote, had left society chronically unprepared to deal with the consequences.19

But the Depression did not entirely extinguish the Wildean dream of a machine paradise. In some ways, it rendered the utopian vision of progress more vivid, more necessary. The more we saw machines as our foes, the more we yearned for them to be our friends. “We are being afflicted,” wrote the great British economist John Maynard Keynes in 1930, “with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come—namely, technological unemployment.” The ability of machines to take over jobs had outpaced the economy’s ability to create valuable new work for people to do. But the problem, Keynes assured his readers, was merely a symptom of “a temporary phase of maladjustment.” Growth and prosperity would return. Per-capita income would rise. And soon, thanks to the ingenuity and efficiency of our mechanical slaves, we wouldn’t have to worry about jobs at all. Keynes thought it entirely possible that in a hundred years, by the year 2030, technological progress would have freed humankind from “the struggle for subsistence” and propelled us to “our destination of economic bliss.” Machines would be doing even more of our work for us, but that would no longer be cause for worry or despair. By then, we would have figured out how to spread material wealth to everyone. Our only problem would be to figure out how to put our endless hours of leisure to good use—to teach ourselves “to enjoy” rather than “to strive.”20

We’re still striving, and it seems a safe bet that economic bliss will not have descended upon the planet by 2030. But if Keynes let his hopes get the best of him in the dark days of 1930, he was fundamentally right about the economy’s prospects. The Depression did prove temporary. Growth returned, jobs came back, incomes shot up, and companies continued buying more and better machines. Economic equilibrium, imperfect and fragile as always, reestablished itself. Adam Smith’s virtuous cycle kept turning.

By 1962, President John F. Kennedy could proclaim, during a speech in West Virginia, “We believe that if men have the talent to invent new machines that put men out of work, they have the talent to put those men back to work.”21 From the opening “we believe,” the sentence is ringingly Kennedyesque. The simple words become resonant as they’re repeated: men, talent, men, work, talent, men, work. The drum-like rhythm marches forward, giving the stirring conclusion—“back to work”—an air of inevitability. To those listening, Kennedy’s words must have sounded like the end of the story. But they weren’t. They were the end of one chapter, and a new chapter had already begun.


WORRIES ABOUT technological unemployment have been on the rise again, particularly in the United States. The recession of the early 1990s, which saw exalted U.S. companies such as General Motors, IBM, and Boeing fire tens of thousands of workers in massive “restructurings,” prompted fears that new technologies, particularly cheap computers and clever software, were about to wipe out middle-class jobs. In 1994, the sociologists Stanley Aronowitz and William DiFazio published The Jobless Future, a book that implicated “labor-displacing technological change” in “the trend toward more low-paid, temporary, benefit-free blue- and white-collar jobs and fewer decent permanent factory and office jobs.”22 The following year, Jeremy Rifkin’s unsettling The End of Work appeared. The rise of computer automation had inaugurated a “Third Industrial Revolution,” declared Rifkin. “In the years ahead, new, more sophisticated software technologies are going to bring civilization ever closer to a near workerless world.” Society had reached a turning point, he wrote. Computers could “result in massive unemployment and a potential global depression,” but they could also “free us for a life of increasing leisure” if we were willing to rewrite the tenets of contemporary capitalism.23 The two books, and others like them, caused a stir, but once again fears about technology-induced joblessness passed quickly. The resurgence of economic growth through the middle and late 1990s, culminating in the giddy dot-com boom, turned people’s attention away from apocalyptic predictions of mass unemployment.

A decade later, in the wake of the Great Recession of 2008, the anxieties returned, stronger than ever. In mid-2009, the American economy, recovering fitfully from the economic collapse, began to expand again. Corporate profits rebounded. Businesses ratcheted their capital investments up to pre-recession levels. The stock market soared. But hiring refused to bounce back. While it’s not unusual for companies to wait until a recovery is well established before recruiting new workers, this time the hiring lag seemed interminable. Job growth remained unusually tepid, the unemployment rate stubbornly high. Seeking an explanation, and a culprit, people looked to the usual suspect: labor-saving technology.

Late in 2011, two respected MIT researchers, Erik Brynjolfsson and Andrew McAfee, published a short electronic book, Race against the Machine, in which they gently chided economists and policy makers for dismissing the possibility that workplace technology was substantially reducing companies’ need for new employees. The “empirical fact” that machines had bolstered employment for centuries “conceals a dirty secret,” they wrote. “There is no economic law that says that everyone, or even most people, automatically benefit from technological progress.” Although Brynjolfsson and McAfee were anything but technophobes—they remained “hugely optimistic” about the ability of computers and robots to boost productivity and improve people’s lives over the long run—they made a strong case that technological unemployment was real, that it had become pervasive, and that it would likely get much worse. Human beings, they warned, were losing the race against the machine.24

Their ebook was like a match thrown onto a dry field. It sparked a vigorous and sometimes caustic debate among economists, a debate that soon drew the attention of journalists. The phrase “technological unemployment,” which had faded from use after the Great Depression, took a new grip on the public mind. At the start of 2013, the TV news program 60 Minutes ran a segment, called “March of the Machines,” that examined how businesses were using new technologies in place of workers at warehouses, hospitals, law firms, and manufacturing plants. Correspondent Steve Kroft lamented “a massive high-tech industry that’s contributed enormous productivity and wealth to the American economy but surprisingly little in the way of employment.”25 Shortly after the program aired, a team of Associated Press writers published a three-part investigative report on the persistence of high unemployment. Their grim conclusion: jobs are “being obliterated by technology.” Noting that science-fiction writers have long “warned of a future when we would be architects of our own obsolescence, replaced by our machines,” the AP reporters declared that “the future has arrived.”26 They quoted one analyst who predicted that the unemployment rate would reach 75 percent by the century’s end.27

Such forecasts are easy to dismiss. Their alarmist tone echoes the refrain heard time and again since the eighteenth century. Out of every economic downturn rises the specter of a job-munching Frankenstein monster. And then, when the economic cycle emerges from its trough and jobs return, the monster goes back in its cage and the worries subside. This time, though, the economy isn’t behaving as it normally does. Mounting evidence suggests that a troubling new dynamic may be at work. Joining Brynjolfsson and McAfee, several prominent economists have begun questioning their profession’s cherished assumption that technology-fueled productivity gains will bring job and wage growth. They point out that over the last decade U.S. productivity rose at a faster pace than we saw in the preceding thirty years, that corporate profits have hit levels we haven’t seen in half a century, and that business investments in new equipment have been rising sharply. That combination should bring robust employment growth. And yet the total number of jobs in the country has barely budged. Growth and employment are “diverging in advanced countries,” says economist Michael Spence, a Nobel laureate, and technology is the main reason why: “The replacement of routine manual jobs by machines and robots is a powerful, continuing, and perhaps accelerating trend in manufacturing and logistics, while networks of computers are replacing routine white-collar jobs in information processing.”28

Some of the heavy spending on robots and other automation technologies in recent years may reflect temporary economic conditions, particularly the ongoing efforts by politicians and central banks to stimulate growth. Low interest rates and aggressive government tax incentives for capital investment have likely encouraged companies to buy labor-saving equipment and software that they might not otherwise have purchased.29 But deeper and more prolonged trends also seem to be at work. Alan Krueger, the Princeton economist who chaired Barack Obama’s Council of Economic Advisers from 2011 to 2013, points out that even before the recession “the U.S. economy was not creating enough jobs, particularly not enough middle-class jobs, and we were losing manufacturing jobs at an alarming rate.”30 Since then, the picture has only darkened. It might be assumed that, at least when it comes to manufacturing, jobs aren’t disappearing but simply migrating to countries with low wages. That’s not so. The total number of worldwide manufacturing jobs has been falling for years, even in industrial powerhouses like China, while overall manufacturing output has grown sharply.31 Machines are replacing factory workers faster than economic expansion creates new manufacturing positions. As industrial robots become cheaper and more adept, the gap between lost and added jobs will almost certainly widen. Even the news that companies like GE and Apple are bringing some manufacturing work back to the United States is bittersweet. One of the reasons the work is returning is that most of it can be done without human beings. “Factory floors these days are nearly empty of people because software-driven machines are doing most of the work,” reports economics professor Tyler Cowen.32 A company doesn’t have to worry about labor costs if it’s not employing laborers.

The industrial economy—the economy of machines—is a recent phenomenon. It has been around for just two and a half centuries, a tick of history’s second hand. Drawing definitive conclusions about the link between technology and employment from such limited experience was probably rash. The logic of capitalism, when combined with the history of scientific and technological progress, would seem to be a recipe for the eventual removal of labor from the processes of production. Machines, unlike workers, don’t demand a share of the returns on capitalists’ investments. They don’t get sick or expect paid vacations or demand yearly raises. For the capitalist, labor is a problem that progress solves. Far from being irrational, the fear that technology will erode employment is fated to come true “in the very long run,” argues the eminent economic historian Robert Skidelsky: “Sooner or later, we will run out of jobs.”33

How long is the very long run? We don’t know, though Skidelsky warns that it may be “uncomfortably close” for some countries.34 In the near term, the impact of modern technology may be felt more in the distribution of jobs than in the overall employment figures. The mechanization of manual labor during the Industrial Revolution destroyed some good jobs, but it led to the creation of vast new categories of middle-class occupations. As companies expanded to serve bigger and more far-flung markets, they hired squads of supervisors and accountants, designers and marketers. Demand grew for teachers, doctors, lawyers, librarians, pilots, and all sorts of other professionals. The makeup of the job market is never static; it changes in response to technological and social trends. But there’s no guarantee that the changes will always benefit workers or expand the middle class. With computers being programmed to take over white-collar work, many professionals are being forced into lower-paying jobs or made to trade full-time posts for part-time ones.

While most of the jobs lost during the recent recession were in well-paying industries, nearly three-fourths of the jobs created since the recession are in low-paying sectors. Having studied the causes of the “incredibly anemic employment growth” in the United States since 2000, MIT economist David Autor concludes that information technology “has really changed the distribution of occupation,” creating a widening disparity in incomes and wealth. “There is an abundance of work to do in food service and there is an abundance of work in finance, but there are fewer middle-wage, middle-income jobs.”35 As new computer technologies extend automation into even more branches of the economy, we’re likely to see an acceleration of this trend, with a further hollowing of the middle class and a growing loss of jobs among even the highest-paid professionals. “Smart machines may make higher GDP possible,” notes Paul Krugman, another Nobel Prize–winning economist, “but also reduce the demand for people—including smart people. So we could be looking at a society that grows ever richer, but in which all the gains in wealth accrue to whoever owns the robots.”36

The news is not all dire. As the U.S. economy gained steam during the second half of 2013, hiring strengthened in several sectors, including construction and health care, and there were encouraging gains in some higher-paying professions. The demand for workers remains tied to the economic cycle, if not quite so tightly as in the past. The increasing use of computers and software has itself created some very attractive new jobs as well as plenty of entrepreneurial opportunities. By historical standards, though, the number of people employed in computing and related fields remains modest. We can’t all become software programmers or robotics engineers. We can’t all decamp to Silicon Valley and make a killing writing nifty smartphone apps.* With average wages stagnant and corporate profits continuing to surge, the economy’s bounties seem likely to go on flowing to the lucky few. And JFK’s reassuring words will sound more and more suspect.

Why might this time be different? What exactly has changed that may be severing the old link between new technologies and new jobs? To answer that question we have to look back to that giant robot standing at the gate in Leslie Illingworth’s cartoon—the robot named Automation.


THE WORD automation entered the language fairly recently. As best we can tell, it was first spoken in 1946, when engineers at the Ford Motor Company felt the need to coin a term to describe the latest machinery being installed on the company’s assembly lines. “Give us some more of that automatic business,” a Ford vice president reportedly said in a meeting. “Some more of that—that—‘automation.’ ”37 Ford’s plants were already famously mechanized, with sophisticated machines streamlining every job on the line. But factory hands still had to lug parts and subassemblies from one machine to the next. The workers still controlled the pace of production. The equipment installed in 1946 changed that. Machines took over the material-handling and conveyance functions, allowing the entire assembly process to proceed automatically. The alteration in work flow may not have seemed momentous to those on the factory floor. But it was. Control over a complex industrial process had shifted from worker to machine.

The new word spread quickly. Two years later, in a report on the Ford machinery, a writer for the magazine American Machinist defined automation as “the art of applying mechanical devices to manipulate work pieces . . . in timed sequence with the production equipment so that the line can be put wholly or partially under push-button control at strategic stations.”38 As automation reached into more industries and production processes, and as it began to take on metaphorical weight in the culture, its definition grew more diffuse. “Few words of recent years have been so twisted to suit a multitude of purposes and phobias as this new word, ‘automation,’ ” grumbled a Harvard business professor in 1958. “It has been used as a technological rallying cry, a manufacturing goal, an engineering challenge, an advertising slogan, a labor campaign banner, and as the symbol of ominous technological progress.” He then offered his own, eminently pragmatic definition: “Automation simply means something significantly more automatic than previously existed in that plant, industry, or location.”39 Automation wasn’t a thing or a technique so much as a force. It was more a manifestation of progress than a particular mode of operation. Any attempt at explaining or predicting its consequences would necessarily be tentative. As with many technological trends, automation would always be both old and new, and it would require a fresh reevaluation at each stage of its advance.

That Ford’s automated equipment arrived just after the end of the Second World War was no accident. It was during the war that modern automation technology took shape. When the Nazis began their bombing blitz against Great Britain in 1940, English and American scientists faced a challenge as daunting as it was pressing: How do you knock high-flying, fast-moving bombers out of the sky with heavy missiles fired from unwieldy antiaircraft guns on the ground? The mental calculations and physical adjustments required to aim a gun accurately—not at a plane’s current position but at its probable future position—were far too complicated for a soldier to perform with the speed necessary to get a shot off while a plane was still in range. This was no job for mortals. The missile’s trajectory, the scientists saw, had to be computed by a calculating machine, using tracking data coming in from radar systems along with statistical projections of a plane’s course, and then the calculations had to be fed automatically into the gun’s aiming mechanism to guide the firing. The gun’s aim, moreover, had to be adjusted continually to account for the success or failure of previous shots.

As for the members of the gunnery crews, their work would have to change to accommodate the new generation of automated weapons. And change it did. Artillerymen soon found themselves sitting in front of screens in darkened trucks, selecting targets from radar displays. Their identities shifted along with their jobs. They were no longer seen “as soldiers,” writes one historian, but rather “as technicians reading and manipulating representations of the world.”40

In the antiaircraft cannons born of the Allied scientists’ work, we see all the elements of what now characterizes an automated system. First, at the system’s core, is a very fast calculating machine—a computer. Second is a sensing mechanism (radar, in this case) that monitors the external environment, the real world, and communicates essential data about it to the computer. Third is a communication link that allows the computer to control the movements of the physical apparatus that performs the actual work, with or without human assistance. And finally there’s a feedback method—a means of returning to the computer information about the results of its instructions so that it can adjust its calculations to correct for errors and account for changes in the environment. Sensory organs, a calculating brain, a stream of messages to control physical movements, and a feedback loop for learning: there you have the essence of automation, the essence of a robot. And there, too, you have the essence of a living being’s nervous system. The resemblance is no coincidence. In order to replace a human, an automated system first has to replicate a human, or at least some aspect of a human’s ability.
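
The four elements just described — sensor, computer, command link, feedback loop — can be caricatured in a few lines of code. This is a toy sketch, not a model of any real weapon: the “actuator” is just a number being steered toward a moving target, and the proportional controller and its gain are illustrative choices of my own, not the wartime design.

```python
def run_control_loop(target_path, gain=0.5):
    """Steer 'aim' toward a moving target using proportional feedback."""
    aim = 0.0
    history = []
    for target in target_path:   # sensor: observe the external world
        error = target - aim     # computer: compare observation to aim
        aim += gain * error      # command link: move the actuator
        history.append(aim)      # feedback: the result informs the next pass
    return history

# A target drifting steadily from 0 to 9: the aim lags behind at first,
# then settles into tracking the target at a fixed distance.
track = run_control_loop([float(x) for x in range(10)])
```

The loop never gets the answer exactly right; it gets it less wrong on every pass, which is the essential trick of any feedback system, from Watt's governor to the gun director.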

Automated machines existed before World War II. James Watt’s steam engine, the original prime mover of the Industrial Revolution, incorporated an ingenious feedback device—the fly-ball governor—that enabled it to regulate its own operation. As the engine sped up, it rotated a pair of metal balls, creating a centrifugal force that pulled a lever to close a steam valve, keeping the engine from running too fast. The Jacquard loom, invented in France around 1800, used steel punch cards to control the movements of spools of different-colored threads, allowing intricate patterns to be woven automatically. In 1866, a British engineer named J. Macfarlane Gray patented a steamship steering mechanism that was able to register the movement of a boat’s helm and, through a gear-operated feedback system, adjust the angle of the rudder to maintain a set course.41 But the development of fast computers, along with other sensitive electronic controls, opened a new chapter in the history of machines. It vastly expanded the possibilities of automation. As the mathematician Norbert Wiener, who helped write the prediction algorithms for the Allies’ automated antiaircraft gun, explained in his 1950 book The Human Use of Human Beings, the advances of the 1940s enabled inventors and engineers to go beyond “the sporadic design of individual automatic mechanisms.” The new technologies, while designed with weaponry in mind, gave rise to “a general policy for the construction of automatic mechanisms of the most varied type.” They paved the way for “the new automatic age.”42

Beyond the pursuit of progress and productivity lay another impetus for the automatic age: politics. The postwar years were characterized by intense labor strife. Managers and unions battled in most American manufacturing sectors, and the tensions were often strongest in industries essential to the federal government’s Cold War buildup of military equipment and armaments. Strikes, walkouts, and slowdowns were daily events. In 1950 alone, eighty-eight work stoppages were staged at a single Westinghouse plant in Pittsburgh. In many factories, union stewards held more power over operations than did corporate managers—the workers called the shots. Military and industrial planners saw automation as a way to shift the balance of power back to management. Electronically controlled machinery, declared Fortune magazine in a 1946 cover story titled “Machines without Men,” would prove “immensely superior to the human mechanism,” not least because machines “are always satisfied with working conditions and never demand higher wages.”43 An executive with Arthur D. Little, a leading management and engineering consultancy, wrote that the rise of automation heralded the business world’s “emancipation from human workers.”44

In addition to reducing the need for laborers, particularly skilled ones, automated equipment provided business owners and managers with a technological means to control the speed and flow of production through the electronic programming of individual machines and entire assembly lines. When, at the Ford plants, control over the pace of the line shifted to the new automated equipment, the workers lost a great deal of autonomy. By the mid-1950s, the role of labor unions in charting factory operations was much diminished.45 The lesson would prove important: in an automated system, power concentrates with those who control the programming.

Wiener foresaw, with uncanny clarity, what would come next. The technologies of automation would advance far more rapidly than anyone had imagined. Computers would get faster and smaller. The speed and capacity of electronic communication and storage systems would increase exponentially. Sensors would see, hear, and feel the world with ever greater sensitivity. Robotic mechanisms would come “to replicate more nearly the functions of the human hand as supplemented by the human eye.” The cost to manufacture all the new devices and systems would plummet. The use of automation would become both possible and economical in ever more areas. And since computers could be programmed to carry out logical functions, automation’s reach would extend beyond the work of the hand and into the work of the mind—the realm of analysis, judgment, and decision making. A computerized machine didn’t have to act by manipulating material things like guns. It could act by manipulating information. “From this stage on, everything may go by machine,” Wiener wrote. “The machine plays no favorites between manual labor and white-collar labor.” It seemed obvious to him that automation would, sooner or later, create “an unemployment situation” that would make the calamity of the Great Depression “seem a pleasant joke.”46

The Human Use of Human Beings was a best seller, as was Wiener’s earlier and much more technical treatise, Cybernetics, or Control and Communication in the Animal and the Machine. The mathematician’s unsettling analysis of technology’s trajectory became part of the intellectual texture of the 1950s. It inspired or informed many of the books and articles on automation that appeared during the decade, including Robert Hugh Macmillan’s slim volume. An aging Bertrand Russell, in a 1951 essay, “Are Human Beings Necessary?,” wrote that Wiener’s work made it clear that “we shall have to change some of the fundamental assumptions upon which the world has been run ever since civilization began.”47 Wiener even makes a brief appearance as a forgotten prophet in Kurt Vonnegut’s first novel, the 1952 dystopian satire Player Piano, in which a young engineer’s rebellion against a rigidly automated world ends with an epic episode of machine-breaking.


THE IDEA of a robot invasion may have seemed threatening, if not apocalyptic, to a public already rattled by the bomb, but automation technologies were still in their infancy during the 1950s. Their ultimate consequences could be imagined, in speculative tracts and science-fiction fantasies, but those consequences were still a long way from being experienced. Through the 1960s, most automated machines continued to resemble the primitive robotic haulers on Ford’s postwar assembly lines. They were big, expensive, and none too bright. Most of them could perform only a single, repetitive function, adjusting their movements in response to a few elementary electronic commands: speed up, slow down; move left, move right; grasp, release. The machines were extraordinarily precise, but otherwise their talents were few. Toiling anonymously inside factories, often locked within cages to protect passersby from their mindless twists and jerks, they certainly didn’t look like they were about to take over the world. They seemed little more than very well-behaved and well-coordinated beasts of burden.

But robots and other automated systems had one big advantage over the purely mechanical contraptions that came before them. Because they ran on software, they could hitch a ride on the Moore’s Law Express. They could benefit from all the rapid advances—in processor speed, programming algorithms, storage and network capacity, interface design, and miniaturization—that came to characterize the progress of computers themselves. And that, as Wiener predicted, is what happened. Robots’ senses grew sharper; their brains, quicker and more supple; their conversations, more fluent; their ability to learn, more capacious. By the early 1970s, they were taking over production work that required flexibility and dexterity—cutting, welding, assembling. By the end of that decade, they were flying planes as well as building them. And then, freed from their physical embodiments and turned into the pure logic of code, they spread out into the business world through a multitude of specialized software applications. They entered the cerebral trades of the white-collar workforce, sometimes as replacements but far more often as assistants.

Robots may have been at the factory gate in the 1950s, but it’s only recently that they’ve marched, on our orders, into offices, shops, and homes. Today, as software of what Wiener termed “the judgment-replacing type” moves from our desks to our pockets, we’re at last beginning to experience automation’s true potential for changing what we do and how we do it. Everything is being automated. Or, as Netscape founder and Silicon Valley grandee Marc Andreessen puts it, “software is eating the world.”48

That may be the most important lesson to be gleaned from Wiener’s work—and, for that matter, from the long, tumultuous history of labor-saving machinery. Technology changes, and it changes more quickly than human beings change. Where computers sprint forward at the pace of Moore’s law, our own innate abilities creep ahead with the tortoise-like tread of Darwin’s law. Where robots can be constructed in a myriad of forms, replicating everything from snakes that burrow in the ground to raptors that swoop across the sky to fish that swim through the sea, we’re basically stuck with our old, forked bodies. That doesn’t mean our machines are about to leave us in the evolutionary dust. Even the most powerful supercomputer evidences no more consciousness than a hammer. It does mean that our software and our robots will, with our guidance, continue to find new ways to outperform us—to work faster, cheaper, better. And, like those antiaircraft gunners during World War II, we’ll be compelled to adapt our own work, behavior, and skills to the capabilities and routines of the machines we depend on.

 

* The internet, it’s often noted, has opened opportunities for people to make money through their own personal initiative, with little investment of capital. They can sell used goods through eBay or crafts through Etsy. They can rent out a spare room through Airbnb or turn their car into a gypsy cab with Lyft. They can find odd jobs through TaskRabbit. But while it’s easy to pick up spare change through such modest enterprise, few people are going to be able to earn a middle-class income from the work. The real money goes to the software companies running the online clearinghouses that connect buyer and seller or lessor and lessee—clearinghouses that, being highly automated themselves, need few employees.