CHAPTER FIVE

WHITE-COLLAR COMPUTER

LATE IN THE SUMMER OF 2005, researchers at the venerable RAND Corporation in California made a stirring prediction about the future of American medicine. Having completed what they called “the most detailed analysis ever conducted of the potential benefits of electronic medical records,” they declared that the U.S. health-care system “could save more than $81 billion annually and improve the quality of care” if hospitals and physicians automated their record keeping. The savings and other benefits, which RAND had estimated “using computer simulation models,” made it clear, one of the think tank’s top scientists said, “that it is time for the government and others who pay for health care to aggressively promote health information technology.”1 The last sentence in a subsequent report detailing the research underscored the sense of urgency: “The time to act is now.”2

When the RAND study appeared, excitement about the computerization of medicine was already running high. Early in 2004, George W. Bush had issued a presidential order establishing the Health Information Technology Adoption Initiative with the goal of digitizing most U.S. medical records within ten years. By the end of 2004, the federal government was handing out millions of dollars in grants to encourage the purchase of automated systems by doctors and hospitals. In June of 2005, the Department of Health and Human Services established a task force of government officials and industry executives, the American Health Information Community, to help spur the adoption of electronic medical records. The RAND research, by putting the anticipated benefits of electronic records into hard and seemingly reliable numbers, stoked both the excitement and the spending. As the New York Times would later report, the study “helped drive explosive growth in the electronic records industry and encouraged the federal government to give billions of dollars in financial incentives to hospitals and doctors that put the systems in place.”3 Shortly after being sworn in as president in 2009, Barack Obama cited the RAND numbers when he announced a program to dole out an additional $30 billion in government funds to subsidize purchases of electronic medical record (EMR) systems. A frenzy of investment ensued, as some three hundred thousand doctors and four thousand hospitals availed themselves of Washington’s largesse.4

Then, in 2013, just as Obama was being sworn in for a second term, RAND issued a new and very different report on the prospects for information technology in health care. The exuberance was gone; the tone now was chastened and apologetic. “Although the use of health IT has increased,” the authors of the paper wrote, “quality and efficiency of patient care are only marginally better. Research on the effectiveness of health IT has yielded mixed results. Worse yet, annual aggregate expenditures on health care in the United States have grown from approximately $2 trillion in 2005 to roughly $2.8 trillion today.” Worst of all, the EMR systems that doctors rushed to install with taxpayer money are plagued by problems with “interoperability.” The systems can’t talk to each other, which leaves critical patient data locked up in individual hospitals and doctors’ offices. One of the great promises of health IT has always been that it would, as the RAND authors noted, allow “a patient or provider to access needed health information anywhere at any time,” but because current EMR applications employ proprietary formats and conventions, they simply “enforce brand loyalty to a particular health care system.” While RAND continued to express high hopes for the future, it confessed that the “rosy scenario” in its original report had not panned out.5

Other studies back up the latest RAND conclusions. Although EMR systems are becoming common in the United States, and have been common in other countries, such as the United Kingdom and Australia, for years, evidence of their benefits remains elusive. In a broad 2011 review, a team of British public-health researchers examined more than a hundred recently published studies of computerized medical systems. They concluded that when it comes to patient care and safety, there’s “a vast gap between the theoretical and empirically demonstrated benefits.” The research that has been used to promote the adoption of the systems, the scholars found, is “weak and inconsistent,” and there is “insubstantial evidence to support the cost-effectiveness of these technologies.” As for electronic medical records in particular, the investigators reported that the existing research is inconclusive and provides “only anecdotal evidence of the fundamental expected benefits and risks.”6 Some other researchers offer slightly sunnier assessments. Another 2011 literature review, by Department of Health and Human Services staffers, found that “a large majority of the recent studies show measurable benefits emerging from the adoption of health information technology.” But noting the limitations of the existing research, they also concluded that “there is only suggestive evidence that more advanced systems or specific health IT components facilitate greater benefits.”7 To date, there is no strong empirical support for claims that automating medical record keeping will lead to major reductions in health-care costs or significant improvements in the well-being of patients.

But if doctors and patients have seen few benefits from the scramble to automate record keeping, the companies that supply the systems have profited. Cerner Corporation, a medical software outfit, saw its revenues triple, from $1 billion to $3 billion, between 2005 and 2013. Cerner, as it happens, was one of five corporations that provided RAND with funding for the original 2005 study. The other sponsors, which included General Electric and Hewlett-Packard, also have substantial business interests in health-care automation. As today’s flawed systems are replaced or upgraded in the future, to fix their interoperability problems and other shortcomings, information technology companies will reap further windfalls.

* * *

THERE’S NOTHING unusual about this story. A rush to install new and untested computer systems, particularly when spurred by grand claims from technology companies and analysts, almost always produces great disappointments for the buyers and great profits for the sellers. That doesn’t mean that the systems are doomed to be a bust. As bugs are worked out, features refined, and prices cut, even overhyped systems can eventually save companies a lot of money, not least by reducing their need to hire wage-earning workers. (The investments are, of course, far more likely to generate attractive returns when businesses are spending taxpayer money rather than their own.) This historical pattern seems likely to unfold again with EMR applications and related systems. As physicians and hospitals continue to computerize their record keeping and other operations—the generous government subsidies are still flowing—demonstrable efficiency gains may be achieved in some areas, and the quality of care may well improve for some patients, particularly when that care requires the coordinated efforts of several specialists. The fragmentation and cloistering of patient data are real problems in medicine, which well-designed, standardized information systems can help fix.

Beyond standing as yet another cautionary tale about rash investments in unproven software, the original RAND report, and the reaction to it, provide deeper lessons. For one thing, the projections of “computer simulation models” should always be viewed with skepticism. Simulations are also simplifications; they replicate the real world only imperfectly, and their outputs often reflect the biases of their creators. More important, the report and its aftermath reveal how deeply the substitution myth is entrenched in the way society perceives and evaluates automation. The RAND researchers assumed that beyond the obvious technical and training challenges in installing the systems, the shift from writing medical reports on paper to composing them with computers would be straightforward. Doctors, nurses, and other caregivers would substitute an automated method for a manual method, but they wouldn’t significantly change how they practice medicine. In fact, studies show that computers can “profoundly alter patient care workflow processes,” as a group of doctors and academics reported in the journal Pediatrics in 2006. “Although the intent of computerization is to improve patient care by making it safer and more efficient, the adverse effects and unintended consequences of workflow disruption may make the situation far worse.”8

Falling victim to the substitution myth, the RAND researchers did not sufficiently account for the possibility that electronic records would have ill effects along with beneficial ones—a problem that plagues many forecasts about the consequences of automation. The overly optimistic analysis led to overly optimistic policy. As the physicians and medical professors Jerome Groopman and Pamela Hartzband noted in a withering critique of the Obama administration’s subsidies, the 2005 RAND report “essentially ignore[d] downsides to electronic medical records” and also discounted earlier research that failed to find benefits in shifting from paper to digital records.9 RAND’s assumption that automation would be a substitute for manual work proved false, as human-factors experts would have predicted. But the damage, in wasted taxpayer money and misguided software installations, was done.

EMR systems are used for more than taking and sharing notes. Most of them incorporate decision-support software that, through on-screen checklists and prompts, provides guidance and suggestions to doctors during the course of consultations and examinations. The EMR information entered by the doctor then flows into the administrative systems of the medical practice or hospital, automating the generation of bills, prescriptions, test requests, and other forms and documents. One of the unexpected results is that physicians often end up billing patients for more services, and costlier ones, than they would have before the software was installed. As a doctor fills out a computer form during an examination, the system automatically recommends procedures—checking the eyes of a diabetes patient, say—that the doctor might want to consider performing. By clicking a checkbox to verify the completion of the procedure, the doctor not only adds a note to the record of the visit, but in many cases also triggers the billing system to add a new line item to the bill. The prompts may serve as useful reminders, and they may, in rare cases, prevent a doctor from overlooking a critical component of an exam. But they also inflate medical bills—a fact that system vendors have not been shy about highlighting in their sales pitches.10

Before doctors had software to prompt them, they were less likely to add an extra charge for certain minor procedures. The procedures were subsumed into more general charges—for an office visit, say, or a yearly physical. With the prompts, the individual charges get added to the invoice automatically. Just by making an action a little easier or a little more routine, the system alters the doctor’s behavior in a small but meaningful way. The fact that the doctor often ends up making more money by following the software’s lead provides a further incentive to defer to the system’s judgment. Some experts worry that the monetary incentive may be a little too strong. In response to press reports about the unforeseen increase in medical charges resulting from electronic records, the federal government launched, in October 2012, an investigation to determine whether the new systems were abetting systematic overbilling or even outright fraud in the Medicare program. A 2014 report from the Office of the Inspector General warned that “health care providers can use [EMR] software features that may mask true authorship of the medical record and distort information in the record to inflate health care claims.”11

There’s also evidence that electronic records encourage doctors to order unnecessary tests, which also ends up increasing rather than reducing the cost of care. One study, published in the journal Health Affairs in 2012, showed that when doctors are able to easily call up a patient’s past x-rays and other diagnostic images on a computer, they are more likely to order a new imaging test than if they lacked immediate access to the earlier images. Overall, doctors with computerized systems ordered new imaging tests in 18 percent of patient visits, while those without the systems ordered new tests in just 13 percent of visits. One of the common assumptions about electronic records is that by providing easy and immediate access to past test results, they would reduce the frequency of diagnostic testing. But this study indicates that, as its authors put it, “the reverse may be true.” By making it so easy to receive and review test results, the automated systems appear to “provide subtle encouragement to physicians to order more imaging studies,” the researchers argue. “In borderline situations, substituting a few keystrokes for the sometimes time-consuming task of tracking down results from an imaging facility may tip the balance in favor of ordering a test.”12 Here again we see how automation changes people’s behavior, and the way work gets done, in ways that are virtually impossible to predict—and that may run directly counter to expectations.

* * *

THE INTRODUCTION of automation into medicine, as with its introduction into aviation and other professions, has effects that go beyond efficiency and cost. We’ve already seen how software-generated highlights on mammograms alter, sometimes for better and sometimes for worse, the way radiologists read images. As physicians come to rely on computers to aid them in more facets of their everyday work, the technology is influencing the way they learn, the way they make decisions, and even their bedside manner.

A study of primary-care physicians who adopted electronic records, conducted by Timothy Hoff, a professor at SUNY’s University at Albany School of Public Health, reveals evidence of what Hoff terms “deskilling outcomes,” including “decreased clinical knowledge” and “increased stereotyping of patients.” In 2007 and 2008, Hoff interviewed seventy-eight physicians from primary-care practices of various sizes in upstate New York. Three-fourths of the doctors were routinely using EMR systems, and most of them said they feared computerization was leading to less thorough, less personalized care. The physicians using computers told Hoff that they would regularly “cut-and-paste” boilerplate text into their reports on patient visits, whereas when they dictated notes or wrote them by hand they “gave greater consideration to the quality and uniqueness of the information being read into the record.” Indeed, said the doctors, the very process of writing and dictation had served as a kind of “red flag” that forced them to slow down and “consider what they wanted to say.” The doctors complained to Hoff that the homogenized text of electronic records can diminish the richness of their understanding of patients, undercutting their “ability to make informed decisions around diagnosis and treatment.”13

Doctors’ growing reliance on the recycling, or “cloning,” of text is a natural outgrowth of the adoption of electronic records. EMR systems change the way clinicians take notes just as, years ago, the adoption of word-processing programs changed the way writers write and editors edit. The traditional practices of dictation and composition, whatever their benefits, come to feel slow and cumbersome when forced to compete with the ease and speed of cut-and-paste, drag-and-drop, and point-and-click. Stephen Levinson, a physician and the author of a standard textbook on medical record keeping and billing, sees extensive evidence of the rote reuse of old text in new records. As doctors employ computers to take notes on patients, he says, “records of every visit read almost word for word the same except for minor variations confined almost exclusively to the chief complaint.” While such “cloned documentation” doesn’t “make sense clinically” and “doesn’t satisfy the patient’s needs,” it nevertheless becomes the default method simply because it is faster and more efficient—and, not least, because cloned text often incorporates lists of procedures that serve as another trigger for adding charges to patients’ bills.14

What cloning shears away is nuance. Nearly all the content of a typical electronic record “is boilerplate,” one internist told Hoff. “The story’s just not there. Not in my notes, not in other doctors’ notes.” The cost of diminished specificity and precision is compounded as cloned records circulate among other doctors. Physicians end up losing one of their main sources of on-the-job learning. The reading of dictated or handwritten notes from specialists has long provided an important educational benefit for primary-care doctors, deepening their understanding not only of individual patients but of everything from “disease treatments and their efficacy to new modes of diagnostic testing,” Hoff writes. As those reports come to be composed more and more of recycled text, they lose their subtlety and originality, and they become much less valuable as learning tools.15

Danielle Ofri, an internist at Bellevue Hospital in New York City who has written several books on the practice of medicine, sees other subtle losses in the switch from paper to electronic records. Although flipping through the pages of a traditional medical chart may seem archaic and inefficient these days, it can provide a doctor with a quick but meaningful sense of a patient’s health history, spanning many years. The more rigid way that computers present information actually tends to foreclose the long view. “In the computer,” Ofri writes, “all visits look the same from the outside, so it is impossible to tell which were thorough visits with extensive evaluation and which were only brief visits for medication refills.” Faced with the computer’s relatively inflexible interface, doctors often end up scanning a patient’s records for “only the last two or three visits; everything before that is effectively consigned to the electronic dust heap.”16

A recent study of the shift from paper to electronic records at University of Washington teaching hospitals provides further evidence of how the format of electronic records can make it harder for doctors to navigate a patient’s chart to find notes “of interest.” With paper records, doctors could use the “characteristic penmanship” of different specialists to quickly home in on critical information. Electronic records, with their homogenized format, erase such subtle distinctions.17 Beyond the navigational issues, Ofri worries that the organization of electronic records will alter the way physicians think: “The system encourages fragmented documentation, with different aspects of a patient’s condition secreted in unconnected fields, so it’s much harder to keep a global synthesis of the patient in mind.”18

The automation of note taking also introduces what Harvard Medical School professor Beth Lown calls a “third party” into the exam room. In an insightful 2012 paper, written with her student Dayron Rodriquez, Lown tells of how the computer itself “competes with the patient for clinicians’ attention, affects clinicians’ capacity to be fully present, and alters the nature of communication, relationships, and physicians’ sense of professional role.”19 Anyone who has been examined by a computer-tapping doctor probably has firsthand experience of at least some of what Lown describes, and researchers are finding empirical evidence that computers do indeed alter in meaningful ways the interactions between physician and patient. In a study conducted at a Veterans Administration clinic, patients who were examined by doctors taking electronic notes reported that “the computer adversely affected the amount of time the physician spent talking to, looking at, and examining them” and also tended to make the visit “feel less personal.”20 The clinic’s doctors generally agreed with the patients’ assessments. In another study, conducted at a large health maintenance organization in Israel, where the use of EMR systems is more common than in the United States, researchers found that during appointments with patients, primary-care physicians spend between 25 and 55 percent of their time looking at their computer screen. More than 90 percent of the Israeli doctors interviewed in the study said that electronic record keeping “disturbed communication with their patients.”21 Such a loss of focus is consistent with what psychologists have learned about how distracting it can be to operate a computer while performing some other task. “Paying attention to the computer and to the patient requires multitasking,” observes Lown, and multitasking “is the opposite of mindful presence.”22

The intrusiveness of the computer creates another problem that’s been widely documented. EMR and related systems are set up to provide on-screen warnings to doctors, a feature that can help avoid dangerous oversights or mistakes. If, for instance, a physician prescribes a combination of drugs that could trigger an adverse reaction in a patient, the software will highlight the risk. Most of the alerts, though, turn out to be unnecessary. They’re irrelevant, redundant, or just plain wrong. They seem to be generated not so much to protect the patient from harm as to protect the software vendor from lawsuits. (In bringing a third party into the exam room, the computer also brings in that party’s commercial and legal interests.) Studies show that primary-care physicians routinely dismiss about nine out of ten of the alerts they receive. That breeds a condition known as alert fatigue. Treating the software as an electronic boy-who-cried-wolf, doctors begin to tune out the alerts altogether. They dismiss them so quickly when they pop up that even the occasional valid warning ends up being ignored. Not only do the alerts intrude on the doctor-patient relationship; they’re served up in a way that can defeat their purpose.23

A medical exam or consultation involves an extraordinarily intricate and intimate form of personal communication. It requires, on the doctor’s part, both an empathic sensitivity to words and body language and a coldly rational analysis of evidence. To decipher a complicated medical problem or complaint, a clinician has to listen carefully to a patient’s story while at the same time guiding and filtering that story through established diagnostic frameworks. The key is to strike the right balance between grasping the specifics of the patient’s situation and inferring general patterns and probabilities derived from reading and experience. Checklists and other decision guides can serve as valuable aids in this process. They bring order to complicated and sometimes chaotic circumstances. But as the surgeon and New Yorker writer Atul Gawande explained in his book The Checklist Manifesto, the “virtues of regimentation” don’t negate the need for “courage, wits, and improvisation.” The best clinicians will always be distinguished by their “expert audacity.”24 By requiring a doctor to follow templates and prompts too slavishly, computer automation can skew the dynamics of doctor-patient relations. It can streamline patient visits and bring useful information to bear, but it can also, as Lown writes, “narrow the scope of inquiry prematurely” and even, by provoking an automation bias that gives precedence to the screen over the patient, lead to misdiagnoses. Doctors can begin to display “ ‘screen-driven’ information-gathering behaviors, scrolling and asking questions as they appear on the computer rather than following the patient’s narrative thread.”25

Being led by the screen rather than the patient is particularly perilous for young practitioners, Lown suggests, as it forecloses opportunities to learn the most subtle and human aspects of the art of medicine—the tacit knowledge that can’t be garnered from textbooks or software. It may also, in the long run, hinder doctors from developing the intuition that enables them to respond to emergencies and other unexpected events, when a patient’s fate can be sealed in a matter of minutes. At such moments, doctors can’t be methodical or deliberative; they can’t spend time gathering and analyzing information or working through templates. A computer is of little help. Doctors have to make near-instantaneous decisions about diagnosis and treatment. They have to act. Cognitive scientists who have studied physicians’ thought processes argue that expert clinicians don’t use conscious reasoning, or formal sets of rules, in emergencies. Drawing on their knowledge and experience, they simply “see” what’s wrong—oftentimes making a working diagnosis in a matter of seconds—and proceed to do what needs to be done. “The key cues to a patient’s condition,” explains Jerome Groopman in his book How Doctors Think, “coalesce into a pattern that the physician identifies as a specific disease or condition.” This is talent of a very high order, where, Groopman says, “thinking is inseparable from acting.”26 Like other forms of mental automaticity, it develops only through continuing practice with direct, immediate feedback. Put a screen between doctor and patient, and you put distance between them. You make it much harder for automaticity and intuition to develop.

* * *

IT DIDN’T take long, after their ragtag rebellion was crushed, for the surviving Luddites to see their fears come true. The making of textiles, along with the manufacture of many other goods, went from handicraft to industry within a few short years. The sites of production moved from homes and village workshops to large factories, which, to ensure access to sufficient laborers, materials, and customers, usually had to be built in or near cities. Craft workers followed the jobs, uprooting their families in a great wave of urbanization that was swollen by the loss of farming jobs to threshers and other agricultural equipment. Inside the new factories, ever more efficient and capable machines were installed, boosting productivity but also narrowing the responsibility and autonomy of those who operated the equipment. Skilled craftwork became unskilled factory labor.

Adam Smith had recognized how the specialization of factory jobs would lead to the deskilling of workers. “The man whose whole life is spent in performing a few simple operations, of which the effects too are, perhaps, always the same, or very nearly the same, has no occasion to exert his understanding, or to exercise his invention in finding out expedients for removing difficulties which never occur,” he wrote in The Wealth of Nations. “He naturally loses, therefore, the habit of such exertion, and generally becomes as stupid and ignorant as it is possible for a human creature to become.”27 Smith viewed the degradation of skills as an unfortunate but unavoidable by-product of efficient factory production. In his famous example of the division of labor at a pin-manufacturing plant, the master pin-maker who once painstakingly crafted each pin is replaced by a squad of unskilled workers, each performing a narrow task: “One man draws out the wire, another straights it, a third cuts it, a fourth points it, a fifth grinds it at the top for receiving the head; to make the head requires two or three distinct operations; to put it on, is a peculiar business, to whiten the pins is another; it is even a trade by itself to put them into the paper; and the important business of making a pin is, in this manner, divided into about eighteen distinct operations.”28 None of the men knows how to make an entire pin, but working together, each plying his own peculiar business, they churn out far more pins than could an equal number of master craftsmen working separately. And because the workers require little talent or training, the manufacturer can draw from a large pool of potential laborers, obviating the need to pay a premium for expertise.

Smith also appreciated how the division of labor eased the way for mechanization, which served to narrow workers’ skills even further. Once a manufacturer had broken an intricate process into a series of well-defined “simple operations,” it became relatively easy to design a machine to carry out each operation. The division of labor within a factory provided a set of specifications for its machinery. By the early years of the twentieth century, the deskilling of factory workers had become an explicit goal of industry, thanks to Frederick Winslow Taylor’s philosophy of “scientific management.” Believing, in line with Smith, that “the greatest prosperity” would be achieved “only when the work of [companies] is done with the smallest combined expenditure of human effort,” Taylor counseled factory owners to prepare strict instructions for how each employee should use each machine, scripting every movement of the worker’s body and mind.29 The great flaw in traditional ways of working, Taylor believed, was that they granted too much initiative and leeway to individuals. Optimum efficiency could be achieved only through the standardization of work, enforced by “rules, laws, and formulae” and reflected in the very design of machines.30

Viewed as a system, the mechanized factory, in which worker and machine merge into a tightly controlled, perfectly productive unit, was a triumph of engineering and efficiency. For the individuals who became its cogs, it brought, as the Luddites had foreseen, a sacrifice not only of skill but of independence. The loss in autonomy was more than economic. It was existential, as Hannah Arendt would emphasize in her 1958 book The Human Condition: “Unlike the tools of workmanship, which at every given moment in the work process remain the servants of the hand, the machines demand that the laborer serve them, that he adjust the natural rhythm of his body to their mechanical movement.”31 Technology had progressed—if that’s the right word—from simple tools that broadened the worker’s latitude to complex machines that constrained it.

In the second half of the last century, the relation between worker and machine grew more complicated. As companies expanded, technological progress accelerated, and consumer spending exploded, employment branched out into new forms. Managerial, professional, and clerical positions proliferated, as did jobs in the service sector. Machines assumed a welter of new forms as well, and people used them in all sorts of ways, on the job and off. The Taylorist ethos of achieving efficiency through the standardization of work processes, though still exerting a strong influence on business operations, was tempered in some companies by a desire to tap workers’ ingenuity and creativity. The coglike employee was no longer the ideal. Brought into this situation, the computer quickly took on a dual role. It served a Taylorist function of monitoring, measuring, and controlling people’s work; companies found that software applications provided a powerful means for standardizing processes and preventing deviations. But in the form of the PC, the computer also became a flexible, personal tool that granted individuals greater initiative and autonomy. The computer was both enforcer and emancipator.

As the uses of automation multiplied and spread from factory to office, the strength of the connection between technological progress and the deskilling of labor became a topic of fierce debate among sociologists and economists. In 1974, the controversy came to a head when Harry Braverman, a social theorist and onetime coppersmith, published a passionate book with a dry title, Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century. In reviewing recent trends in employment and workplace technology, Braverman argued that most workers were being funneled into routine jobs that offered little responsibility, little challenge, and little opportunity to gain know-how in anything important. They often acted as accessories to their machines and computers. “With the development of the capitalist mode of production,” he wrote, “the very concept of skill becomes degraded along with the degradation of labor, and the yardstick by which it is measured shrinks to such a point that today the worker is considered to possess a ‘skill’ if his or her job requires a few days’ or weeks’ training, several months of training is regarded as unusually demanding, and the job that calls for a learning period of six months or a year—such as computer programming—inspires a paroxysm of awe.”32 The typical craft apprenticeship, he pointed out, by way of comparison, had lasted at least four years and often as many as seven. Braverman’s dense, carefully argued treatise was widely read. Its Marxist perspective fit with the radical atmosphere of the 1960s and early 1970s as neatly as a tenon in a mortise.

Braverman’s argument didn’t impress everyone.33 Critics of his work—and there were plenty—accused him of overstating the importance of traditional craft workers, who even in the eighteenth and nineteenth centuries hadn’t accounted for all that large a proportion of the labor force. They also thought he placed too much value on the manual skills associated with blue-collar production jobs at the expense of the interpersonal and analytical skills that come to the fore in many white-collar and service posts. The latter criticism pointed to a bigger problem, one that complicates any attempt to diagnose and interpret broad shifts in skill levels across the economy. Skill is a squishy concept. Talent can take many forms, and there’s no good, objective way to measure or compare them. Is an eighteenth-century cobbler making a pair of shoes at a bench in his workshop more or less skilled than a twenty-first-century marketer using her computer to develop a promotional plan for a product? Is a plasterer more or less skilled than a hairdresser? If a pipefitter in a shipyard loses his job and, after some training, finds new work repairing computers, has he gone up or down the skill ladder? The criteria necessary to provide good answers to such questions elude us. As a result, debates about trends in deskilling, not to mention upskilling, reskilling, and other varieties of skilling, often bog down in bickering over value judgments.

But if the broad skill-shift theories of Braverman and others are fated to remain controversial, the picture becomes clearer when the focus shifts to particular trades and professions. In case after case, we’ve seen that as machines become more sophisticated, the work left to people becomes less so. Although it’s now been largely forgotten, one of the most rigorous explorations of the effect of automation on skills was completed during the 1950s by the Harvard Business School professor James Bright. He examined, in exhaustive detail, the consequences of automation for workers in thirteen different industrial settings, ranging from an engine-manufacturing plant to a bakery to a feed mill. From the case studies, he derived an elaborate hierarchy of automation. It begins with the use of simple hand tools and proceeds up through seventeen levels to the use of complex machines programmed to regulate their own operation with sensors, feedback loops, and electronic controls. Bright analyzed how various skill requirements—physical effort, mental effort, dexterity, conceptual understanding, and so on—change as machines become more fully automated. He found that skill demands increase only in the very earliest stages of automation, with the introduction of power hand tools. As more complex machines are introduced, skill demands begin to slacken, and the demands ultimately fall off sharply when workers begin to use highly automated, self-regulating machinery. “It seems,” Bright wrote in his 1958 book Automation and Management, “that the more automatic the machine, the less the operator has to do.”34

To illustrate how deskilling proceeds, Bright used the example of a metalworker. When the worker uses simple manual tools, such as files and shears, the main skill requirements are job knowledge, including in this case an appreciation of the qualities and uses of metal, and physical dexterity. When power hand tools are introduced, the job grows more complicated and the cost of errors is magnified. The worker is called on to display “new levels of dexterity and decision-making” as well as greater attentiveness. He becomes a “machinist.” But when hand tools are replaced by mechanisms that perform a series of operations, such as milling machines that cut and grind blocks of metal into precise three-dimensional shapes, “attention, decision-making, and machine control responsibilities are partially or largely reduced” and “the technical knowledge requirement of machine functioning and adjustment is reduced tremendously.” The machinist becomes a “machine operator.” When mechanization becomes truly automatic—when machines are programmed to control themselves—the worker “contributes little or no physical or mental effort to the production activity.” He doesn’t even require much job knowledge, as that knowledge has effectively gone into the machine through its design and coding. His job, if it still exists, is reduced to “patrolling.” The metalworker becomes “a sort of watchman, a monitor, a helper.” He might best be thought of as “a liaison man between machine and operating management.” Overall, concluded Bright, “the progressive effect of automation is first to relieve the operator of manual effort and then to relieve him of the need to apply continuous mental effort.”35

When Bright began his study, the prevailing assumption, among business executives, politicians, and academics alike, was that automated machinery would demand greater skills and training on the part of workers. Bright discovered, to his surprise, that the opposite was more often the case: “I was startled to find that the upgrading effect had not occurred to anywhere near the extent that is often assumed. On the contrary, there was more evidence that automation had reduced the skill requirements of the operating work force.” In a 1966 report for a U.S. government commission on automation and employment, Bright reviewed his original research and discussed the technological developments that had occurred in the succeeding years. The advance of automation, he noted, had continued apace, propelled by the rapid deployment of mainframe computers in business and industry. The early evidence suggested that the broad adoption of computers would continue rather than reverse the deskilling trend. “The lesson,” he wrote, “should be increasingly clear—it is not necessarily true that highly complex equipment requires skilled operators. The ‘skill’ can be built into the machine.”36


IT MAY seem as though a factory worker operating a noisy industrial machine has little in common with a highly educated professional entering esoteric information through a touchscreen or keyboard in a quiet office. But in both cases, we see a person sharing a job with an automated system—with another party. And, as Bright’s work and subsequent studies of automation make clear, the sophistication of the system, whether it operates mechanically or digitally, determines how roles and responsibilities are divided and, in turn, the set of skills each party is called upon to exercise. As more skills are built into the machine, it assumes more control over the work, and the worker’s opportunity to engage in and develop deeper talents, such as those involved in interpretation and judgment, dwindles. When automation reaches its highest level, when it takes command of the job, the worker, skillwise, has nowhere to go but down. The immediate product of the joint machine-human labor, it’s important to emphasize, may be superior, according to measures of efficiency and even quality, but the human party’s responsibility and agency are nonetheless curtailed. “What if the cost of machines that think is people who don’t?” asked George Dyson, the technology historian, in 2008.37 It’s a question that gains salience as we continue to shift responsibility for analysis and decision making to our computers.

The expanding ability of decision-support systems to guide doctors’ thoughts, and to take control of certain aspects of medical decision making, reflects recent and dramatic gains in computing. When doctors make diagnoses, they draw on their knowledge of a large body of specialized information, learned through years of rigorous education and apprenticeship as well as the ongoing study of medical journals and other relevant literature. Until recently, it was difficult, if not impossible, for computers to replicate such deep, specialized, and often tacit knowledge. But inexorable advances in processing speed, precipitous declines in data-storage and networking costs, and breakthroughs in artificial-intelligence methods such as natural language processing and pattern recognition have changed the equation. Computers have become much more adept at reviewing and interpreting vast amounts of text and other information. By spotting correlations in the data—traits or phenomena that tend to be found together or to occur simultaneously or sequentially—computers are often able to make accurate predictions, calculating, say, the probability that a patient displaying a set of symptoms has or will develop a particular disease or the odds that a patient with a certain disease will respond well to a particular drug or other treatment regimen.
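
A toy version of that kind of probability calculation can be sketched in a few lines of Python. The example below uses a simple naive Bayes tally over invented past cases (the symptom names, disease labels, and counts are all hypothetical, chosen only for illustration) to estimate the chance that a new patient’s symptoms point to a given disease:

```python
from collections import Counter

def fit(records):
    """Tally hypothetical past cases: each record pairs a set of
    observed symptoms with the disease eventually diagnosed."""
    disease_counts = Counter(disease for _, disease in records)
    symptom_counts = {d: Counter() for d in disease_counts}
    for symptoms, disease in records:
        symptom_counts[disease].update(symptoms)
    return disease_counts, symptom_counts, len(records)

def posterior(model, symptoms):
    """Naive-Bayes estimate of P(disease | symptoms), with add-one
    smoothing so an unseen symptom never zeroes out a disease."""
    disease_counts, symptom_counts, total = model
    scores = {}
    for disease, n in disease_counts.items():
        p = n / total  # base rate of the disease among past cases
        for s in symptoms:
            p *= (symptom_counts[disease][s] + 1) / (n + 2)
        scores[disease] = p
    z = sum(scores.values())
    return {d: p / z for d, p in scores.items()}
```

The model simply multiplies base rates by symptom frequencies drawn from prior cases; it surfaces the correlation without any grasp of why a symptom accompanies a disease.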

Through machine-learning techniques like decision trees and neural networks, which dynamically model complex statistical relationships among phenomena, computers are also able to refine the way they make predictions as they process more data and receive feedback about the accuracy of earlier guesses.38 The weightings they give different variables get more precise, and their calculations of probability better reflect what happens in the real world. Today’s computers get smarter as they gain experience, just as people do. New “neuromorphic” microchips, which have machine-learning protocols hardwired into their circuitry, will boost computers’ learning ability in coming years, some computer scientists believe. Machines will become more discerning. We may bristle at the idea that computers are “smart” or “intelligent,” but the fact is that while they may lack the understanding, empathy, and insight of doctors, computers are able to replicate many of the judgments of doctors through the statistical analysis of large amounts of digital information—what’s come to be known as “big data.” Many of the old debates about the meaning of intelligence are being rendered moot by the brute number-crunching force of today’s data-processing machines.
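
The refinement loop described here can be illustrated with a minimal sketch (plain online logistic regression, not any particular commercial system): each prediction is compared with the observed outcome, and every variable’s weighting is nudged in proportion to the error.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_online(examples, lr=0.5, passes=50):
    """Online logistic regression: each (inputs, outcome) pair is a
    piece of feedback, and each weight is adjusted in proportion to
    how badly the current guess missed."""
    n_features = len(examples[0][0])
    w = [0.0] * n_features
    for _ in range(passes):
        for x, y in examples:
            guess = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            error = y - guess  # feedback on the earlier guess
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
    return w

def predict(w, x):
    """Probability estimate under the current weightings."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
```

Run over a handful of labeled examples, the weights drift toward values that make the guesses match the outcomes, which is all that “learning from feedback” means here.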

The diagnostic skills of computers will only get better. As more data about individual patients are collected and stored, in the form of electronic records, digitized images and test results, pharmacy transactions, and, in the not-too-distant future, readings from personal biological sensors and health-monitoring apps, computers will become more proficient at finding correlations and calculating probabilities at ever finer levels of detail. Templates and guidelines will become more comprehensive and elaborate. Given the current stress on achieving greater efficiency in health care, we’re likely to see the Taylorist ethos of optimization and standardization take hold throughout the medical field. The already strong trend toward replacing personal clinical judgment with the statistical outputs of so-called evidence-based medicine will gain momentum. Doctors will face increasing pressure, if not outright managerial fiat, to cede more control over diagnoses and treatment decisions to software.

To put it into uncharitable but not inaccurate terms, many doctors may soon find themselves taking on the role of human sensors who collect information for a decision-making computer. The doctors will examine the patient and enter data into electronic forms, but the computer will take the lead in suggesting diagnoses and recommending therapies. Thanks to the steady escalation of computer automation through Bright’s hierarchy, physicians seem destined to experience, at least in some areas of their practice, the same deskilling effect that was once restricted to factory hands.

They will not be alone. The incursion of computers into elite professional work is happening everywhere. We’ve already seen how the thinking of corporate auditors is being shaped by expert systems that make predictions about risks and other variables. Other financial professionals, from loan officers to investment managers, also depend on computer models to guide their decisions, and Wall Street is now largely under the control of correlation-sniffing computers and the quants who program them. The number of people employed as securities dealers and traders in New York City plummeted by a third, from 150,000 to 100,000, between 2000 and 2013, despite the fact that Wall Street firms were often posting record profits. The overriding goal of brokerage and investment banking firms is “automating the system and getting rid of the traders,” one financial industry analyst explained to a Bloomberg reporter. As for the traders who remain, “all they do today is hit buttons on computer screens.”39

That’s true not only in the trading of simple stocks and bonds but also in the packaging and dealing of complex financial instruments. Ashwin Parameswaran, a technology analyst and former investment banker, notes that “banks have made a significant effort to reduce the amount of skill and know-how required to price and trade financial derivatives. Trading systems have been progressively modified so that as much knowledge as possible is embedded within the software.”40 Predictive algorithms are even moving into the lofty realm of venture capitalism, where top investors have long prided themselves on having a good nose for business and innovation. Prominent venture-capital firms like the Ironstone Group and Google Ventures now use computers to sniff out patterns in records of entrepreneurial success, and they place their bets accordingly.

A similar trend is under way in the law. For years, attorneys have depended on computers to search legal databases and prepare documents. Recently, software has taken a more central role in law offices. The critical process of document discovery, in which, traditionally, junior lawyers and paralegals read through reams of correspondence, email messages, and notes in search of evidence, has been largely automated. Computers can parse thousands of pages of digitized documents in seconds. Using e-discovery software with language-analysis algorithms, the machines not only spot relevant words and phrases but also discern chains of events, relationships among people, and even personal emotions and motivations. A single computer can take over the work of dozens of well-paid professionals. Document-preparation software has also advanced. By filling out a simple checklist, a lawyer can assemble a complex contract in an hour or two—a job that once took days.

On the horizon are bigger changes. Legal software firms are beginning to develop statistical prediction algorithms that, by analyzing many thousands of past cases, can recommend trial strategies, such as the choice of a venue or the terms of a settlement offer, that carry high probabilities of success. Software will soon be able to make the kinds of judgments that up to now required the experience and insight of a senior litigator.41 Lex Machina, a company started in 2010 by a group of Stanford law professors and computer scientists, offers a preview of what’s coming. With a database covering some 150,000 intellectual property cases, it runs computer analyses that predict the outcomes of patent lawsuits under various scenarios, taking into account the court, the presiding judge and participating attorneys, the litigants, the outcomes of related cases, and other factors.

Predictive algorithms are also assuming more control over the decisions made by business executives. Companies are spending billions of dollars a year on “people analytics” software that automates decisions about hiring, pay, and promotion. Xerox now relies exclusively on computers to choose among applicants for its fifty thousand call-center jobs. Candidates sit at a computer for a half-hour personality test, and the hiring software immediately gives them a score reflecting the likelihood that they’ll perform well, show up for work reliably, and stick with the job. The company extends offers to those with high scores and sends low scorers on their way.42 UPS uses predictive algorithms to chart daily routes for its drivers. Retailers use them to determine the optimal arrangement of merchandise on store shelves. Marketers and ad agencies use them in deciding where and when to run advertisements and in generating promotional messages on social networks. Managers increasingly find themselves playing a subservient role to software. They review and rubber-stamp plans and decisions produced by computers.

There’s an irony here. In shifting the center of the economy from physical goods to data flows, computers brought new status and wealth to information workers during the last decades of the twentieth century. People who made their living by manipulating signs and symbols on screens became the stars of the new economy, even as the factory jobs that had long buttressed the middle class were being transferred overseas or handed off to robots. The dot-com bubble of the late 1990s, when for a few euphoric years riches flooded out of computer networks and into personal brokerage accounts, seemed to herald the start of a golden age of unlimited economic opportunity—what technology boosters dubbed a “long boom.” But the good times proved fleeting. Now we’re seeing that, as Norbert Wiener predicted, automation doesn’t play favorites. Computers are as good at analyzing symbols and otherwise parsing and managing information as they are at directing the moves of industrial robots. Even the people who operate complex computer systems are losing their jobs to software, as data centers, like factories, become increasingly automated. The vast server farms operated by companies like Google, Amazon, and Apple essentially run themselves. Thanks to virtualization, an engineering technique that uses software to replicate the functions of hardware components like servers, the facilities’ operations can be monitored and controlled by algorithms. Network problems and application glitches can be detected and fixed automatically, often in a matter of seconds. It may turn out that the late twentieth century’s “intellectualization of labor,” as the Italian media scholar Franco Berardi has termed it,43 was just a precursor to the early twenty-first century’s automation of intellect.

It’s always risky to speculate how far computers will go in mimicking the insights and judgments of people. Extrapolations based on recent computing trends have a way of turning into fantasies. But even if we assume, contrary to the extravagant promises of big-data evangelists, that there are limits to the applicability and usefulness of correlation-based predictions and other forms of statistical analysis, it seems clear that computers are a long way from bumping up against those limits. When, in early 2011, the IBM supercomputer Watson took the Jeopardy! crown, thrashing two of the quiz show’s top players, we got a preview of where computers’ analytical talents are heading. Watson’s ability to decipher clues was astonishing, but by the standards of contemporary artificial-intelligence programming, the computer was not performing an exceptional feat. It was, essentially, searching a vast database of documents for potential answers and then, by working simultaneously through a variety of prediction routines, determining which answer had the highest probability of being the correct one. But it was performing that feat so quickly that it was able to outthink exceptionally smart people in a tricky test involving trivia, wordplay, and recall.
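
That pipeline, generating many candidate answers and then ranking them by combined scores, can be caricatured in a few lines. The sketch below is a deliberately crude toy, nothing like Watson’s actual engineering: it mines candidate words from a small document store and ranks them with a single co-occurrence routine.

```python
STOPWORDS = {"the", "a", "an", "is", "of", "in", "and"}

def candidates(clue, documents):
    """Candidate generation: every non-trivial word in the corpus
    that isn't already part of the clue."""
    clue_words = set(clue.lower().split())
    words = {w for doc in documents for w in doc.lower().split()}
    return words - clue_words - STOPWORDS

def cooccurrence(clue, cand, documents):
    """One toy scoring routine: how many clue words appear in
    documents that also mention the candidate."""
    clue_words = set(clue.lower().split())
    return sum(len(clue_words & set(doc.lower().split()))
               for doc in documents
               if cand in doc.lower().split())

def answer(clue, documents, scorers=(cooccurrence,)):
    """Rank candidates by their combined score across all routines
    and return the highest-confidence guess."""
    return max(candidates(clue, documents),
               key=lambda c: sum(s(clue, c, documents) for s in scorers))
```

With a three-sentence corpus about capitals, the clue “the capital of france” surfaces “paris” simply because that word keeps company with the clue’s words; adding more routines to the scorers tuple is how such systems hedge any single routine’s blind spots.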

Watson represents the flowering of a new, pragmatic form of artificial intelligence. Back in the 1950s and 1960s, when digital computers were still new, many mathematicians and engineers, and quite a few psychologists and philosophers, came to believe that the human brain had to operate like some sort of digital calculating machine. They saw in the computer a metaphor and a model for the mind. Creating artificial intelligence, it followed, would be fairly straightforward: you’d figure out the algorithms that run inside our skulls and then you’d translate those programs into software code. It didn’t work. The original artificial-intelligence strategy failed miserably. Whatever it is that goes on inside our brains, it turned out, can’t be reduced to the computations that go on inside computers.* Today’s computer scientists are taking a very different approach to artificial intelligence that’s at once less ambitious and more effective. The goal is no longer to replicate the process of human thought—that’s still beyond our ken—but rather to replicate its results. These scientists look at a particular product of the mind—a hiring decision, say, or an answer to a trivia question—and then program a computer to accomplish the same result in its own mindless way. The workings of Watson’s circuits bear little resemblance to the workings of the mind of a person playing Jeopardy!, but Watson can still post a higher score.

In the 1930s, while working on his doctoral thesis, the British mathematician and computing pioneer Alan Turing came up with the idea of an “oracle machine.” It was a kind of computer that, applying a set of explicit rules to a store of data through “some unspecified means,” could answer questions that normally would require tacit human knowledge. Turing was curious to figure out “how far it is possible to eliminate intuition, and leave only ingenuity.” For the purposes of his thought experiment, he posited that there would be no limit to the machine’s number-crunching acumen, no upper bound to the speed of its calculations or the amount of data it could take into account. “We do not mind how much ingenuity is required,” he wrote, “and therefore assume it to be available in unlimited supply.”44 Turing was, as usual, prescient. He understood, as few others did at the time, the latent intelligence of algorithms, and he foresaw how that intelligence would be released by speedy calculations. Computers and databases will always have limits, but in systems like Watson we see the arrival of operational oracle machines. What Turing could only imagine, engineers are now building. Ingenuity is replacing intuition.

Watson’s data-analysis acumen is being put to practical use as a diagnostic aid for oncologists and other doctors, and IBM foresees further applications in such fields as law, finance, and education. Spy agencies like the CIA and the NSA are also reported to be testing the system. If Google’s driverless car reveals the newfound power of computers to replicate our psychomotor skills, to match or exceed our ability to navigate the physical world, Watson demonstrates computers’ newfound power to replicate our cognitive skills, to match or exceed our ability to navigate the world of symbols and ideas.


BUT THE replication of the outputs of thinking is not thinking. As Turing himself stressed, algorithms will never replace intuition entirely. There will always be a place for “spontaneous judgments which are not the result of conscious trains of reasoning.”45 What really makes us smart is not our ability to pull facts from documents or decipher statistical patterns in arrays of data. It’s our ability to make sense of things, to weave the knowledge we draw from observation and experience, from living, into a rich and fluid understanding of the world that we can then apply to any task or challenge. It’s this supple quality of mind, spanning conscious and unconscious cognition, reason and inspiration, that allows human beings to think conceptually, critically, metaphorically, speculatively, wittily—to take leaps of logic and imagination.

Hector Levesque, a computer scientist at the University of Toronto, provides an example of a simple question that people can answer in a snap but that baffles computers:

The large ball crashed right through the table because it was made of Styrofoam.

What was made of Styrofoam, the large ball or the table?

We come up with the answer effortlessly because we understand what Styrofoam is and what happens when you drop something on a table and what tables tend to be like and what the adjective large implies. We grasp the context, both of the situation and of the words used to describe it. A computer, lacking any true understanding of the world, finds the language of the question hopelessly ambiguous. It remains locked in its algorithms. Reducing intelligence to the statistical analysis of large data sets “can lead us,” says Levesque, “to systems with very impressive performance that are nonetheless idiot-savants.” They might be great at chess or Jeopardy! or facial recognition or other tightly circumscribed mental exercises, but they “are completely hopeless outside their area of expertise.”46 Their precision is remarkable, but it’s often a symptom of the narrowness of their perception.

Even when aimed at questions amenable to probabilistic answers, computer analysis is not flawless. The speed and apparent exactitude of computer calculations can mask limitations and distortions in the underlying data, not to mention imperfections in the data-mining algorithms themselves. Any large data set holds an abundance of spurious correlations along with the reliable ones. It’s not hard to be misled by mere coincidence or to conjure a phantom association.47 Once a particular data set becomes the basis for important decisions, moreover, the data and its analysis become vulnerable to corruption. Seeking financial, political, or social advantage, people will try to game the system. As the social scientist Donald T. Campbell explained in a renowned 1976 paper, “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”48
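
How abundant those spurious correlations are is easy to demonstrate. The short experiment below (a self-contained illustration, not drawn from any real data set) generates a couple of hundred series of pure random noise and then searches every pair for the strongest correlation:

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def strongest_phantom(n_series=200, length=10, seed=1):
    """Correlate every pair of independent random series and report
    the strongest coefficient the search turns up."""
    rng = random.Random(seed)
    series = [[rng.random() for _ in range(length)]
              for _ in range(n_series)]
    return max(abs(pearson(a, b))
               for i, a in enumerate(series)
               for b in series[i + 1:])
```

With 200 noise series there are nearly 20,000 pairs to test, and the strongest of them will typically show a coefficient above 0.9: an “association” conjured entirely from coincidence.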

Flaws in data and algorithms can leave professionals, and the rest of us, susceptible to an especially pernicious form of automation bias. “The threat is that we will let ourselves be mindlessly bound by the output of our analyses even when we have reasonable grounds for suspecting something is amiss,” warn Viktor Mayer-Schönberger and Kenneth Cukier in their 2013 book Big Data. “Or that we will attribute a degree of truth to the data which it does not deserve.”49 A particular risk with correlation-calculating algorithms stems from their reliance on data about the past to anticipate the future. In most cases, the future behaves as expected; it follows precedent. But on those peculiar occasions when conditions veer from established patterns, the algorithms can make wildly inaccurate predictions—a fact that has already spelled disaster for some highly computerized hedge funds and brokerage firms. For all their gifts, computers still display a frightening lack of common sense.

The more we embrace what Microsoft researcher Kate Crawford terms “data fundamentalism,”50 the more tempted we’ll be to devalue the many talents computers can’t mimic—to grant so much control to software that we restrict people’s ability to exercise the know-how that comes from real experience and that can often lead to creative, counterintuitive insights. As some of the unforeseen consequences of electronic medical records show, templates and formulas are necessarily reductive and can all too easily become straitjackets of the mind. The Vermont doctor and medical professor Lawrence Weed has, since the 1960s, been a forceful and eloquent advocate for using computers to help doctors make smart, informed decisions.51 He’s been called the father of electronic medical records. But even he warns that the current “misguided use of statistical knowledge” in medicine “systematically excludes the individualized knowledge and data essential to patient care.”52

Gary Klein, a research psychologist who studies how people make decisions, has deeper worries. By forcing physicians to follow set rules, evidence-based medicine “can impede scientific progress,” he writes. Should hospitals and insurers “mandate EBM, backed up by the threat of lawsuits if adverse outcomes are accompanied by any departure from best practices, physicians will become reluctant to try alternative treatment strategies that have not yet been evaluated using randomized controlled trials. Scientific advancement can become stifled if front-line physicians, who blend medical expertise with respect for research, are prevented from exploration and are discouraged from making discoveries.”53

If we’re not careful, the automation of mental labor, by changing the nature and focus of intellectual endeavor, may end up eroding one of the foundations of culture itself: our desire to understand the world. Predictive algorithms may be supernaturally skilled at discovering correlations, but they’re indifferent to the underlying causes of traits and phenomena. Yet it’s the deciphering of causation—the meticulous untangling of how and why things work the way they do—that extends the reach of human understanding and ultimately gives meaning to our search for knowledge. If we come to see automated calculations of probability as sufficient for our professional and social purposes, we risk losing or at least weakening our desire and motivation to seek explanations, to venture down the circuitous paths that lead toward wisdom and wonder. Why bother, if a computer can spit out “the answer” in a millisecond or two?

In his 1947 essay “Rationalism in Politics,” the British philosopher Michael Oakeshott provided a vivid description of the modern rationalist: “His mind has no atmosphere, no changes of season and temperature; his intellectual processes, so far as possible, are insulated from all external influence and go on in the void.” The rationalist has no concern for culture or history; he neither cultivates nor displays a personal perspective. His thinking is notable only for “the rapidity with which he reduces the tangle and variety of experience” into “a formula.”54 Oakeshott’s words also provide us with a perfect description of computer intelligence: eminently practical and productive and entirely lacking in curiosity, imagination, and worldliness.

 

* The use of terms like neural network and neuromorphic processing may give the impression that computers operate the way brains operate (or vice versa). But the terms shouldn’t be taken literally; they’re figures of speech. Since we don’t yet know how brains operate, how thought and consciousness arise from the interplay of neurons, we can’t build computers that work as brains do.