CHAPTER FOUR

THE DEGENERATION EFFECT

A HUNDRED YEARS AGO, in his book An Introduction to Mathematics, the British philosopher Alfred North Whitehead wrote, “Civilization advances by extending the number of important operations which we can perform without thinking about them.” Whitehead wasn’t writing about machinery. He was writing about the use of mathematical symbols to represent ideas or logical processes—an early example of how intellectual work can be encapsulated in code. But he intended his observation to be taken generally. The common notion that “we should cultivate the habit of thinking of what we are doing,” he wrote, is “profoundly erroneous.” The more we can relieve our minds of routine chores, offloading the tasks to technological aids, the more mental power we’ll be able to store up for the deepest, most creative kinds of reasoning and conjecture. “Operations of thought are like cavalry charges in battle—they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.”1

It’s hard to imagine a more succinct or confident expression of faith in automation as a cornerstone of progress. Implicit in Whitehead’s words is a belief in a hierarchy of human action. Every time we offload a job to a tool or a machine, or to a symbol or a software algorithm, we free ourselves to climb to a higher pursuit, one requiring greater dexterity, richer intelligence, or a broader perspective. We may lose something with each upward step, but what we gain is, in the end, far greater. Taken to an extreme, Whitehead’s sense of automation as liberation turns into the techno-utopianism of Wilde and Keynes, or Marx at his sunniest—the dream that machines will free us from our earthly labors and deliver us back to an Eden of leisurely delights. But Whitehead didn’t have his head in the clouds. He was making a pragmatic point about how to spend our time and exert our effort. In a publication from the 1970s, the U.S. Department of Labor summed up the job of secretaries by saying that they “relieve their employers of routine duties so they can work on more important matters.”2 Software and other automation technologies, in the Whitehead view, play an analogous role.

History provides plenty of evidence to support Whitehead. People have been handing off chores, both physical and mental, to tools since the invention of the lever, the wheel, and the counting bead. The transfer of work has allowed us to tackle thornier challenges and rise to greater achievements. That’s been true on the farm, in the factory, in the laboratory, in the home. But we shouldn’t take Whitehead’s observation for a universal truth. He was writing when automation was limited to distinct, well-defined, and repetitive tasks—weaving fabric with a steam loom, harvesting grain with a combine, multiplying numbers with a slide rule. Automation is different now. Computers, as we’ve seen, can be programmed to perform or support complex activities in which a succession of tightly coordinated tasks is carried out through an evaluation of many variables. In automated systems today, the computer often takes on intellectual work—observing and sensing, analyzing and judging, even making decisions—that until recently was considered the preserve of humans. The person operating the computer is left to play the role of a high-tech clerk, entering data, monitoring outputs, and watching for failures. Rather than opening new frontiers of thought and action to its human collaborators, software narrows our focus. We trade subtle, specialized talents for more routine, less distinctive ones.

Most of us assume, as Whitehead did, that automation is benign, that it raises us to higher callings but doesn’t otherwise alter the way we behave or think. That’s a fallacy. It’s an expression of what scholars of automation have come to call the “substitution myth.” A labor-saving device doesn’t just provide a substitute for some isolated component of a job. It alters the character of the entire task, including the roles, attitudes, and skills of the people who take part in it. As Raja Parasuraman explained in a 2000 journal article, “Automation does not simply supplant human activity but rather changes it, often in ways unintended and unanticipated by the designers.”3 Automation remakes both work and worker.


WHEN PEOPLE tackle a task with the aid of computers, they often fall victim to a pair of cognitive ailments, automation complacency and automation bias. Both reveal the traps that lie in store when we take the Whitehead route of performing important operations without thinking about them.

Automation complacency takes hold when a computer lulls us into a false sense of security. We become so confident that the machine will work flawlessly, handling any challenge that may arise, that we allow our attention to drift. We disengage from our work, or at least from the part of it that the software is handling, and as a result may miss signals that something is amiss. Most of us have experienced complacency when at a computer. In using email or word-processing software, we become less vigilant proofreaders when the spell checker is on.4 That’s a simple example, which at worst can lead to a moment of embarrassment. But as the sometimes tragic experience of aviators shows, automation complacency can have deadly consequences. In the worst cases, people become so trusting of the technology that their awareness of what’s going on around them fades completely. They tune out. If a problem suddenly crops up, they may act bewildered and waste precious moments trying to reorient themselves.

Automation complacency has been documented in many high-risk situations, from battlefields to industrial control rooms to the bridges of ships and submarines. One classic case involved a 1,500-passenger ocean liner named the Royal Majesty, which in the spring of 1995 was sailing from Bermuda to Boston on the last leg of a week-long cruise. The ship was outfitted with a state-of-the-art automated navigation system that used GPS signals to keep it on course. An hour into the voyage, the cable for the GPS antenna came loose and the navigation system lost its bearings. It continued to give readings, but they were no longer accurate. For more than thirty hours, as the ship slowly drifted off its appointed route, the captain and crew remained oblivious to the problem, despite clear signs that the system had failed. At one point, a mate on watch was unable to spot an important locational buoy that the ship was due to pass. He failed to report the fact. His trust in the navigation system was so complete that he assumed the buoy was there and he simply didn’t see it. Nearly twenty miles off course, the ship finally ran aground on a sandbar near Nantucket Island. No one was hurt, fortunately, though the cruise company suffered millions in damages. Government safety investigators concluded that automation complacency caused the mishap. The ship’s officers were “overly reliant” on the automated system, to the point that they ignored other “navigation aids [and] lookout information” that would have told them they were dangerously off course. Automation, the investigators reported, had “the effect of leaving the mariner out of meaningful control or active participation in the operation of the ship.”5

Complacency can plague people who work in offices as well as those who ply airways and seaways. In an investigation of how design software has influenced the building trades, MIT sociologist Sherry Turkle documented a change in architects’ attention to detail. When plans were hand-drawn, architects would painstakingly double-check all the dimensions before handing blueprints over to construction crews. The architects knew that they were fallible, that they could make the occasional goof, and so they followed an old carpentry dictum: measure twice, cut once. With software-generated plans, they’re less careful about verifying measurements. The apparent precision of computer renderings and printouts leads them to assume that the figures are accurate. “It seems presumptuous to check,” one architect told Turkle. “I mean, how could I do a better job than the computer? It can do things down to hundredths of an inch.” Such complacency, which can be shared by engineers and builders, has led to costly mistakes in planning and construction. Computers don’t make goofs, we tell ourselves, even though we know that their outputs are only as good as our inputs. “The fancier the computer system,” one of Turkle’s students observed, “the more you start to assume that it is correcting your errors, the more you start to believe that what comes out of the machine is just how it should be. It is just a visceral thing.”6

Automation bias is closely related to automation complacency. It creeps in when people give undue weight to the information coming through their monitors. Even when the information is wrong or misleading, they believe it. Their trust in the software becomes so strong that they ignore or discount other sources of information, including their own senses. If you’ve ever found yourself lost or going around in circles after slavishly following flawed or outdated directions from a GPS device or other digital mapping tool, you’ve felt the effects of automation bias. Even people who drive for a living can display a startling lack of common sense when relying on satellite navigation. Ignoring road signs and other environmental cues, they’ll proceed down hazardous routes and sometimes end up crashing into low overpasses or getting stuck in the narrow streets of small towns. In Seattle in 2008, the driver of a twelve-foot-high bus carrying a high-school sports team ran into a concrete bridge with a nine-foot clearance. The top of the bus was sheared off, and twenty-one injured students had to be taken to the hospital. The driver told police that he had been following GPS instructions and “did not see” signs and flashing lights warning of the low bridge ahead.7

Automation bias is a particular risk for people who use decision-support software to guide them through analyses or diagnoses. Since the late 1990s, radiologists have been using computer-aided detection systems that highlight suspicious areas on mammograms and other x-rays. A digital version of an image is scanned into a computer, and pattern-matching software reviews it and adds arrows or other “prompts” to suggest areas for the doctor to inspect more closely. In some cases, the highlights aid in the discovery of disease, helping radiologists identify potential cancers they might otherwise have missed. But studies reveal that the highlights can also have the opposite effect. Biased by the software’s suggestions, doctors can end up giving cursory attention to the areas of an image that haven’t been highlighted, sometimes overlooking an early-stage tumor or other abnormality. The prompts can also increase the likelihood of false positives, when a radiologist calls a patient back for an unnecessary biopsy.

A recent review of mammography data, conducted by a team of researchers at City University London, indicates that automation bias has had a greater effect on radiologists and other image readers than was previously thought. The researchers found that while computer-aided detection tends to improve the reliability of “less discriminating readers” in assessing “comparatively easy cases,” it can actually degrade the performance of expert readers in evaluating tricky cases. When relying on the software, the experts are more likely to overlook certain cancers.8 The subtle biases inspired by computerized decision aids may, moreover, be “an inherent part of the human cognitive apparatus for reacting to cues and alarms.”9 By directing the focus of our eyes, the aids distort our vision.

Both complacency and bias seem to stem from limitations in our ability to pay attention. Our tendency toward complacency reveals how easily our concentration and awareness can fade when we’re not routinely called on to interact with our surroundings. Our propensity to be biased in evaluating and weighing information shows that our mind’s focus is selective and can easily be skewed by misplaced trust or even the appearance of seemingly helpful prompts. Both complacency and bias tend to become more severe as the quality and reliability of an automated system improve.10 Experiments show that when a system produces errors fairly frequently, we stay on high alert. We maintain awareness of our surroundings and carefully monitor information from a variety of sources. But when a system is more reliable, breaking down or making mistakes only occasionally, we get lazy. We start to assume the system is infallible.

Because automated systems usually work fine even when we lose awareness or objectivity, we are rarely penalized for our complacency or our bias. That ends up compounding the problems, as Parasuraman pointed out in a 2010 paper written with his German colleague Dietrich Manzey. “Given the usually high reliability of automated systems, even highly complacent and biased behavior of operators rarely leads to obvious performance consequences,” the scholars wrote. The lack of negative feedback can in time induce “a cognitive process that resembles what has been referred to as ‘learned carelessness.’ ”11 Think about driving a car when you’re sleepy. If you begin to nod off and drift out of your lane, you’ll usually go onto a rough shoulder, hit a rumble strip, or earn a honk from another motorist—signals that jolt you back awake. If you’re in a car that automatically keeps you within a lane by monitoring the lane markers and adjusting the steering, you won’t receive such warnings. You’ll drift into a deeper slumber. Then if something unexpected happens—an animal runs into the road, say, or a car stops short in front of you—you’ll be much more likely to have an accident. By isolating us from negative feedback, automation makes it harder for us to stay alert and engaged. We tune out even more.


OUR SUSCEPTIBILITY to complacency and bias explains how a reliance on automation can lead to errors of both commission and omission. We accept and act on information that turns out to be incorrect or incomplete, or we fail to see things that we should have seen. But the way that a reliance on computers weakens awareness and attentiveness also points to a more insidious problem. Automation tends to turn us from actors into observers. Instead of manipulating the yoke, we watch the screen. That shift may make our lives easier, but it can also inhibit our ability to learn and to develop expertise. Whether automation enhances or degrades our performance in a given task, over the long run it may diminish our existing skills or prevent us from acquiring new ones.

Since the late 1970s, cognitive psychologists have been documenting a phenomenon called the generation effect. It was first observed in studies of vocabulary, which revealed that people remember words much better when they actively call them to mind—when they generate them—than when they read them from a page. In one early and famous experiment, conducted by University of Toronto psychologist Norman Slamecka, people used flash cards to memorize pairs of antonyms, like hot and cold. Some of the test subjects were given cards that had both words printed in full, like this:

HOT : COLD

Others used cards that showed only the first letter of the second word, like this:

HOT : C

The people who used the cards with the missing letters performed much better in a subsequent test measuring how well they remembered the word pairs. Simply forcing their minds to fill in a blank, to act rather than observe, led to stronger retention of information.12

The generation effect, it has since become clear, influences memory and learning in many different circumstances. Experiments have revealed evidence of the effect in tasks that involve not only remembering letters and words but also remembering numbers, pictures, and sounds, completing math problems, answering trivia questions, and reading for comprehension. Recent studies have also demonstrated the benefits of the generation effect for higher forms of teaching and learning. A 2011 paper in Science showed that students who read a complex science assignment during a study period and then spent a second period recalling as much of it as possible, unaided, learned the material more fully than students who read the assignment repeatedly over the course of four study periods.13 The mental act of generation improves people’s ability to carry out activities that, as education researcher Britte Haugan Cheng has written, “require conceptual reasoning and requisite deeper cognitive processing.” Indeed, Cheng says, the generation effect appears to strengthen as the material generated by the mind becomes more complex.14

Psychologists and neuroscientists are still trying to figure out what goes on in our minds to give rise to the generation effect. But it’s clear that deep cognitive and memory processes are involved. When we work hard at something, when we make it the focus of attention and effort, our mind rewards us with greater understanding. We remember more and we learn more. In time, we gain know-how, a particular talent for acting fluidly, expertly, and purposefully in the world. That’s hardly a surprise. Most of us know that the only way to get good at something is by actually doing it. It’s easy to gather information quickly from a computer screen—or from a book, for that matter. But true knowledge, particularly the kind that lodges deep in memory and manifests itself in skill, is harder to come by. It requires a vigorous, prolonged struggle with a demanding task.

The Australian psychologists Simon Farrell and Stephan Lewandowsky made the connection between automation and the generation effect in a paper published in 2000. In Slamecka’s experiment, they pointed out, supplying the second word of an antonym pair, rather than forcing a person to call the word to mind, “can be considered an instance of automation because a human activity—generation of the word ‘COLD’ by participants—has been obviated by a printed stimulus.” By extension, “the reduction in performance that is observed when generation is replaced by reading can be considered a manifestation of complacency.”15 That helps illuminate the cognitive cost of automation. When we carry out a task or a job on our own, we seem to use different mental processes than when we rely on the aid of a computer. When software reduces our engagement with our work, and in particular when it pushes us into a more passive role as observer or monitor, we circumvent the deep cognitive processing that underpins the generation effect. As a result, we hamper our ability to gain the kind of rich, real-world knowledge that leads to know-how. The generation effect requires precisely the kind of struggle that automation seeks to alleviate.

In 2004, Christof van Nimwegen, a cognitive psychologist at Utrecht University in the Netherlands, began a series of simple but ingenious experiments to investigate software’s effects on memory formation and the development of expertise.16 He recruited two groups of people and had them play a computer game based on a classic logic puzzle called Missionaries and Cannibals. To complete the puzzle, a player has to transport across a hypothetical river five missionaries and five cannibals (or, in van Nimwegen’s version, five yellow balls and five blue ones), using a boat that can accommodate no more than three passengers at a time. The tricky part is that there can never be more cannibals than missionaries in one place, either in the boat or on the riverbanks. (If outnumbered, the missionaries become the cannibals’ dinner, one assumes.) Figuring out the series of boat trips that can best accomplish the task requires rigorous analysis and careful planning.
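The puzzle is small enough to state precisely in code. The sketch below is my own illustration, not van Nimwegen’s software: it runs a breadth-first search over states of the form (missionaries on the starting bank, cannibals on the starting bank, boat position) to find a shortest sequence of crossings for the five-and-five version with a three-passenger boat.

```python
from collections import deque

def solve(m_total=5, c_total=5, boat_cap=3):
    """Shortest solution to the missionaries-and-cannibals puzzle via
    breadth-first search. A state is (missionaries_left, cannibals_left,
    boat_on_left); the goal is everyone on the far bank."""
    def safe(m, c):
        # A bank is safe if no missionaries are present, or if the
        # missionaries are not outnumbered by cannibals.
        return m == 0 or m >= c

    start = (m_total, c_total, True)
    goal = (0, 0, False)
    parent = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            # Walk the parent links back to the start, then reverse.
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        m_left, c_left, boat_left = state
        # How many of each group stand on the boat's current bank.
        m_here = m_left if boat_left else m_total - m_left
        c_here = c_left if boat_left else c_total - c_left
        for m in range(m_here + 1):
            for c in range(c_here + 1):
                if not 1 <= m + c <= boat_cap:
                    continue                 # boat needs 1..cap passengers
                if m > 0 and c > m:
                    continue                 # cannibals outnumber in boat
                delta = -1 if boat_left else 1
                nm, nc = m_left + delta * m, c_left + delta * c
                nxt = (nm, nc, not boat_left)
                if nxt in parent:
                    continue                 # already explored
                if safe(nm, nc) and safe(m_total - nm, c_total - nc):
                    parent[nxt] = state
                    queue.append(nxt)
    return None                              # no safe sequence exists
```

Running `solve()` yields the shortest chain of states; the number of crossings is the path length minus one, and it is necessarily odd, since the boat must end on the opposite bank.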

One of van Nimwegen’s groups worked on the puzzle using software that provided step-by-step guidance, offering, for instance, on-screen prompts to highlight which moves were permissible and which weren’t. The other group used a rudimentary program that offered no assistance. As you’d expect, the people using the helpful software made faster progress at the outset. They could follow the prompts rather than having to pause before each move to recall the rules and figure out how they applied to the new situation. But as the game advanced, the players using the rudimentary software began to excel. In the end, they were able to work out the puzzle more efficiently, with significantly fewer wrong moves, than their counterparts who were receiving assistance. In his report on the experiment, van Nimwegen concluded that the subjects using the rudimentary program developed a clearer conceptual understanding of the task. They were better able to think ahead and plot a successful strategy. Those relying on guidance from the software, by contrast, often became confused and would “aimlessly click around.”

The cognitive penalty imposed by the software aids became even clearer eight months later, when van Nimwegen had the same people work through the puzzle again. Those who had earlier used the rudimentary software finished the game almost twice as quickly as their counterparts. The subjects using the basic program, he wrote, displayed “more focus” during the task and “better imprinting of knowledge” afterward. They enjoyed the benefits of the generation effect. Van Nimwegen and some of his Utrecht colleagues went on to conduct experiments involving more realistic tasks, such as using calendar software to schedule meetings and event-planning software to assign conference speakers to rooms. The results were the same. People who relied on the help of software prompts displayed less strategic thinking, made more superfluous moves, and ended up with a weaker conceptual understanding of the assignment. Those using unhelpful programs planned better, worked smarter, and learned more.17

What van Nimwegen observed in his laboratory—that when we automate cognitive tasks like problem solving, we hamper the mind’s ability to translate information into knowledge and knowledge into know-how—is also being documented in the real world. In many businesses, managers and other professionals depend on so-called expert systems to sort and analyze information and suggest courses of action. Accountants, for example, use decision-support software in corporate audits. The applications speed the work, but there are signs that as the software becomes more capable, the accountants become less so. One study, conducted by a group of Australian professors, examined the effects of the expert systems used by three international accounting firms. Two of the companies employed advanced software that, based on an accountant’s answers to basic questions about a client, recommended a set of relevant business risks to include in the client’s audit file. The third firm used simpler software that provided a list of potential risks but required the accountant to review them and manually select the pertinent ones for the file. The researchers gave accountants from each firm a test measuring their knowledge of risks in industries in which they had performed audits. Those from the firm with the less helpful software displayed a significantly stronger understanding of different forms of risk than did those from the other two firms. The decline in learning associated with advanced software affected even veteran auditors—those with more than five years of experience at their current firm.18

Other studies of expert systems reveal similar effects. The research indicates that while decision-support software can help novice analysts make better judgments in the short run, it can also make them mentally lazy. By diminishing the intensity of their thinking, the software retards their ability to encode information in memory, which makes them less likely to develop the rich tacit knowledge essential to true expertise.19 The drawbacks to automated decision aids can be subtle, but they have real consequences, particularly in fields where analytical errors have far-reaching repercussions. Miscalculations of risk, exacerbated by high-speed computerized trading programs, played a major role in the near meltdown of the world’s financial system in 2008. As Tufts University management professor Amar Bhidé has suggested, “robotic methods” of decision making led to a widespread “judgment deficit” among bankers and other Wall Street professionals.20 While it may be impossible to pin down the precise degree to which automation figured in the disaster, or in subsequent fiascos like the 2010 “flash crash” on U.S. exchanges, it seems prudent to take seriously any indication that a widely used technology may be diminishing the knowledge or clouding the judgment of people in sensitive jobs. In a 2013 paper, computer scientists Gordon Baxter and John Cartlidge warned that a reliance on automation is eroding the skills and knowledge of financial professionals even as computer-trading systems make financial markets more risky.21

Some software writers worry that their profession’s push to ease the strain of thinking is taking a toll on their own skills. Programmers today often use applications called integrated development environments, or IDEs, to aid them in composing code. The applications automate many tricky and time-consuming chores. They typically incorporate auto-complete, error-correction, and debugging routines, and the more sophisticated of them can evaluate and revise the structure of a program through a process known as refactoring. But as the applications take over the work of coding, programmers lose opportunities to practice their craft and sharpen their talent. “Modern IDEs are getting ‘helpful’ enough that at times I feel like an IDE operator rather than a programmer,” writes Vivek Haldar, a veteran software developer with Google. “The behavior all these tools encourage is not ‘think deeply about your code and write it carefully,’ but ‘just write a crappy first draft of your code, and then the tools will tell you not just what’s wrong with it, but also how to make it better.’ ” His verdict: “Sharp tools, dull minds.”22
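To make “refactoring” concrete, here is a toy example of my own devising (no particular IDE is implied): an automated extract-method refactoring restructures a repetitive first draft into a cleaner form without changing what the code computes, precisely the sort of tidying Haldar says programmers now leave to their tools.

```python
# Before: a "crappy first draft" with the same traversal written twice.
def report_before(prices):
    total = 0
    for p in prices:
        total += p          # sum of prices
    tax = 0
    for p in prices:
        tax += p * 0.1      # 10 percent tax on each price
    return total + tax

# After: the duplicated loop has been extracted into one helper,
# the kind of transformation an IDE can apply automatically.
def _scaled_sum(prices, factor):
    return sum(p * factor for p in prices)

def report_after(prices):
    return _scaled_sum(prices, 1.0) + _scaled_sum(prices, 0.1)
```

Both versions return the same totals; the refactoring changes only the structure of the program, which is why a tool can perform it safely and why the programmer no longer has to think it through.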

Google acknowledges that it has even seen a dumbing-down effect among the general public as it has made its search engine more responsive and solicitous, better able to predict what people are looking for. Google does more than correct our typos; it suggests search terms as we type, untangles semantic ambiguities in our requests, and anticipates our needs based on where we are and how we’ve behaved in the past. We might assume that as Google gets better at helping us refine our searching, we would learn from its example. We would become more sophisticated in formulating keywords and otherwise honing our online explorations. But according to the company’s top search engineer, Amit Singhal, the opposite is the case. In 2013, a reporter from the Observer newspaper in London interviewed Singhal about the many improvements that have been made to Google’s search engine over the years. “Presumably,” the journalist remarked, “we have got more precise in our search terms the more we have used Google.” Singhal sighed and, “somewhat wearily,” corrected the reporter: “ ‘Actually, it works the other way. The more accurate the machine gets, the lazier the questions become.’ ”23

More than our ability to compose sophisticated queries may be compromised by the ease of search engines. A series of experiments reported in Science in 2011 indicates that the ready availability of information online weakens our memory for facts. In one of the experiments, test subjects read a few dozen simple, true statements—“an ostrich’s eye is bigger than its brain,” for instance—and then typed them into a computer. Half the subjects were told the computer would save what they typed; the other half were told that the statements would be erased. Afterward, the participants were asked to write down all the statements they could recall. People who believed the information had been stored in the computer remembered significantly fewer of the facts than did those who assumed the statements had not been saved. Just knowing that information will be available in a database appears to reduce the likelihood that our brains will make the effort required to form memories. “Since search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally,” the researchers concluded. “When we need it, we will look it up.”24

For millennia, people have supplemented their biological memory with storage technologies, from scrolls and books to microfiche and magnetic tape. Tools for recording and distributing information underpin civilization. But external storage and biological memory are not the same thing. Knowledge involves more than looking stuff up; it requires the encoding of facts and experiences in personal memory. To truly know something, you have to weave it into your neural circuitry, and then you have to repeatedly retrieve it from memory and put it to fresh use. With search engines and other online resources, we’ve automated information storage and retrieval to a degree far beyond anything seen before. The brain’s seemingly innate tendency to offload, or externalize, the work of remembering makes us more efficient thinkers in some ways. We can quickly call up facts that have slipped our mind. But that same tendency can become pathological when the automation of mental labor makes it too easy to avoid the work of remembering and understanding.

Google and other software companies are, of course, in the business of making our lives easier. That’s what we ask them to do, and it’s why we’re devoted to them. But as their programs become adept at doing our thinking for us, we naturally come to rely more on the software and less on our own smarts. We’re less likely to push our minds to do the work of generation. When that happens, we end up learning less and knowing less. We also become less capable. As the University of Texas computer scientist Mihai Nadin has observed, in regard to modern software, “The more the interface replaces human effort, the lower the adaptivity of the user to new situations.”25 In place of the generation effect, computer automation gives us the reverse: a degeneration effect.


BEAR WITH me while I draw your attention back to that ill-fated, slicker-yellow Subaru with the manual transmission. As you’ll recall, I went from hapless gear-grinder to reasonably accomplished stick-handler with just a few weeks’ practice. The arm and leg movements my dad had taught me, cursorily, now seemed instinctive. I was hardly an expert, but shifting was no longer a struggle. I could do it without thinking. It had become, well, automatic.

My experience provides a model for the way humans gain complicated skills. We often start off with some basic instruction, received directly from a teacher or mentor or indirectly from a book or manual or YouTube video, which transfers to our conscious mind explicit knowledge about how a task is performed: do this, then this, then this. That’s what my father did when he showed me the location of the gears and explained when to step on the clutch. As I quickly discovered, explicit knowledge goes only so far, particularly when the task has a psychomotor component as well as a cognitive one. To achieve mastery, you need to develop tacit knowledge, and that comes only through real experience—by rehearsing a skill, over and over again. The more you practice, the less you have to think about what you’re doing. Responsibility for the work shifts from your conscious mind, which tends to be slow and halting, to your unconscious mind, which is quick and fluid. As that happens, you free your conscious mind to focus on the more subtle aspects of the skill, and when those, too, become automatic, you proceed up to the next level. Keep going, keep pushing yourself, and ultimately, assuming you have some native aptitude for the task, you’re rewarded with expertise.

This skill-building process, through which talent comes to be exercised without conscious thought, goes by the ungainly name automatization, or the even more ungainly name proceduralization. Automatization involves deep and widespread adaptations in the brain. Certain brain cells, or neurons, become fine-tuned for the task at hand, and they work in concert through the electrochemical connections provided by synapses. The New York University cognitive psychologist Gary Marcus offers a more detailed explanation: “At the neural level, proceduralization consists of a wide array of carefully coordinated processes, including changes to both gray matter (neural cell bodies) and white matter (axons and dendrites that connect between neurons). Existing neural connections (synapses) must be made more efficient, new dendritic spines may be formed, and proteins must be synthesized.”26 Through the neural modifications of automatization, the brain develops automaticity, a capacity for rapid, unconscious perception, interpretation, and action that allows mind and body to recognize patterns and respond to changing circumstances instantaneously.

All of us experienced automatization and achieved automaticity when we learned to read. Watch a young child in the early stages of reading instruction, and you’ll witness a taxing mental struggle. The child has to identify each letter by studying its shape. She has to sound out how a set of letters combine to form a syllable and how a series of syllables combine to form a word. If she’s not already familiar with the word, she has to figure out or be told its meaning. And then, word by word, she has to interpret the meaning of a sentence, often resolving the ambiguities inherent to language. It’s a slow, painstaking process, and it requires the full attention of the conscious mind. Eventually, though, letters and then words get encoded in the neurons of the visual cortex—the part of the brain that processes sight—and the young reader begins to recognize them without conscious thought. Through a symphony of brain changes, reading becomes effortless. The greater the automaticity the child achieves, the more fluent and accomplished a reader she becomes.27

Whether it’s Wiley Post in a cockpit, Serena Williams on a tennis court, or Magnus Carlsen at a chessboard, the otherworldly talent of the virtuoso springs from automaticity. What looks like instinct is hard-won skill. Those changes in the brain don’t happen through passive observation. They’re generated through repeated confrontations with the unexpected. They require what the philosopher of mind Hubert Dreyfus terms “experience in a variety of situations, all seen from the same perspective but requiring different tactical decisions.”28 Without lots of practice, lots of repetition and rehearsal of a skill in different circumstances, you and your brain will never get really good at anything, at least not anything complicated. And without continuing practice, any talent you do achieve will get rusty.

It’s popular now to suggest that practice is all you need. Work at a skill for ten thousand hours or so, and you’ll be blessed with expertise—you’ll become the next great pastry chef or power forward. That, unhappily, is an exaggeration. Genetic traits, both physical and intellectual, do play an important role in the development of talent, particularly at the highest levels of achievement. Nature matters. Even our desire and aptitude for practice have, as Marcus points out, a genetic component: “How we respond to experience, and even what type of experience we seek, are themselves in part functions of the genes we are born with.”29 But if genes establish, at least roughly, the upper bounds of individual talent, it’s only through practice that a person will ever reach those limits and fulfill his or her potential. While innate abilities make a big difference, write psychology professors David Hambrick and Elizabeth Meinz, “research has left no doubt that one of the largest sources of individual differences in performance on complex tasks is simply what and how much people know: declarative, procedural, and strategic knowledge acquired through years of training and practice in a domain.”30

Automaticity, as its name makes clear, can be thought of as a kind of internalized automation. It’s the body’s way of making difficult but repetitive work routine. Physical movements and procedures get programmed into muscle memory; interpretations and judgments are made through the instant recognition of environmental patterns apprehended by the senses. The conscious mind, scientists discovered long ago, is surprisingly cramped, its capacity for taking in and processing information limited. Without automaticity, our consciousness would be perpetually overloaded. Even very simple acts, such as reading a sentence in a book or cutting a piece of steak with a knife and fork, would strain our cognitive capabilities. Automaticity gives us more headroom. It increases, to put a different spin on Alfred North Whitehead’s observation, “the number of important operations which we can perform without thinking about them.”

Tools and other technologies, at their best, do something similar, as Whitehead appreciated. The brain’s capacity for automaticity has limits of its own. Our unconscious mind can perform a lot of functions quickly and efficiently, but it can’t do everything. You might be able to memorize the times table up to twelve or even twenty, but you would probably have trouble memorizing it much beyond that. Even if your brain didn’t run out of memory, it would probably run out of patience. With a simple pocket calculator, though, you can automate even very complicated mathematical procedures, ones that would tax your unaided brain, and free up your conscious mind to consider what all that math adds up to. But that only works if you’ve already mastered basic arithmetic through study and practice. If you use the calculator to bypass learning, to carry out procedures that you haven’t learned and don’t understand, the tool will not open up new horizons. It won’t help you gain new mathematical knowledge and skills. It will simply be a black box, a mysterious number-producing mechanism. It will be a barrier to higher thought rather than a spur to it.

That’s what computer automation often does today, and it’s why Whitehead’s observation has become misleading as a guide to technology’s consequences. Rather than extending the brain’s innate capacity for automaticity, automation too often becomes an impediment to automatization. In relieving us of repetitive mental exercise, it also relieves us of deep learning. Both complacency and bias are symptoms of a mind that is not being challenged, that is not fully engaged in the kind of real-world practice that generates knowledge, enriches memory, and builds skill. The problem is compounded by the way computer systems distance us from direct and immediate feedback about our actions. As the psychologist K. Anders Ericsson, an expert on talent development, points out, regular feedback is essential to skill building. It’s what lets us learn from our mistakes and our successes. “In the absence of adequate feedback,” Ericsson explains, “efficient learning is impossible and improvement only minimal even for highly motivated subjects.”31

Automaticity, generation, flow: these mental phenomena are diverse, they’re complicated, and their biological underpinnings are understood only fuzzily. But they are all related, and they tell us something important about ourselves. The kinds of effort that give rise to talent—characterized by challenging tasks, clear goals, and direct feedback—are very similar to those that provide us with a sense of flow. They’re immersive experiences. They also describe the kinds of work that force us to actively generate knowledge rather than passively take in information. Honing our skills, enlarging our understanding, and achieving personal satisfaction and fulfillment are all of a piece. And they all require tight connections, physical and mental, between the individual and the world. They all require, to quote the American philosopher Robert Talisse, “getting your hands dirty with the world and letting the world kick back in a certain way.”32 Automaticity is the inscription the world leaves on the active mind and the active self. Know-how is the evidence of the richness of that inscription.

From rock climbers to surgeons to pianists, Mihaly Csikszentmihalyi explains, people who “routinely find deep enjoyment in an activity illustrate how an organized set of challenges and a corresponding set of skills result in optimal experience.” The jobs or hobbies they engage in “afford rich opportunities for action,” while the skills they develop allow them to make the most of those opportunities. The ability to act with aplomb in the world turns all of us into artists. “The effortless absorption experienced by the practiced artist at work on a difficult project always is premised upon earlier mastery of a complex body of skills.”33 When automation distances us from our work, when it gets between us and the world, it erases the artistry from our lives.

Interlude, with Dancing Mice

“SINCE 1903 I HAVE HAD UNDER OBSERVATION CONSTANTLY from two to one hundred dancing mice.” So confessed the Harvard psychologist Robert M. Yerkes in the opening chapter of his 1907 book The Dancing Mouse, a 290-page paean to a rodent. But not just any rodent. The dancing mouse, Yerkes predicted, would prove as important to the behaviorist as the frog was to the anatomist.

When a local Cambridge doctor presented a pair of Japanese dancing mice to the Harvard Psychological Laboratory as a gift, Yerkes was underwhelmed. It seemed “an unimportant incident in the course of my scientific work.” But in short order he became infatuated with the tiny creatures and their habit of “whirling around on the same spot with incredible rapidity.” He bred scores of them, assigning each a number and keeping a meticulous log of its markings, gender, birth date, and ancestry. A “really admirable animal,” the dancing mouse was, he wrote, smaller and weaker than the average mouse—it was barely able to hold itself upright or “cling to an object”—but it proved “an ideal subject for the experimental study of many of the problems of animal behavior.” The breed was “easily cared for, readily tamed, harmless, incessantly active, and it lends itself satisfactorily to a large number of experimental situations.”1

At the time, psychological research using animals was still new. Ivan Pavlov had only begun his experiments on salivating dogs in the 1890s, and it wasn’t until 1900 that an American graduate student named Willard Small dropped a rat into a maze and watched it scurry about. With his dancing mice, Yerkes greatly expanded the scope of animal studies. As he catalogued in The Dancing Mouse, he used the rodents as test subjects in the exploration of, among other things, balance and equilibrium, vision and perception, learning and memory, and the inheritance of behavioral traits. The mice were “experiment-impelling,” he reported. “The longer I observed and experimented with them, the more numerous became the problems which the dancers presented to me for solution.”2

Early in 1906, Yerkes began what would turn out to be his most important and influential experiments on the dancers. Working with his student John Dillingham Dodson, he put, one by one, forty of the mice into a wooden box. At the far end of the box were two passageways, one painted white, the other black. If a mouse tried to enter the black passageway, it received, as Yerkes and Dodson later wrote, “a disagreeable electric shock.” The intensity of the jolt varied. Some mice were given a weak shock, others were given a strong one, and still others were given a moderate one. The researchers wanted to see if the strength of the stimulus would influence the speed with which the mice learned to avoid the black passage and go into the white one. What they discovered surprised them. The mice receiving the weak shock were relatively slow to distinguish the white and the black passageways, as might be expected. But the mice receiving the strong shock exhibited equally slow learning. The rodents quickest to understand their situation and modify their behavior were the ones given a moderate shock. “Contrary to our expectations,” the scientists reported, “this set of experiments did not prove that the rate of habit-formation increases with increase in the strength of the electric stimulus up to the point at which the shock becomes positively injurious. Instead an intermediate range of intensity of stimulation proved to be most favorable to the acquisition of a habit.”3

A subsequent series of tests brought another surprise. The scientists put a new group of mice through the same drill, but this time they increased the brightness of the light in the white passageway and dimmed the light in the black one, strengthening the visual contrast between the two. Under this condition, the mice receiving the strongest shock were the quickest to avoid the black doorway. Learning didn’t fall off as it had in the first go-round. Yerkes and Dodson traced the difference in the rodents’ behavior to the fact that the setup of the second experiment had made things easier for the animals. Thanks to the greater visual contrast, the mice didn’t have to think as hard in distinguishing the passageways and associating the shock with the dark corridor. “The relation of the strength of electrical stimulus to rapidity of learning or habit-formation depends upon the difficultness of the habit,” they explained.4 As a task becomes harder, the optimum amount of stimulation decreases. In other words, when the mice faced a really tough challenge, both an unusually weak stimulus and an unusually strong stimulus impeded their learning. In something of a Goldilocks effect, a moderate stimulus inspired the best performance.

Since its publication in 1908, the paper that Yerkes and Dodson wrote about their experiments, “The Relation of Strength of Stimulus to Rapidity of Habit-Formation,” has come to be recognized as a landmark in the history of psychology. The phenomenon they discovered, known as the Yerkes-Dodson law, has been observed, in various forms, far beyond the world of dancing mice and differently colored doorways. It affects people as well as rodents. In its human manifestation, the law is usually depicted as a bell curve that plots the relation of a person’s performance at a difficult task to the level of mental stimulation, or arousal, the person is experiencing.

At very low levels of stimulation, the person is so disengaged and uninspired as to be moribund; performance flat-lines. As stimulation picks up, performance strengthens, rising steadily along the left side of the bell curve until it reaches a peak. Then, as stimulation continues to intensify, performance drops off, descending steadily down the right side of the bell. When stimulation reaches its most intense level, the person essentially becomes paralyzed with stress; performance again flat-lines. Like dancing mice, we humans learn and perform best when we’re at the peak of the Yerkes-Dodson curve, where we’re challenged but not overwhelmed. At the top of the bell is where we enter the state of flow.
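The shape of the curve, together with the second experiment’s finding that harder tasks peak at lower stimulation, can be sketched as a toy model. (The Gaussian form and its parameters here are illustrative assumptions for the sake of the sketch—nothing Yerkes and Dodson fit to data.)

```python
import math

def performance(arousal, difficulty):
    """Toy Yerkes-Dodson curve: performance rises with arousal, peaks,
    then falls. The Gaussian shape, and the rule that the optimal
    arousal scales as 1/difficulty, are illustrative assumptions."""
    optimum = 1.0 / difficulty          # harder task -> lower optimal arousal
    width = 0.5 / difficulty            # harder task -> narrower sweet spot
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

# For an easy task (difficulty 1), moderate arousal of 1.0 is best;
# both very low and very high arousal yield weaker performance.
moderate = performance(1.0, 1.0)
too_little = performance(0.1, 1.0)
too_much = performance(2.5, 1.0)
```

In this sketch, doubling the difficulty halves the arousal level at which performance peaks—a crude stand-in for the researchers’ observation that “as a task becomes harder, the optimum amount of stimulation decreases.”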

The Yerkes-Dodson law has turned out to have particular pertinence to the study of automation. It helps explain many of the unexpected consequences of introducing computers into workplaces and processes. In automation’s early days, it was thought that software, by handling routine chores, would reduce people’s workload and enhance their performance. The assumption was that workload and performance were inversely correlated. Ease a person’s mental strain, and she’ll be smarter and sharper on the job. The reality has turned out to be more complicated. Sometimes, computers succeed in moderating workload in a way that allows a person to excel at her work, devoting her full attention to the most pressing tasks. In other cases, automation ends up reducing workload too much. The worker’s performance suffers as she drifts to the left side of the Yerkes-Dodson curve.

We all know about the ill effects of information overload. It turns out that information underload can be equally debilitating. However well intentioned, making things easy for people can backfire. Human-factors scholars Mark Young and Neville Stanton have found evidence that a person’s “attentional capacity” actually “shrinks to accommodate reductions in mental workload.” In the operation of automated systems, they argue, “underload is possibly of greater concern [than overload], as it is more difficult to detect.”5 Researchers worry that the lassitude produced by information underload is going to be a particular danger with coming generations of automotive automation. As software takes over more steering and braking chores, the person behind the wheel won’t have enough to do and will tune out. Making matters worse, the driver will likely have received little or no training in the use and risks of automation. Some routine accidents may be avoided, but we’re going to end up with even more bad drivers on the road.

In the worst cases, automation actually places added and unexpected demands on people, burdening them with extra work and pushing them to the right side of the Yerkes-Dodson curve. Researchers refer to this as the “automation paradox.” As Mark Scerbo, a human-factors expert at Virginia’s Old Dominion University, explains, “The irony behind automation arises from a growing body of research demonstrating that automated systems often increase workload and create unsafe working conditions.”6 If, for example, the operator of a highly automated chemical plant is suddenly plunged into a fast-moving crisis, he may be overwhelmed by the need to monitor information displays and manipulate various computer controls while also following checklists, responding to alerts and alarms, and taking other emergency measures. Instead of relieving him of distractions and stress, computerization forces him to deal with all sorts of additional tasks and stimuli. Similar problems crop up during cockpit emergencies, when pilots are required to input data into their flight computers and scan information displays even as they’re struggling to take manual control of the plane. Anyone who’s gone off course while following directions from a mapping app knows firsthand how computer automation can cause sudden spikes in workload. It’s not easy to fiddle with a smartphone while driving a car.

What we’ve learned is that automation has a sometimes-tragic tendency to increase the complexity of a job at the worst possible moment—when workers already have too much to handle. The computer, introduced as an aid to reduce the chances of human error, ends up making it more likely that people, like shocked mice, will make the wrong move.