4
THE LIMITS OF YOUR MIND

ON MARCH 23, 2005, Warren Briggs got into his car and set off for work. The commute to the British Petroleum refinery in Texas City usually took thirty to forty-five minutes, but that day it felt longer. When he got to the plant, just before six A.M., he could scarcely remember driving there. Warren had been working twelve-hour shifts, seven days a week, for twenty-nine days in a row. He couldn’t remember the last time he’d had time off. He had mixed feelings about his shifts. Twelve hours off meant more time with the kids. But he still wasn’t getting much sleep.

He spoke briefly to the departing night-shift operator and then read the log book to prepare for the start-up. There was just a one-line logbook entry: “isom brought in some raff to unit, to pack raff with.” That told him more or less nothing, he thought, grumpy as he started to work.

In front of him sat the control board for the ISOM/NDU/AU2 complex: twelve monitors divided into twenty-four screens. Some screens had pages and pages of information behind them; others were pretty simple alarms. Visitors said it looked like something from NASA; Warren wished it were that exciting.

Warren’s boss was late and, when he got in, he was busy. He was always busy because he had a stack of paperwork and a bunch of contracting crews to look after. A lot of the men hated contractors, said they were unsafe and cut corners. Warren didn’t mind. Those guys needed to feed their families same as he did. It wasn’t their fault BP brought them in with fewer benefits, lower pay. There weren’t that many jobs around Texas City to choose from.

“We’re running so thin”: that was the phrase everybody used. At first it just meant the pipes were wearing thin but now the whole place was wearing out. The plant and the people. So much cost-cutting, you could do the form-filling but not the repairs. Warren’s supervisor was buried in paperwork and he had two new operators to train. He probably wouldn’t be around much during the shift.

With no one to relieve him, Warren ate his lunch at his desk in front of the control board. Some weird pressure spikes caught his attention and he wanted to keep an eye on them. It was boring, lonely work, cooped up in a darkened room. The equipment he was supposed to be controlling stood outside in the Texas sunshine, one small part of the vast refinery. When people first saw it, they’d say it looked like something on the moon, a space-age settlement full of towers and spheres that went on for miles. Warren didn’t share their romance. It was just a refinery, making 3 percent of America’s gasoline. That was a lot of cars.

The isomerization unit Warren looked after boosted octane levels in the gasoline produced at the plant. Flammable hydrocarbons, or raffinate—the “raff” in the log book—went into a 170-foot tower that would distill and separate gas components. Higher octane meant higher performance and higher prices. That was the name of the game. Starting the unit was always a tricky time, when it would have been nice to have an extra pair of eyes. In the old days, there had been two operators, but cost cutting had changed all that. Then they’d added a third unit—the NDU—and said it was so easy to operate that you didn’t need an extra person. So instead of two people for two refinery units, now it was just Warren, on his own, looking after all three.

Around 12:40, an alarm went off, but Warren couldn’t figure out where the high pressure was coming from. He decided to open a manual chain valve to vent some of the gases to the emergency relief system and to turn off two burners in the furnace. Just after one P.M., Warren’s boss called in to see how things were going. When Warren mentioned the weird pressure spikes, his boss suggested opening a bypass valve to the blowdown drum to relieve some of the pressure. What neither of them knew was that the isomerization tower was overfilled, its liquid level fifteen times higher than it was meant to be. But Warren’s control panel wasn’t configured to display flows into and out of the tower on the same screen, and nowhere did it calculate the total liquid in the tower. Running thin.

An adventure, one of the guys had called working there. An adventure? Sure, he said: Each morning when I walk into this place, I wonder if today’s the day I’m gonna die. That wasn’t Warren’s idea of an adventure.

At 1:14 P.M., three emergency valves opened in the tower, sending nearly 52,000 gallons of hot, flammable liquid to the blowdown drum. When the liquid overflowed into a process sewer, it set off alarms in the control room. But the high-level alarm didn’t go off. While Warren sat in front of his twenty-four screens, a geyser of liquid and vapor erupted from the top of the stack, propelling nearly a tanker full of hot gasoline up into the air and then down to the ground like a tall, ungainly fountain. Within ninety seconds the whole unit and all the contractors’ trailers were engulfed in a vast flammable vapor cloud.1 Then a nearby car backfired.

A mile away, Joe Bilancich was negotiating for a new apprentice scheme. He felt one concussion in the room, then another. Everybody moved to the window. Flames and smoke filled their view while pieces of pipe and metal rained down on the ground.

Forty-five minutes from the site, Eva Rowe heard the blast. Both her parents worked at the plant; she called them at once. No answer.

Fifteen people died that day at BP’s Texas City site, killed by the “blunt-force trauma” of the explosion. Eva Rowe lost both of her parents. It was one of the worst industrial accidents in American history.

When investigators, lawyers, and executives arrived to determine the cause of the tragedy, everybody talked about blind spots: problems, processes, and warnings that everybody could see but somehow managed not to see. Some of the causes were complex and technical, but some were not. What happened to Warren Briggs was simple and obvious and not unique to oil refineries. As we know from the banking crisis, companies don’t have to kill people to be dangerous.

According to the U.S. Chemical Safety Board (CSB), which spent two years investigating the accident, Briggs was one of the most rested members of his team. The night lead operator, who had filled the tower from the control room before Briggs had come on duty, had worked thirty-three consecutive days, while the day lead operator—who was training two new operators, dealing with contractors, and working to get a replacement part to finish the ISOM turnaround work—had been on duty for thirty-seven consecutive days. In other words, they were all dog tired. The CSB estimated that Briggs was getting 5.5 hours of sleep per night, and therefore was suffering from what they call an accumulated “sleep debt” of about a month and a half. That didn’t just mean that he felt lousy. “It is common for a person experiencing fatigue to be more rigid in thinking, have greater difficulty responding to changing or abnormal circumstances, and take longer to reason correctly,” said the CSB. Focused attention on one thing, to the exclusion of everything else—often referred to as cognitive fixation or cognitive tunnel vision—is a typical performance effect of fatigue.2

Briggs and his operators could not see the problem. They were simply too tired. The UK Health and Safety Executive found that subjective levels of fatigue increase with consecutive early shifts (those starting around six A.M.). By the third consecutive early-morning shift, it said, fatigue is 30 percent higher than on the first day; by the fifth, 60 percent higher; by the seventh, 75 percent higher. The study doesn’t even contemplate what happens to people’s minds when they’ve been working like this for thirty days nonstop.

Fatigue, overwork, and burnout are not unique to the oil and gas industries. On November 11, 2004, the computer games company Electronic Arts woke up to find itself the target of a blogger called “EASpouse” who complained with shocking eloquence about the hours EA expected its programmers to work. Addressing herself to then-CEO (now chairman) Larry Probst, the spouse asked, “You do realize what you’re doing to your people, right? And you do realize that they ARE people, with physical limits, emotional lives, and families, right? Voices and talents and senses of humor and all that? That when you keep our husbands and wives and children in the office for ninety hours a week, sending them home exhausted and numb and frustrated with their lives, it’s not just them you’re hurting, but everyone around them, everyone who loves them? When you make your profit calculations and your cost analyses, you know that a great measure of that cost is being paid in raw human dignity, right?”

Electronic Arts—whose tagline at the time was “Challenge Everything”—is the world’s leading producer of computer games. The year 2004 was a good one for the company. The Sims, Lord of the Rings, FIFA, and Medal of Honor generated record revenues ($3 billion) and record profits ($776 million).3 The year was particularly noteworthy because technology breakthroughs—faster processors and improved screen resolution—resulted in a boom for handheld game devices. The introduction of Sony’s PSP promised even greater opportunity. But writing in the company’s annual report about the challenges the company faced going into 2005, Probst made no mention of a workforce that was fried, or of an engineering team with a turnover rate of nearly 50 percent.

When EASpouse’s essay, “EA: The Human Study,” was posted online, it tore through the computer game community like wildfire. “I was so angry and in such pain, I thought, either I get a response to this or there’s something seriously wrong with the world!” said Erin Hoffman, its author. “It was students and gamers who propelled the thing. Within forty-eight hours, everyone read it. But it was students who were most angry! They dreamed of working in this industry—and were desperately disappointed to learn how awful it was.”

Today, Hoffman is in less pain, but she’s still angry. The gist of her complaint then was that EA routinely scheduled engineers to work eighty-five-hour weeks. When her fiancé, Lan, had interviewed for a job at EA, neither of them had been naïve. As veterans of the computer games industry, they knew that just before a product shipped, most teams went into crunch mode, which involved a lot of long hours.

“They asked Lan in one of the interviews: ‘How do you feel about working long hours?’ It’s just a part of the game industry—few studios can avoid a crunch as deadlines loom, so we thought nothing of it. When asked for specifics about what ‘working long hours’ meant, the interviewers coughed and glossed on to the next question; now we know why.”

Crunch is only supposed to be the mode of working at the tail end of a project. At EA, the team started by doing eight-hour days, six days a week. But that quickly turned into twelve hours, six days a week, and then into eleven hours a day, seven days a week. Crunch wasn’t an emergency; it was the standard. Watching what was happening to her fiancé horrified Erin. “After a certain number of hours, the eyes start to lose focus; after a certain number of weeks with only one day off, fatigue starts to accrue and accumulate exponentially. Bad things happen to one’s physical, emotional, and mental health. The team is rapidly beginning to introduce as many flaws as they are removing. The bug rate soared in crunch.”

As the debate inspired by Erin’s essay continued, she and her friends became better informed about the iron laws of human productivity. The forty-hour week is there for a reason; it gets the best work from people. The first four hours of work are the most productive and, as the day wears on, everyone becomes less alert, less focused, and prone to more mistakes. In 1908, the first known study, conducted by Ernst Abbe,5 one of the founders of the Zeiss lens laboratory, concluded that reducing the working day from nine to eight hours actually increased output. Henry Ford, who studied productivity issues obsessively, reached the same conclusion and infuriated his manufacturing colleagues when, in 1926, he had the audacity to introduce a forty-hour work week. Subsequent studies by Foster Wheeler (1968), Procter & Gamble (1980), members of the construction industry, and many, many more show that, as the days get longer, productivity declines. No study has ever convincingly argued otherwise.6

Once you are doing sixty hours a week or more, you don’t just get tired, you make mistakes; the time you spend rectifying errors consumes all the extra hours you worked. The classic, and comic, example of this was Frank Gilbreth, the efficiency-obsessed father in Cheaper by the Dozen. He found he could shave faster if he used two razors—but then he wasted all his saved time covering the cuts with Band-Aids.

In software companies, a lot of developers like working late; they relish the silence that comes when the sales and marketing folk go home. But that means they need to start later, too. Otherwise, the extra hours produce only errors. Software bugs or accidental file deletions can have knock-on effects that take much longer to repair than the original code took to write. EA’s working patterns weren’t just inhumane; they were counterproductive.

Then there is the sleep factor. Missing just one night’s sleep has a noticeable impact on the brain’s ability to function, as Dardo Tomasi7 and his colleagues at the Brookhaven National Laboratory discovered when they took fourteen healthy, nonsmoking right-handed men and made half of them stay awake through the night. In the morning, both rested and groggy subjects were put through a series of tests that involved tracking ten balls on a screen. As they completed the tests, an fMRI scanner took pictures of their brains to see how the rested brain differed from the one that was deprived of sleep. They found, not so surprisingly, that the sleepier the subjects, the lower their accuracy in the tests. But it was the detail that was most interesting.

The scientists found that two key areas of the brain—the parietal lobe and the occipital lobe—were less active in the sleep-deprived participants. The parietal lobe in the brain integrates information from the senses and is also involved in our knowledge of numbers and manipulation of objects. The occipital lobe is involved in visual processing. So both areas are highly involved in processing visual information and numbers. What was Warren Briggs of BP looking at on his twenty-four screens? Visual information and numbers. What do computer-game engineers work with all the time? Visual information and numbers. The higher-order brain activity that was most needed in those jobs was the first thing to go.

While the parietal and occipital lobes were less active, the thalamus, on the other hand, was very busy in the sleepy subjects. Scientists hypothesize that the thalamus attempts to compensate for the reduced activity in the parietal and occipital lobes. The thalamus sits at the center of the brain and is responsible for the regulation of consciousness, sleep, and alertness. It was, in other words, working extra hard to stay alert. All the energy you might want to concentrate on solving a hard problem is devoted to the challenge of staying awake.

In evolutionary terms, this makes sense. If you’re driven to find food, you need to stay awake and search, not contemplate recipes. But now that, for most of us, work isn’t primarily about physical endurance, mere wakefulness is not enough. These and other studies indicate that, yes, we can stay awake for long periods of time with little sleep—but what we lose, progressively, is the ability to think. “A tired worker tends to perform like an unskilled worker.”8 Or, you could say, a smart worker starts to work like a mindless one.

Moreover, sleep deprivation starts to starve the brain. There is a reason why we start to eat comfort food—doughnuts, candy—when we’re tired: our brains crave sugar. After twenty-four hours of sleep deprivation, there is an overall reduction of 6 percent in glucose reaching the brain.9 But the loss isn’t shared equally; the parietal lobe and the prefrontal cortex lose 12 to 14 percent of their glucose. And those are the areas we need most for thinking: for distinguishing between ideas, for social control, and to be able to tell the difference between good and bad.10

To Charles Czeisler, professor of sleep medicine at Harvard Medical School, encouraging a culture of sleepless machismo is downright dangerous.11 He’s amazed by today’s work cultures that glorify sleeplessness, the way the age of Mad Men once glorified people who could hold their drink.

“We now know,” says Czeisler, “that twenty-four hours without sleep or a week of sleeping four or five hours a night induces an impairment equivalent to a blood alcohol level of point one percent. We would never say ‘This person is a great worker! He’s drunk all the time!’ yet we continue to celebrate people who sacrifice sleep.”12

A blood alcohol level of 0.1 percent is higher than all legal limits for alcohol while driving. The U.S. and UK limits are 0.08 percent. At 0.1 percent, you are prone to mood swings; you become emotionally overexpressive; you lose peripheral vision, depth perception, and distance acuity; and you exhibit poor reasoning.

Czeisler’s research team found that hospital interns scheduled to work for twenty-four hours increased their chances of stabbing themselves with a needle or scalpel by 61 percent, their risk of crashing a car by 168 percent, and their risk of a near miss by 460 percent. Twenty percent of car crashes are attributed to nothing more complex than lack of sleep. Since companies vigorously enforce alcohol policies, Czeisler argues they should do the same with corporate sleep policies.

But they do just the opposite.

Erin’s fiancé Lan joined a successful class-action suit against EA’s working practices and he left the company. The “spouse” turned out to be a little premature: they never married and have since split up. Erin sits on the board of the International Games Developers Association (IGDA) but says the industry hasn’t learned much: Engineers are still too tired to see straight and the executives who manage them are too tired to see the problem.

“EA changed for a while, but only really because one group saw this as an opportunity to get rid of the guys responsible for the crazy hours. So there was a big political bloodbath and a new regime. Everything got better for six months and then it started all over again. They’re destroying people who should become our top developers! A lot of studios now won’t hire former EA employees because they’re so burned out; they say there’s just too much work involved in rehabbing them.”

Even rested and alert, you may not be able to see what’s right in front of you. In one of psychology’s most famous and stupefying experiments, Dr. Daniel Simons made a video at Harvard that set out to test just how much the mind can see when it’s busy.

“It started as a lark,” Simons recalled.13 “There had been earlier experiments into visual cognition but in all of them, the display was so weird that it didn’t feel like real life. So I thought: What if we make this whole thing live? This was quite a lot of fun—I’m a big fan of doing fun research. I do the boring stuff, too, but the purpose here was to ask: How extreme can you make this and illustrate the point?”

(Before you read further, you might want to try the experiment for yourself on YouTube: http://www.youtube.com/watch?v=vJG698U2Mvo.)

In 1999, together with Chris Chabris, Simons made a short film of Harvard students moving around and passing basketballs. One team wore white shirts, the other wore black. Although Simons had appeared in some of his earlier video experiments, this time he chose to stay behind the camera and film it. When they finished making the film, Chabris and Simons asked volunteers to watch it and count the number of passes made by players wearing white. Less than a minute later, when the video ended, they asked viewers if they’d seen anything else. About half said no, they’d seen nothing.14

What they had missed was a female student wearing a full-body gorilla suit who walks into the scene, stops in the middle of the frame, faces the camera, thumps her chest, and walks off. She is on screen for approximately nine seconds.

The experiment has been shown repeatedly, around the world, in front of diverse audiences. I first saw it in Dublin, in an audience full of executives. Like them, I was so focused on counting the passes that I never saw the gorilla.

Simons was so stunned by the result that he says that for several years afterward, he still kept expecting people to spot the gorilla. But the results were always the same. In 1999, Simons and his colleagues published an account of the experiment entitled “Gorillas in Our Midst” and, in 2004, they won an Ig Nobel Prize for “achievements that first make people laugh and then make them think.” Simons has since gone on to make an academic career studying how we pay attention.

“We experience far less of our visual world than we think we do. We feel like we are going to take in what’s around us. But we don’t. We pay attention to what we are told to attend to, or what we’re looking for, or what we already know. Top-down factors play a big role. Fashion designers will notice clothes. Engineers will notice mechanics. But what we see is amazingly limited.”

We see what we expect to see, what we’re looking for. And we can’t see all that much. I asked Simons whether some people saw more than others.

“There is really limited evidence for that. People who are experienced basketball players are slightly better at seeing what’s happening in the video—but that’s probably because they’re more accustomed to watching passes; it isn’t so hard for them to read what’s going on. You can train yourself to focus on more than one spot. You might improve your eye muscles somewhat. But the limits are pretty fixed. There’s a physical and an evolutionary barrier. You can’t change the limits of your mind.”

Simons’ video is used for all kinds of safety training. “The airport security people, you know—they can find what they’re looking for but they won’t find what they’re not looking for, no matter how dangerous it is.” Trained baggage screeners are better than Simons’ respondents at spotting weapons, but not much: a third of the time, they will fail to spot weapons of any kind.15

Simons is amazed by the strange, sometimes convoluted interpretations of his film. “The video gets talked about a lot in relation to national security forces and why they didn’t see terrorists in their midst. My favorite one is a Baptist preacher who was giving a sermon in which he referred to the gorilla and said that’s why the Jews didn’t spot Jesus for what he was! But it is most commonly used for safety training, in power plants, for example, where people will focus on procedures and not notice anything that isn’t part of the procedure.”

After a decade of experiments16 by himself and others, Simons concludes that we see what we expect to see and are blind to the unexpected. And there are absolute hard limits to how much we can take in at any given time.

“For the human brain,” says Simons, “attention is a zero-sum game: If we pay more attention to one place, object, or event, we necessarily pay less attention to others.”

Simons now teaches at the University of Illinois at Urbana-Champaign, where he continues his research into visual cognition. His work is far from merely academic: On September 6, 2006, a graduate of the university, Matt Wilhelm, was riding his bicycle when Jennifer Stark hit him from behind with her car and killed him. The subsequent investigation revealed that Stark had been downloading ring tones at the time she hit Matt, a tragic reminder of the realities behind Simons’ experiments.

“There was a huge debate, when radios were introduced into cars,” says Simons. “I’m still not sure I buy the argument but I suppose we can tune out a radio. But driving a car while talking on a cell phone or texting is different. They can seem really effortless but they both use your mind’s limited attention resources. You can’t do it. Your brain can’t do it.”

It isn’t about the phone—which is why hands-free sets won’t help you. It’s about the mental resources that are available to you at any one time. In what sounds like another piece of fun research, Frank Drews, an assistant professor of psychology at the University of Utah, divided forty students into three teams.17 The first team operated a driving simulator; the second team drove on the simulator while talking on cell phones. The third team got to operate the simulator after drinking enough orange juice and vodka to take their blood alcohol level to 0.08 percent, the legal limit for driving in the United States and the United Kingdom.

Comparing the three teams yielded surprises. The team using cell phones had more rear-end collisions and their braking time was slower. The intoxicated participants exhibited a more aggressive driving style, following the vehicle in front more closely and braking with greater force, but they had no accidents. You should not take from this that it is better to drive drunk than while using a cell phone! What Drews and his colleagues concluded was that the drivers using cell phones were dangerous because they simply did not have enough attention to devote to their driving.

Shortly after running the experiment, Drews himself experienced the phenomenon firsthand when a driver next to him on the highway drifted into his lane, forcing the psychologist onto the shoulder. Both drivers took the next exit and Drews got out of his car, very upset. “I knocked on his window. He was still on his cell phone!” Drews recalled. But when he finally stopped talking, the chatty driver had “no clue” about the disruption he’d caused. He hadn’t seen a thing.

A study from the Harvard Center for Risk Analysis estimates that cell-phone use while driving contributes to 6 percent of crashes: 636,000 crashes, 330,000 injuries, 12,000 serious injuries, and 2,600 deaths each year in the United States, at an annual cost of $43 billion. When the National Safety Council asked its member businesses whether they had safety policies prohibiting on-road cell-phone use, 45 percent said they did—but of those, 85 percent said the policies made no difference.

Meanwhile, technology whiz kids are busy inventing yet more forms of distraction for us as they bring the riches of personal computing to the car. Soon album covers, e-mail, and Wikipedia entries will all be available to us as we drive.18

“Cars are going to become probably the most immersive consumer electronics device we have,” according to Michael Rayfield, a general manager at Nvidia, the chip company that works with automotive manufacturers on visual displays.19

For Audi cars, Nvidia’s Vibrante multimedia software will provide passengers with “dual zone” entertainment enabling them “to simultaneously enjoy two different movies on two different screens in the backseat monitors.”20 This traveling picture palace will give drivers access to photos of their destination, reviews of nearby restaurants, and background information, along with navigational advice. USB ports and wi-fi capability will allow them to plug in keyboards, too, although if they want further online access, a notice pops up reading “Please only use the online services when traffic conditions allow you to do so safely.” We already know what a great safeguard that is. We know, but we don’t want to know. We just do not have enough mental capacity to do all the things that we think we can do. As attentional load increases, attentional capacity gradually diminishes.21 One frustrated psychologist has argued that the case for multitasking is on a par with “urban legend”;22 that is, it’s a story we like the sound of but that is really nonsense.

It’s particularly important to remember that the intellectual capacity we appear to lose first may be what we need most: the ability to discriminate, to make good judgments. Remember Warren Briggs sitting in front of his computer screens, so tired that he suffered from tunnel vision. He worried about how to get rid of the pressure spikes because he was too tired to contemplate the harder issue of what was causing these spikes to appear in the first place.

The bottleneck23 that characterizes our ability to receive information explains why we cannot intelligently absorb all the information presented to us on TV screens like those displayed by CNN, Fox, or CNBC. The scrolling text, sidebars, and stock prices don’t make us smarter or better informed; they make us stupid. While we are watching such a busy array, we can’t efficiently think, discriminate, or make critical judgments.

When we are tired or preoccupied—conditions psychologists call “resource-depleted”—we start to economize, to conserve those resources. Higher-order thinking is more expensive. So too are doubt, skepticism, and argument. “Resource depletion specifically disables cognitive elaboration,” wrote Harvard psychologist Daniel Gilbert.24 “Not only does doubt seem to be the last to emerge, but it also seems to be the first to disappear.” Because it takes less brain power to believe than to doubt, we are, when tired or distracted, gullible.25 Because we are all biased, and biases are quick and effortless, exhaustion makes us favor the information we know and are comfortable with. We’re too tired to do the heavier lifting of examining new or contradictory information, so we fall back on our biases, the opinions and the people we already trust.

This higher-order functioning that we lose when overloaded or exhausted is important—and not just in oil refineries. In the late 1990s, I worked for a company, CMGI, that in the heat of the Internet boom bought large numbers of companies. Regularly on a Monday morning, I would walk into the boardroom where I would find bleary-eyed executives hungover with exhaustion, having pulled an all-nighter or two in order to complete the latest acquisition. They were fried but triumphant; they were heroes: the deal was done! But I lost count of the number of transactions that were, even at the time, strategically mindless and ultimately wasteful. Why had we bought these businesses? Too much tunnel vision, too little sleep: No one thought—quite literally—to ask, why are we doing this in the first place?

Although CMGI was a remarkable environment to work in, it wasn’t—and isn’t—unusual in pouring vast resources into deals that bring armies of lawyers and bankers into paneled boardrooms to work through the night completing the latest acquisition. The leading law firms, investment banks, and accounting firms require long hours and weekend work from any employee who wishes to be considered for partner, or even to be taken seriously. Many leading investment bankers tell me that, however much they hate it, these are the rules of client service. When I point out that they could solve their problem by having more employees, each doing fewer hours, they look abashed by the simplicity of the arithmetic. The truth is, many of the participants love it: the thrill of the deadline, the mountains of documents, the legal, financial, and regulatory complexity of the task.26

Yet most of these deals achieve worse than nothing. A study by KPMG found that 83 percent of the mergers and acquisitions they studied didn’t boost shareholder value; 53 percent actually reduced it. Another study by management consultants at A. T. Kearney found that total return to shareholders on 115 global mergers was a negative 58 percent!27 And while business-school professors dissect the corpse of each dead deal, it might be wiser to remember the fried executives who signed off on the strategy. Tunnel vision blinds us to the wider consequences of our decisions. It isn’t just control-room operators who are dangerous.

Many psychologists have studied these phenomena, in many cases (like Kahneman and Tversky) to understand why we make mistakes. Others work more tactically, to devise guidelines for safer instrument panels, for example. But one of the earliest was the social psychologist Stanley Milgram, who is more famous for his experiments in obedience (see Chapter Six). Although Milgram himself was a native of New York City, he was fascinated by the way urbanites behaved. In “The Experience of Living in Cities” he reflected that the very large number of people who live in the city, together with their heterogeneity, meant that “city life, as we experience it, constitutes a continuous set of encounters with overload, and of resultant adaptations.” He wondered what the effects of living with so many people, so many impressions, so much information might be. “Overload characteristically deforms daily life on several levels, impinging on role performance, the evolution of social norms, cognitive functioning, and the use of facilities.”28

A natural, as well as a professional, observer of human life, Milgram noted that the country shopkeeper might engage his customers in conversations whereas the cashier in a city supermarket barely had time to complete one checkout before starting on the next. “The urbanite disregards the drunk sick on the street as he purposefully navigates through the crowd,” Milgram argued, not because he was less friendly or warmhearted, but because city dwellers had learned to manage the demands made on them by a crowded city. They adapted by reducing the amount of information they took in. If a city was a system that yielded more “inputs” than anyone could handle, inhabitants responded by taking in less. It was Milgram’s unique insight to see that the city is a system, just as the brain is a system. And Milgram’s fellow New Yorkers were managing themselves in just the same way that our brain manages information: letting some impressions in and leaving many behind.

Milgram’s argument was provocative because he maintained that what got lost wasn’t random but precise: When people felt overloaded, he said, they restricted their social and moral involvement. “Overload is made more manageable by limiting the ‘span of sympathy.’ ” Milgram wasn’t out to enrage city-lovers the world over. He was concerned that the load-balancing trade-offs weren’t just operational; they were moral. If it is hard to doubt when you’re tired, it may be even harder to care.

What is true for cities may equally be true for any large organization in which individuals operate under tremendous “resource depletion.” Jack Kaminsky29 worked at Countrywide Mortgage Services for four and a half years.

“I loved my job. I’d usually do a fifteen-, sixteen-hour day—I loved it! When business was booming, I’d be getting up at five A.M., logging in to check e-mails and plan my day. It was like a high, the whole day.”

Didn’t you, I asked, get tired?

“Sure! Our office was a sweatshop. But the longer you worked, the more you made. We built a very successful team—out of fifty-four offices, we were near the top every year I was there. If you couldn’t handle it, we’d find someone else. Countrywide was a well-respected lender, everyone wanted to be part of it. It was easy to replace people.”

Jack’s job was to package up mortgages to sell on to banks.

“Everybody turned a blind eye to fraud. They’re still uncovering fraud from four years ago! The frauds were so obvious. You’d have seven files from seven different brokers. In each one, the same buyer was going to be the owner occupier; I’d have the same owner buying seven properties from seven different banks! And I’d tell the banks but they’d just turn a blind eye. We knew it was fraud but what can you do? We packaged them up and sold them on.”

When the mortgage market collapsed, Jack was laid off—on a conference call.

Propagandists and brainwashers know what managers and corporate leaders choose to forget: the human mind, overloaded and starved of sleep, becomes morally blind. This would appear to be part of the explanation of what took place at Abu Ghraib.

“Not only did this soldier work half around the clock [from four A.M. to four P.M.], he did so seven days a week with not a single day off for a full forty days! I can’t imagine any job where such a work schedule would not be seen as inhumane.”30

The psychologist Philip Zimbardo served as an expert witness for one of the Abu Ghraib reservists, Chip Frederick. Although his sympathies lay more with reservist Joe Darby, who had handed in the shocking photographs taken at the prison, Zimbardo understood better than anyone the impact that the situational influences of a prison environment could have on young men and women. In 1971, he had designed and run the Stanford Prison Experiment, in which twenty-four mentally and physically healthy young men endured a prison simulation for six days. Zimbardo’s detailed account of the experiment is hair-raising, but it taught him volumes about the ways in which a situation dramatically transforms behavior.31 A landmark in the study of power and environment, the experiment eerily presaged many of the abuses committed at Abu Ghraib by Chip Frederick and his colleagues.

“There is absolutely nothing in his record that I was able to uncover that would predict that Chip Frederick would engage in any form of abusive, sadistic behavior,” Zimbardo wrote. “On the contrary, there is much in his record to suggest that had he not been forced to work and live in such an abnormal situation, he might have been the military’s all-American poster soldier on its recruitment ads. He could have been the best of apples in their good barrel.”32

Many forces—fear; corruption; inadequate resources; absence of supervision, written procedures, formal policies, or guidelines; and an absolute lack of training—conspired to erode those all-American qualities. But, as Zimbardo observes, that was just the beginning. Frederick not only worked a twelve-hour shift, seven days a week; after forty days, he got just one day off, followed by two more solid weeks on. Even when his shifts were over, he wasn’t able to leave the prison but went to sleep in a dirty and noisy six-by-nine prison cell.

That Frederick was surrounded by colleagues just as ill-trained and just as exhausted meant no one was awake enough to have any moral sensibility left. Of course other factors contributed to the abuse of prisoners, but what is so striking about Zimbardo’s analysis is that, at the simplest level, frightened, untrained guards were left with so little cognitive capacity.

Working hours seem such a small issue—but, by the same token, such a small thing to get right. But there’s a great deal of bravura attached to overwork. For men especially, complaining of tiredness can look and sound weak. And there’s no biofeedback: If you don’t eat, you starve and everyone can see there’s a problem. But when we don’t sleep, or when we work too hard, often even we can’t see there’s a problem. Sure, we don’t feel great; but what we can’t see is what we are losing: the capacity to reason, to judge, to make good and humane decisions, to see consequences and complexity.

The allure of exhaustion is baffling. I’ve lost count of the number of corporations I’ve worked with that positively boast about the number of all-nighters they pull. Investment bankers may not be the absolute worst, but they’re up there, full of pride as they describe their requirement to work three weekends out of five. Client service, they say, just demands it. But I wonder how thrilled their clients would be if they knew how brain-dead the service they receive often is.

“You can’t change the limits of your mind,” says Dan Simons. But we keep trying. Why? Is it the last vestige of a physical model of heroism for which we lack any intellectual corollary? If so, then we’d better find that new model fast, before more reputations and lives are ruined. At the very least, as western democracies struggle to define some kind of regulatory framework that could protect the economy from future disaster, we could do worse than demand that, after forty hours of work, everybody just go home.