10   Developing Your Strategic Intent

As a leader in the twenty-first century, you will certainly have to lead for creativity to achieve organizational goals. You may also have to lead for creativity to help confront global challenges like the impact of AI on human workforces, climate change, gene editing, and the need to feed ten billion people by 2050. Digital technologies will play outsized roles in all of these.

Leading in either situation will require defining a coherent strategic intent. Recall that strategic intent aligns the direction and pace of motion across distributed teams. It allows distributed leaders to address local needs without centralized oversight while also ensuring that the local initiatives happen within the framework of the overall effort.

This is not happening with digital technologies. Within companies, R & D departments and business leaders are on completely different pages. For global challenges, researchers in academic and multinational institutions, CEOs trying to meet market expectations, entrepreneurs pursuing big dreams, politicians with many sincere and insincere goals, NGOs advocating for the voiceless, and people at large must be corralled. These groups, each with its own interests, won’t move in lockstep. However, it would be helpful if they were not pulling in completely different directions.

An Extended Example: How Will AI Affect Humans?

Exploring the AI-workforce challenge can give a good understanding of what needs to be aligned. This discussion won’t debate extreme outcomes—countless new jobs or mass unemployment. Instead, it offers the perspectives of many technologists and business executives, assembled from presentations at formal public events, articles, and personal conversations. Both groups have stakes in harnessing AI but also have very different points of view.

The Technologists’ Perspective

Today’s AI is very good at answering questions, but not asking them. One research trajectory focuses on understanding how people unconsciously do everyday tasks. Even so, generalized humanlike intelligence, popularized by science fiction, is still many years away. Technology development is currently pursuing multiple domain-specific capabilities (e.g., subfields of health care).

Technologists are making progress faster than they expected. For example, the ancient Chinese game of Go is much more difficult for computers to master than chess. In 1997, technologists predicted that it would take a hundred years to develop AI that could beat Go grandmasters. In 2014, that prediction fell to ten years. It actually happened in 2015.

So technologists feel constrained. They believe established companies aren’t deploying AI systems quickly and broadly enough. Slow deployment is problematic because, inevitably, AI systems have access to more data after they have been deployed than during development. More deployments offer more opportunities to learn and improve.

Technologists expect young people and small companies to courageously ask brand-new questions and deploy AI faster. This will offer another benefit: Big technology companies won’t be able to hoard AI-based power. Tiny start-ups using the right application programming interfaces to access the power of available AI systems (like IBM’s Watson) will unleash unprecedented innovation.

Regulations will slow down efforts to improve the world. AI won’t endanger human safety, and concerns about job losses are misplaced. It will augment, not replace, human capability and create “new-collar” (not white- or pink- or blue-collar) jobs that don’t exist today. People will work the same number of hours, but on different things. They could also have much more free time, allowing them to lead more meaningful lives.

The Senior Business Executives’ Perspective

Senior business executives believe AI’s key value to companies lies in its ability to increase the performance of existing systems in marketing and operations. CEOs are being cautious and making deployment decisions incrementally; systems have to prove that they can deliver real value. Consequently, very few AI implementations have been at scale, and commonplace usage is far off.

AI will eliminate many jobs, but individuals and society shouldn’t fear it. That’s because, like prior major technologies, it will create even more jobs. A McKinsey Global Institute study1 has forecast what could happen.

On the one hand, the study estimates that half of all current jobs can be automated and up to one-third of workers will have to change occupations. Its most likely scenario projects the displacement of 400 million workers between 2016 and 2030. People in advanced economies will bear the brunt of change.

On the other hand, AI and related technologies will create at least 390–590 million new jobs. In developed countries, these will come from increases in incomes and consumption and from greater spending on health care for aging populations. If countries increase investments in infrastructure and energy (and in a few other areas), the number of new jobs could be in the range of 555–890 million. Many jobs will be in fields that don’t exist today.

The economic transition will rival those that occurred as economies shifted out of agriculture and manufacturing. New jobs will require more education and midcareer retraining. Both businesses and society have to rethink how people should be educated and trained.

The Challenge of Strategic Intent

Both these perspectives implicitly assume that two realities of prior epochs apply in the digital epoch. In all past epochs, long-arc-of-impact technologies have destroyed prior jobs but created many, many more. Moreover, as the locus of the technologies was inside factories, the pace of business and societal change was gradual: Western economies, after all, embraced the quality movement epoch more than three decades after Japan did. In doing so, these perspectives gloss over four realities of the digital epoch.

First, while past technologies de-skilled physical work, digital technologies also de-skill cerebral work (Principles 1 and 4). Physical work was easy to transfer from farms to factories and across industries. The necessary retraining could be provided in days and weeks. Despite that, even advanced countries like the United States have struggled with people who entered the workforce without adequate primary education.2, 3 In contrast, training for most cerebral work takes many years. Retraining for new sectors of the economy can’t be done in days and weeks. The McKinsey report assumes retraining will be available in time to help the 75–350 million people, mostly in advanced economies, who will need assistance by 2030. Is anyone seriously preparing to provide this?

Moreover, half of all jobs being automatable is a lot. Its distribution won’t be uniform. In many places, it will produce unprecedented social turmoil. Pick any of the world’s most crowded cities outside developed countries—Jakarta or Kolkata or Cairo. Every year, countless people become professional drivers, embarking on a tenuous battle to reach the bottom-most rung of the local middle class. When driverless vehicles become commonly available, that rung will go out of reach for millions. Their lack of basic education will make retraining very difficult. Do CEOs and technologists have a responsibility to prepare society for this reality?

Second, past technologies upskilled physical work, allowing many people to do with machines what few could without them. Digital technologies upskill cerebral work, allowing fewer people to do what many could (Principles 2 and 4). Already happening in several industries,4 this will accelerate as higher proportions of jobs become automatable. That will impose ever-greater pressure on midcareer job training.

Third, the digital transition is happening faster than any prior one. Jaikumar’s research established that leading companies adapted to new epochs in about fifteen years and broad-scale transitions across the economy took up to fifty years.5 In contrast, McKinsey’s research says the economies around the world must absorb unprecedented change in fifteen.6 The problem—as the technologists’ and senior business executives’ perspectives suggest—is that, so far, the speed of technological change is swamping the incremental decision-making of CEOs. As such, the actual time to effect organizational and societal changes is being sharply compressed. The CEOs’ incrementalism could be laudable if it were guided by thoughtful creativity; the focus on increasing the productivity of existing operations isn’t comforting.

Fourth and finally, we haven’t even begun to scratch the surface of the broader ethical issues digital technologies raise. For example, consider the unending demand for data. Dave Eggers’s best-selling near-future science fiction book, The Circle,7 describes the morphing of a good motive—radical transparency to expose political corruption—into a requirement for all people to constantly transmit their actions. Could the need for data become a requirement to provide data? It’s not that much of a leap, really. It would require a line, a paragraph at most, in a user agreement for a bank account or car insurance or access to the Internet. China is already piloting social credit, a handful of insurance companies already monitor driving practices in real time,8 and at least one company’s “smart” televisions already track households’ viewing habits without their informed consent.9

Because of these four realities, in contrast with prior epochs, this time we do need to ask, What is the strategic intent? How do we “roughly align” decisions being made independently by people who are oblivious to others’ perspectives? How do we make sure that the missing perspectives (e.g., who is responsible for the retraining?) are taken into account? Far from seeking solutions, we aren’t even asking the questions.

Business executives who wish to be called “leaders” can’t duck this question. They can’t simply respond to the future as it evolves—though that is a key skill—but must try to shape it. If not, they will be buffeted by uncontrollable events in a digital, VUCA world in which China’s demand for copper affects the profitability of unrelated industries.

The first part of a framework that can help leaders defines Five Assumptions people unquestioningly make about digital technologies. Understanding these assumptions and making informed choices can open up “I wonder why…?” “I wonder if…?” and “Could I…?” paths for you to pursue. The second part of the framework addresses seven types of errors that confound major technological initiatives. Understanding these can help you set up guardrails for yours.

The Five Assumptions Made about Digital Technologies

The first of the Five Assumptions is the Assumption of Benevolence. Positive perspectives of digital technologies assume they will augment human capabilities; negative ones assume they will supplant or harm humans. The history of technology suggests both will happen unevenly.

Experts won’t help you make good judgments. Knowledgeable people who are best equipped to be skeptical optimists about technologies default toward unvarnished, even gushing, praise. For its Fall 2016 issue, Sloan Management Review asked fifteen academics and expert practitioners, “Within the next five years, how will technology change the practice of management in ways we have not yet witnessed?” Ten articles exuded boundless optimism. An eleventh touched on a possible problem before turning optimistic. The twelfth, by an academic economist, neutrally discussed technical changes in corporate structure. The last two, also by academics, gently urged caution; one cowritten article raised concern about the ethics embedded in algorithms10 and the other posited that “digital transformation needs a heart.”11 Twelve-to-two in favor of benevolence. No contest.

Average businesspeople are no different. In contrast to the conclusions of the aforementioned McKinsey report, they expect digital technologies to spare them the injuries they expect others to endure. In late 2017, online job search firm ZipRecruiter conducted a survey of one thousand job seekers.12 Seventy-seven percent had heard the term “job automation,” but only 30 percent actually understood it. Sixty percent considered the possibility of robots replacing humans at work overhyped. Fifty-nine percent of those currently employed didn’t expect their jobs to be automated during their lives.

The Global Survey respondents answered two matched questions about increased thought content of work in general and in their jobs in particular (see figure 4.5). They largely agreed that thought content was increasing but that their own work was untouched. Clearly, they, too, optimistically believed they didn’t have to change.

People unconsciously favor new digital technologies. They are more optimistic about unfamiliar than familiar digital technologies.13 They blame technology failures on user errors, not the technologies.14 Entrepreneurs continue investing in developing unsuccessful technologies hoping beyond reason for turnarounds.15 In 2016, a publicly listed Finnish company, Tieto, put an AI system on its management team as a voting member.16

Awareness of—and mindfulness about—this bias can help you address it. Being skeptical about the proclaimed benefits of technology (without becoming a Luddite) is good. While early skepticism about new ideas kills creativity, judicious skepticism while evaluating those ideas produces better outcomes.17

The second is the Assumption of Infallibility. Algorithms already make critical decisions that affect people’s lives. They require massive amounts of data for training. Here’s a highly simplified explanation of the training process: An algorithm, fed data as input, processes the data and produces outputs. The outputs are compared to known correct results and the errors are fed back to the algorithm, which then adjusts its processing. After many such iterations, the algorithm makes sense of arbitrary inputs.
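To make that loop concrete, here is a minimal sketch in Python. It is illustrative only: the data, the single-number “model,” and the update rule are hypothetical stand-ins, not how any production AI system is actually built.

```python
# A minimal sketch of the training loop described above (hypothetical data and model).
import random

# Hypothetical labeled training data: (input, correct answer) pairs.
# The "right answer" here is simply: 1 when the input exceeds 0.5, else 0.
training_data = [(x, 1.0 if x > 0.5 else 0.0) for x in (random.random() for _ in range(200))]

weight, bias = 0.0, 0.0   # the algorithm's adjustable internal state
learning_rate = 0.1

for _ in range(1000):                                         # many iterations over the data
    for x, correct in training_data:
        output = 1.0 if weight * x + bias > 0 else 0.0        # process input, produce output
        error = correct - output                              # compare with the known result
        weight += learning_rate * error * x                   # feed the error back ...
        bias += learning_rate * error                         # ... and adjust the processing

# After training, the algorithm makes sense of inputs it never saw during training.
print(1.0 if weight * 0.9 + bias > 0 else 0.0)   # expected: 1.0
print(1.0 if weight * 0.1 + bias > 0 else 0.0)   # expected: 0.0
```

The point is the feedback cycle: produce an output, compare it against a known answer, adjust, and repeat.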

If bad or incomplete data are used during training, the algorithms learn the wrong lessons. This creates a chicken-and-egg problem: More data is available in actual use than during training, but deployment without full testing of the possible range of data that could be input during actual use can be dangerous. For example, if only Chinese features are used to teach the concept of a human face, the algorithm may not recognize people from beyond the Pacific Rim. If only Caucasian features are used, the algorithm may not recognize Asian or African ones.

Developers normally resolve this conundrum by relying on available data (e.g., Caucasian or Chinese faces, depending on where the development is taking place) instead of appropriate data (e.g., faces of all people from all parts of the world). Their decision usually isn’t deliberately nefarious; people in other fields do the same too. Consider this very book: While arguing strongly for a global perspective, I’ve largely cited Western sources and/or articles published in English, relying on the interviews of executives with diverse backgrounds and the Global Survey to ameliorate this shortcoming.

With advanced digital technologies, the stakes skyrocket. With inaccurate data, development teams produce biased AI systems even when no individual developer is biased.18 Explicit and implicit biases compound this problem. Current AI systems are poorly trained,19 pose ethical conundrums,20 and don’t represent the population at large.21 This problem won’t abate soon. Once biased data get used, they corrupt development efforts everywhere. MIT Media Lab scientist Joy Buolamwini has talked about finding code in Asia that embodies racial biases normally found in America.22
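One simple guard against shipping such systems unexamined is to check performance separately for each group the system will face, rather than only in aggregate. The sketch below uses entirely made-up numbers and a hypothetical two-group split; it shows how a system can look accurate overall while failing badly on a group that was scarce in its training data.

```python
from collections import defaultdict

# Hypothetical evaluation records: (group label, whether the system got this case right).
# Group A dominates the test data, just as it dominated the (hypothetical) training data.
results = (
    [("group_a", True)] * 95 + [("group_a", False)] * 5 +   # 95% right on group A
    [("group_b", True)] * 6 + [("group_b", False)] * 4      # only 60% right on group B
)

totals, correct = defaultdict(int), defaultdict(int)
for group, was_right in results:
    totals[group] += 1
    correct[group] += was_right          # True counts as 1, False as 0

print(f"overall accuracy: {sum(correct.values()) / len(results):.0%}")  # ~92%: looks fine
for group in totals:
    print(f"{group} accuracy: {correct[group] / totals[group]:.0%}")     # group_b: 60%
```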

Moreover, digital algorithms often produce “intellectually” valid decisions that cannot be explained to nonexperts.23 How do we regulate Facebook’s algorithms? Or those of driverless cars?24 Should we rely on algorithms for hiring? Police work? Stock trading? If yes, how would you explain to ordinary people that the world’s economy depends on inscrutable algorithms? What would you say to justify AI systems that deny women jobs or recommend harsher sentences for people of particular races?25

Two researchers in ethics, Bidhan Parmar and Edward Freeman, described this challenge well:

[T]he software code used to make judgments about us based on our preferences for shoes or how we get to work is written by human beings, who are making choices about what that data means and how it should shape our behavior. That code is not value neutral—it contains many judgments about who we are, who we should become, and how we should live.

Understanding how ethics affect the algorithms and how these algorithms affect our ethics is one of the biggest challenges of our times.26

Even when the algorithms aren’t inherently inscrutable, there’s another challenge. Since they aren’t infallible, they should be subject to oversight. How could we balance oversight and protecting proprietary intellectual property? So far, corporate leaders have defied almost all oversight efforts. As the use of algorithms spreads and more grievous errors become public, the demand for regulations will rise. Where would you draw the line?

You need to take responsibility for the advanced technologies your company develops. Before you sign off on tests and deployments, you must assure yourself that they are safe, or at least that their impacts are easily reversible. A good—definitely not perfect!—test would be: Would I authorize its application on someone I love?

The third is the Assumption of Controllability. Even very knowledgeable people assume they can limit, or counterbalance, or control the evolution, or specific uses, of digital technologies. Reid Hoffman, executive chairman and cofounder of LinkedIn, wrote:

[S]ome very smart people are worried about [AI’s] potential dangers, whether they lie in creating economic displacement or in actual conflict. … I am backing the OpenAI project, to maximize the chances of developing “friendly” AI that will help, rather than harm, humanity. AI is already here to stay. Leveraging specialized AI to extend human intelligence in areas like management is one way we can continue to progress.27

In February 2019, OpenAI refused to release the full code for a program it had developed that could respond to prompts and write page-long, realistic essays, including creative writing. It explained its decision:

Large, general language models could have significant societal impacts, and also have many near-term applications. We can also imagine the application of these models for malicious purposes, including the following (or other applications we can’t yet anticipate). … Today, malicious actors—some of which are political in nature—have already begun to target the shared online commons. … Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights.28

Oh, the irony! An organization created to do good was concerned that one of its creations could be used for evil!

Christopher Manning, a Stanford-based AI researcher, disparaged the decision (“I roll my eyes at that, frankly”), noting, “Yes, it could be used to produce fake Yelp reviews, but it’s not that expensive to pay people in third-world countries to produce fake Yelp reviews.”29 Manning trivialized the damage the program could do to people’s lives and freedoms (just ask Mark Zuckerberg). Nevertheless, his criticism was valid: Someone is probably already creating a better program.

Three months earlier, a Chinese scientist announced he had used CRISPR, the powerful gene-editing tool, in an in vitro fertilization process that resulted in the birth of HIV-resistant twin girls.30 He violated laws in many countries, but not China’s. There, he violated a tacit agreement not to gene-edit humans. While his effort might have been for good, it may yet trigger an arms race for fame and fortune that may not be benign.

But what, then, should we make of the story German pharma company Bayer posted on its website just thirteen days before the in vitro fertilization story? Hash-tagged #CanWeLiveBetter, it read in part:

November 2017 saw the latest milestone in our increasingly intimate and complex relationship with our genes, when 44-year-old Californian Brian Madeux was injected with copies of a corrective gene in an attempt to cure his Hunter’s Disease. The injection was intended to directly edit Madeux’s gene code, removing faulty pieces of the genome and stitch it back together.

If successful, this treatment will be a major step forward for the medical application of gene-editing technology. But are we opening a Pandora’s box? Now that we can, do we need to stop and ask whether or not we should?31

Underlying these three brief stories—OpenAI, the HIV-resistant twins, and the Hunter’s disease cure—is an important question: What sized “box” will contain our enormously powerful creations? Too small a box and we forsake a lot of good. Too large a box and we invite potentially irreversible harm.

How will you make such decisions, which will often require collaboration with people outside your company? You’ll sometimes be guided by law and always by your values. These decisions will be tricky; as chapter 9 noted, the logic of “everyone’s doing it” can justify many sins.

The fourth is the Assumption of Omniscience. Renowned science fiction author Arthur C. Clarke formulated three “laws” of technology. One said, “Any sufficiently advanced technology is indistinguishable from magic.”32 His astute observation left a corollary unsaid: Throughout history, magic has been synonymous with power.

Digital technologies themselves, and their by-product—virtually instantaneous, seemingly infinite data on minutiae—engender faith that reaches levels associated with religion. They give an unwarranted sense of complete control. Just consider data analytics. Top universities,33, 34 branded online education,35 global NGOs,36 and premier consulting companies37, 38 all extol its ability to transform medicine, business, poverty alleviation, sports, entertainment, and countless other areas.

We can’t forget that these are just tools with their own limitations. How well will AI perform in VUCA conditions, which training data may not replicate well? We know that even in static conditions it fails to recognize people of some races.39 Technologies will also fall short of what we expect. While AI can augment human productivity in creative activities, one expert equated this with “unleashing creativity.”40 That claim falls well short of the standard set for AI’s ability to play chess or Go: “Win against a grandmaster.”

Wide-scale use will expose more limitations. The Internet of Things will bring unimaginable benefits but will also heighten the problems we haven’t solved with today’s Internet—hacking, spam, viruses, and the like. Samsung’s struggle to diagnose the cause of spontaneous fires in its Galaxy Note 7, a stand-alone device, revealed another critical challenge. Imagine millions of such flawed devices connected to the Internet of Things. Now add inscrutable algorithms—with no independent oversight—into the mix.

How do you feel about the power of digital technologies? When making decisions, how could you take into account their weaknesses?

The fifth is the Assumption of Authenticity. Authenticity has multiple meanings. It describes the match between a leader’s real self and projected persona (see chapters 7 and 9). It also assesses whether someone or something is legitimate (“Is this authentic Thai cuisine?”) or reflects the expected values (“Was the chef trained in Thai kitchens?”).41 These two latter meanings are relevant as people and robots begin working together.

Search for “people and robots working together” and you’ll mostly find examples of mechanistic, utilitarian factory robots. In reality, robots may increasingly resemble sentient creatures, albeit with domain-specific intelligence. This will pose unimagined challenges for leaders.

Kate Darling argues humans are “hardwired” to anthropomorphize anything that seemingly moves of its own volition.42 After playing with robot dinosaurs for one hour, test subjects refused orders to destroy them. Battle-hardened soldiers couldn’t watch “wounded” robots continuing to do their assigned mine-clearing tasks.43 People changed their behaviors in the presence of even simpler robots: self-mobile TV screens that projected images of noncolocated coworkers.44

In contrast, robots themselves have no emotion and (probably) won’t anytime soon. Bots—software robots—that supervise people can “deactivate” (an official Uber term45) them unhesitatingly for whatever infraction their coding deems inappropriate. One journalist, after interviewing Uber spokespeople about their semiautomated process, wrote that if clear guidelines existed for infractions, they weren’t known by the people affected by them.46 Even so, Uber is rapidly introducing AI in all aspects of its business.47

In a legal filing, Amazon has described how its algorithm functions semiautomatically:48

Amazon’s system tracks the rates of each individual associate’s productivity and automatically generates any warnings or terminations regarding quality or productivity without input from supervisors. [These] are required to be provided to associates within 14 days. If the feedback is not provided for any reason the notice expires. While managers have no control over rates, they can override the notice if a policy was applied incorrectly. If an associate receives two final warnings or a total of six written warnings within a rolling 12-month period, the system automatically generates a termination notice.

Can a person complain against an unfair decision? To whom? Notably, Amazon’s filing didn’t discuss what happened to the “managers” who regularly overrode or ignored the algorithm’s decisions.

Perhaps this lack of robotic emotion lies behind the research finding that people discounted music, paintings, and decisions supposedly created by algorithms.49 They agreed these were comparable to those made by humans (“type authenticity”) but questioned the algorithms’ “moral authenticity” to produce them. When making judgments about ambiguous data, humans are also willing to be convinced by other humans, but not by robots.50 In other words, algorithms can do the work, but not be the work.

The assumption of authenticity, beyond all others, will challenge you in the years ahead. It goes to the heart of what it means to lead. In humanity’s entire past, people have led sentient beings—people and animals. No one has led intelligent machines for whom humans can develop unreciprocated affection (as in Darling’s experiment).

You’ll be wise to be skeptical—and empathetic. Consider a few obvious challenges you’ll face. Should robots have legal rights?51 Which robots should have which rights? If new digital technologies with uncertain or ambiguous capabilities are given human characteristics, people trust them more.52 This knowledge will undoubtedly be used to peddle dubious products and services. Should it? If the mere knowledge of who did the work, person or algorithm, affects people’s opinions, will people consider decisions made by machines to be outcome just? Procedurally just?

The Five Assumptions should inform the strategic intent of leaders. But first, let’s consider another key issue: errors people make in complicated technological projects.

The Seven Critical Errors

Executives talk a good talk about errors but don’t usually walk that talk. In the past, they treated all failure as anathema; today, many promote “failing fast,” “failing forward,” and “celebrating failure” seemingly without limits.

Real life is complex. Failures (and the errors that cause them) can be the source of profound knowledge when consequences aren’t grave, or are controllable or easily reversible.53 In other environments—as in operating theaters or at immigration checkpoints or in courtrooms or with key design decisions (like the Boeing 737 MAX’s angle-of-attack sensor that caused two crashes)—failure shouldn’t be embraced in the name of innovation nor easily forgiven.

A granular understanding of errors can help leaders minimize the chance of catastrophic failure as they create bold human-machine systems. Recent research into this issue54 suggests that people make seven types of errors: Believing something that isn’t true (Type 1), not believing something that is true (Type 2), picking the wrong goals (Type 3), deciding before considering alternatives (Type 4), not acting when you should (Type 5), acting when you shouldn’t (Type 6), and the combined effect of multiple small errors of the prior types (Type 7). Each type, discussed below, can affect digital technology initiatives.

Type 1 errors (believing something that isn’t true) and Type 2 errors (not believing something that is true) are routinely taught in connection with data analysis in business, science, and engineering programs. Their discussion assumes we know the issues at stake (e.g., the possible errors in a training data set).

Reducing Type 1 errors unavoidably raises Type 2 errors, and vice versa. So, as a decision maker, you should err on the side of reducing the likelihood of the more damaging error. When you next have to choose, or sign off on investing in, a better training set for AI, ask, “Due to this investment, will the system give more credence to what is false or be less likely to learn what is true?” Follow up with “What are the key costs of its learning false facts? Not learning the truth?”
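A small illustration of this tradeoff, with hypothetical scores and labels: if a system “believes” whatever scores at or above a threshold, raising that threshold produces fewer Type 1 errors only at the cost of more Type 2 errors, and lowering it does the reverse.

```python
# Hypothetical cases: (system's confidence score, whether the claim is actually true).
cases = [(0.20, False), (0.30, False), (0.45, True), (0.50, False),
         (0.60, True), (0.70, False), (0.80, True), (0.90, True)]

def error_counts(threshold):
    """Believe anything scoring at or above the threshold."""
    type1 = sum(1 for score, true in cases if score >= threshold and not true)  # believed, but false
    type2 = sum(1 for score, true in cases if score < threshold and true)       # true, but not believed
    return type1, type2

for threshold in (0.40, 0.55, 0.75):
    type1, type2 = error_counts(threshold)
    print(f"threshold {threshold:.2f}: Type 1 errors = {type1}, Type 2 errors = {type2}")
# Raising the threshold trims Type 1 errors only by adding Type 2 errors, and vice versa.
```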

Type 3 errors (picking the wrong goals) occur long before then. In 2016, Microsoft withdrew its chatbot Tay for racist and other inflammatory output two days after giving it a Twitter handle.55 Microsoft designed Tay to mimic an American teenage girl and wanted it to learn from interacting with real people. It chose Twitter? A medium well-known for trolling? Which, until recently, refused to censor any speech? Even a small fraction of Twitter users could target Tay and corrupt it. That is exactly what happened. Adult supervision limits inappropriate behavior in children, even teens. Who was supposed to chaperone Tay on Twitter, and how?

In the business world, Type 4 errors (deciding before considering alternatives) are well-known in theory, less so in practice. Good brainstorming guards against this problem by forbidding early criticisms of ideas. A British Design Council56 document nicely illustrates (see figure 10.1) what should happen.

Figure 10.1

The double diamond phases.

Source: Design Council, “Design Methods for Developing Services.”

Divergent thinking (opening up options—illustrated as the flaring out of the squares near Discover and Develop) and convergent thinking (closing options—illustrated as the converging of the lines of the squares near Define and Deliver) should occur at two places in any project. Convergent thinking is important, but divergent thinking is essential in VUCA conditions and for creativity. The former head of innovation of an Indian digital services company, an expert in Design Thinking, noted:

Leaders who rush to conclude exclude the new possibilities. Those that pause, suspend judgement and absorb diverse perspectives before zeroing in on the most feasible and effective ideas succeed. Solutions are more likely when they are built on the collective understanding of the emerging world, not based on a few individuals’ extrapolated understanding of past beliefs and assumed futures.

Type 4 errors occur when divergent and convergent thinking don’t happen appropriately. Microsoft’s problem with Tay probably had its roots in an earlier successful launch of its Xiaoice bot in China. The project leaders simply replicated that effort, instead of returning to the drawing board.57 Someone should have asked, “China’s censors police its internet. Twitter isn’t policed. What could go wrong?” That should have led to divergent thinking.

The best leaders instinctively avoid Type 5 errors—not acting when you should—and Type 6 errors—acting when you shouldn’t. Today, executives working on technology initiatives are more likely to commit Type 6 errors than Type 5 because competition has sped up and because they are blithely applying ideas and processes developed in simpler times.

A generation ago, when American business was under assault by Japanese businesses, In Search of Excellence taught American executives that the best among them had “a bias towards action.”58 In the 1990s, as the dot-com hype heated up, a method for field testing stand-alone software (like the early versions of Excel and Word) for bugs that would do no lasting harm became a product launch strategy with the sexy name of “minimum viable product.” During 2000–2001, when software implementation initiatives—let alone development initiatives—routinely ran for multiple years before delivering meaningful results, a group of highly acclaimed software designers rightfully formulated the “Manifesto for Agile Software Development.”59, 60

The recent, avoidable Boeing 737 MAX disasters that cost hundreds of lives suggest how relying on these ideas by default for all projects could go wrong. A Seattle Times investigation revealed that, with its development effort nine months behind that of the competing Airbus A320neo, Boeing’s own internal safety analysis of its MCAS control system (which was supposed to prevent the 737 MAX’s nose from pointing too high and stalling the plane)

[u]nderstated the power of the new flight control system. Failed to account for how the system could reset itself each time a pilot responded. Assessed a failure of the system as one level below “catastrophic.” But even that “hazardous” danger level should have precluded activation of the system based on input from a single sensor—and yet that’s how it was designed.61

The article described numerous instances of short-circuiting of normal processes at Boeing and at the US Federal Aviation Administration in order to speed up the launch,62 but that’s only part of the story. The Wall Street Journal opined, “At the root of the miscalculations, though, were Boeing’s overly optimistic assumptions about pilot behavior.” It found that “Boeing assumed that pilots trained on existing safety procedures should be able to sift through the jumble of contradictory warnings and take the proper action 100% of the time within four seconds. That is about the amount of time that it took you to read this sentence.”63

The Journal also wrote, “The company reasoned that pilots had trained for years to deal with a problem known as a runaway stabilizer [and the] correct response to an MCAS misfire was identical. Pilots didn’t need to know why it was happening.” Boeing wasn’t alone in making this flawed assumption; the FAA had its own version of it. Currently, “FAA rules typically assume ‘the human will intervene reliably every time.’” After the crashes, the FAA is rethinking its “reliance on average US pilot reaction times as a design benchmark for planes that are sold in parts of the world with different experience levels and training standards.”64

The MCAS was initially designed to move the horizontal tail only 0.6° out of a physical maximum of almost 5°. However, after test pilots worked with the advanced prototypes, this was increased to 2.5°. While “it’s not uncommon to tweak the control software,”65 this large increase was neither fed back to the designers nor documented in the safety analysis submitted to the FAA (or in information provided to any airline).

In part, the flawed decisions were made because: “MCAS wasn’t seen as an important part of the flight-control system. [A]round 2013, the plane maker described the system as simply a few lines of software code.”66

In part, the flawed decisions were also made because of other decisions taken far from the R&D labs:

The assumptions [that pilots could react almost instantaneously] dovetailed with a vital company goal: to make the plane as inexpensive as possible for airlines to adopt. At one point around 2013, Boeing officials fretted the FAA would require simulator training, the person involved with the plane’s development said. But the officials, including the chief MAX engineer, opted not to work with simulator makers to simultaneously develop a MAX version because they were confident the plane wouldn’t differ much from earlier 737s. “It was a high-stakes gamble,” this person said. The company had promised its biggest customer for the MAX it would pay it $1 million per plane ordered if pilots needed to do additional simulator training.67

In the original design, there was an alert feature that could have warned the pilots that the MCAS was malfunctioning.

Trouble was, that alert feature wasn’t activated on MAX jets operated by Ethiopian and many other airlines. A contractor had made mistakes in software meant to activate them, but Boeing had told only certain airlines. Boeing, which maintains the alerts aren’t critical safety items, instead billed them as part of an optional package.68

This is far from the complete story of what happened, and indeed it may change in the months and years to come. It is also clearly an extreme case of a crisis. Even so, it offers a critically important lesson.

Executives make huge assumptions—about human behavior, what customers will do, how an engineering system will work, what the operating environment will be, and many more—when their companies create products and services with embedded digital technologies. They don’t necessarily ask themselves whether their existing corporate processes and systems can respond adequately to the issues that could arise in a new epoch. In particular, there’s a real danger in the fact that, though most of today’s digital projects are routinely delivered in days or weeks or months, executives continue to display their bias for action by pursuing agility with demonic intensity while launching really complicated minimum viable products.

Calling out this serious problem is not an endorsement of “paralysis by analysis” or the glacially slow “waterfall method” (linear progression through requirements, design, implementation, testing, and maintenance) of software development. Nor does it suggest that rapid prototyping is detrimental—it is essential and indeed, its value is often underrated. It does imply that speed shouldn’t be at the expense of thoughtfulness; even advanced prototypes shouldn’t be released without careful consideration, if at all; and all necessary (not minimally required) testing should be completed.

Samsung’s Galaxy Note 7, Boeing’s 737 MAX, esoteric financial products, and many other similar major errors should convince us that, in a highly connected, digital, VUCA environment, the cost of acting too soon (Type 6 errors) can be much higher than the cost of failure to act (Type 5 errors). As I’ve written elsewhere, “If the speed and cost for fixing errors and miscalculations are acceptable, by all means proceed with agility, aim to be first to market, or launch minimally viable products. Otherwise, steel your backbone and demand thoughtfulness.”69

Finally, Type 7 errors result from a cascading of multiple small, individually inconsequential errors of the prior types. These can combine to produce crisis-level outcomes. Don’t downplay small errors! Instead, ask, “Could these cascade in VUCA conditions? Under what circumstances and with what impact?” Moreover, set standards for risk before you begin. Tighten them if experience suggests it, but don’t weaken them. Over time, ignored risk standards inure decision makers to greatly enhanced risk.70 Prescribe, in advance, the steps that must be taken if these standards are ever breached.

Figure 10.2 illustrates the errors in the context of the evolution of a project. Note that the numbering isn’t sequential. Type 1 and Type 2 errors are broadly used terms with roots in statistics; they were identified long before the others.

Figure 10.2

The seven types of errors.

Source: Lightly adapted from Mark Meckler and Kim Boal, “Decision Errors, Organizational Iatrogenesis and Error of the 7th Kind,” Academy of Management Perspectives, published online October 15, 2018; in press.

Formulating Your Strategic Intent

Digital technologies haven’t yet lived up to the hopes attached to them. I don’t agree with much that venture capitalist Peter Thiel says, but he captured this shortfall eloquently: “We wanted flying cars, but we got 140 characters.”71 More prosaically, while 2 KB of RAM in one spacecraft got humanity to the moon, 2 GB of RAM (a million times more) in each of billions of smartphones worldwide hasn’t produced comparable wonders. Somehow we got sidetracked into friending strangers we wouldn’t recognize if our lives depended on doing so.

The biggest challenge leaders face isn’t technology but their own mindsets. The path of incremental efforts that many CEOs are currently pursuing is the easy one to take since it continues the last two epochs’ focus on productivity. After all, their companies are already set up for this, and it appeals to financial markets. All they have to do is to focus on de-skilling (Principle 1) and upskilling (Principle 2) and automate existing operations.

In 2017, I visited a major global digital technology consultant’s showcase lab for digital technologies. The consulting company did cutting-edge research there and also used it to convince senior client executives to spend tens of millions of dollars on digital transformation. The top featured use? Using virtual reality to speed up the training of equipment maintenance staff who are responsible for expensive capital assets.

The digital epoch demands bold, creative initiatives. For these, Principles 1 (de-skilling) and 2 (upskilling) may be useful, but Principles 3 (distributing work), 4 (cerebral work), and 5 (emergent needs) will often be key. Contrast the prior use of virtual reality technology with its use to give a young female doctor-in-training a visceral understanding of what it feels like to be an old male patient with a debilitating illness (see chapter 8). One can save some money; the other can help transform medicine—and make money. Which company—one speeding up maintenance training or one transforming medicine—would you want to lead?

Aligning your distributed leadership team’s direction and pace of motion enables the flexible addressing of local issues within the context of a larger effort. The discussion of the technologists’ and business executives’ perspectives at the start of this chapter should have suggested its importance: Although digital technologies are transforming long-standing social contracts among people, between institutions and people, and between governments and institutions, the two key groups—and governments, regulators, and NGOs—are behaving like ships passing each other silently at night.

Consider formulating your strategic intent collaboratively with your leadership team and subject matter experts. Why? An adage long associated with Design Thinking is “Nobody is as clever as everybody.” The digital, VUCA world is convoluted, and you may miss key issues that others won’t.

Start with the mindset that everyone needs to adopt with digital technologies. Bold, creative initiatives will require you to replace a productivity mindset with a creative one. Mindsets are easy to ignore because the term is touchy-feely soft. (“OK, but what do I do?”) You will get to the doing, but first take a few moments to think: I wonder if…? I wonder how…? How/what could…?

The mindset change you need to make now is conceptually no different from the mindset changes your predecessors had to make during the last three epochal changes. They went from precision focus to clearance focus, from machine focus to people focus, and from acceptable quality of individual workpieces to time-phased control of production variation. These mindset changes were their biggest hurdles. Don’t underestimate yours.

At the transitions of prior epochs, the new foci didn’t eliminate the old ones, but supplemented them (and yes, pushed them to positions of lesser prominence). So the focus on clearance, for example, didn’t mean micrometers were abandoned; it simply meant micrometers were used where necessary and not by default. Similarly, in the digital epoch, your focus on creativity won’t mean that you don’t have to care about productivity; it does mean that wherever creativity is needed, it—not productivity—should be the default option.

Then consider your ideas for creativity, inventions, or innovations. View each through the lens of each of the Five Assumptions. They will help you shape the response to your “I wonder if…?” “I wonder how…?” and “How/what could…?” questions. What opportunities and issues arise? Where is flexibility needed? By whom? How much? Where must you be in sync? Why?

Finally, use the seven errors to identify where your risks are the greatest. Given the newness of many digital technologies and the intricacies of the VUCA world, pursuing bold goals will almost inevitably expose you to greater risks than automating existing tasks. Ask about them: “Is this risk worth taking?” For most, the answer will probably be yes. Then ask, “How can we protect against it?”


An executive officer once told my class, “Given the choice between doing something small and doing something big, pick the big; it will only take a little bit more effort.” Good words to keep in mind.

Notes

  1. James Manyika, Susan Lund, Michael Chui, Jacques Bughin, Jonathan Woetzel, Parul Batra, Ryan Ko, and Saurabh Sanghvi, “Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation,” McKinsey Global Institute, December 2017, https://www.mckinsey.com/~/media/mckinsey/featured%20insights/Future%20of%20Organizations/What%20the%20future%20of%20work%20will%20mean%20for%20jobs%20skills%20and%20wages/MGI-Jobs-Lost-Jobs-Gained-Report-December-6-2017.ashx.

  2. Paul Davidson, “More High Schools Teach Manufacturing Skills,” USA Today, November 12, 2014, https://www.usatoday.com/story/money/business/2014/11/12/high-schools-teach-manufacturing-skills/17805483.

  3. Nicholas Wyman, “Why We Desperately Need to Bring Back Vocational Training in Schools,” Forbes, September 1, 2015, https://www.forbes.com/sites/nicholaswyman/2015/09/01/why-we-desperately-need-to-bring-back-vocational-training-in-schools/#7e9e986887ad.

  4. Clifford Krauss, “Texas Oil Fields Rebound from Price Lull, but Jobs Are Left Behind,” New York Times, February 19, 2017, https://www.nytimes.com/2017/02/19/business/energy-environment/oil-jobs-technology.html.

  5. Ramchandran Jaikumar, “From Filing and Fitting to Flexible Manufacturing: A Study in the Evolution of Process Control,” Foundations and Trends(R) in Technology, Information and Operations Management 1, no. 1 (2005): 1–120, https://ideas.repec.org/a/now/fnttom/0200000001.html.

  6. Manyika et al., “Jobs Lost, Jobs Gained.”

  7. Dave Eggers, The Circle (New York: Vintage Books, 2014).

  8. Mike Juang, “A New Kind of Auto Insurance Technology Can Lead to Lower Premiums, but It Tracks Your Every Move,” CNBC, October 5, 2018, https://www.cnbc.com/2018/10/05/new-kind-of-auto-insurance-can-be-cheaper-but-tracks-your-every-move.html.

  9. Sapna Maheshwari, “How Smart TVs in Millions of US Homes Track More Than What’s on Tonight,” New York Times, July 5, 2018, https://www.nytimes.com/2018/07/05/business/media/tv-viewer-tracking.html.

  10. Bidhan Parmar and Edward Freeman, “Ethics and the Algorithm,” Sloan Management Review, Fall 2016.

  11. George Westerman, “Why Digital Transformation Needs a Heart,” Sloan Management Review, Fall 2016.

  12. Greg Nichols, “Workers Don’t Fear Automation (Because They Don’t Understand It),” ZDNet, December 6, 2017, https://www.zdnet.com/article/workers-dont-fear-automation-because-they-dont-understand-it.

  13. Brent Clark, Christopher Robert, and Stephen Hampton, “The Technology Effect: How Perceptions of Technology Drive Excessive Optimism,” Journal of Business and Psychology 31, no. 1 (2016): 87–102.

  14. Kimberly D. Elsbach and Ileana Stigliani, “New Information Technology and Implicit Bias,” Academy of Management Perspectives 33, no. 2 (May 1, 2019): 185–206.

  15. Robert Lowe and Arvids Ziedonis, “Overoptimism and the Performance of Entrepreneurial Firms,” Management Science 52, no. 2 (2006): 173–186.

  16. Vilhelm Carlström, “This Finnish Company Just Made an AI Part of the Management Team,” Business Insider Nordic, October 17, 2016, https://nordic.businessinsider.com/this-finnish-company-just-made-an-ai-part-of-the-management-team-2016-10.

  17. Roberto, Unlocking Creativity.

  18. Tobias Baer and Vishnu Kamalnath, “Controlling Machine-Learning Algorithms and Their Biases,” McKinsey Quarterly, November 2017, https://www.mckinsey.com/business-functions/risk/our-insights/controlling-machine-learning-algorithms-and-their-biases.

  19. Maria Korolov, “AI’s Biggest Risk Factor: Data Gone Wrong,” CIO Magazine, February 13, 2018, https://www.cio.com/article/3254693/artificial-intelligence/ais-biggest-risk-factor-data-gone-wrong.html.

  20. Christopher Heine, “Microsoft’s Chatbot ‘Tay’ Just Went on a Racist, Misogynistic, Anti-Semitic Tirade,” AdWeek, March 24, 2016, https://www.adweek.com/digital/microsofts-chatbot-tay-just-went-racist-misogynistic-anti-semitic-tirade-170400.

  21. Natasha Singer, “Amazon’s Facial Recognition Wrongly Identifies 28 Lawmakers, A.C.L.U. Says,” New York Times, July 26, 2018, https://www.nytimes.com/2018/07/26/technology/amazon-aclu-facial-recognition-congress.html.

  22. Joy Buolamwini, “How I’m Fighting Bias in Algorithms,” TEDxBeacon Street, https://www.ted.com/speakers/joy_buolamwini.

  23. “For Artificial Intelligence to Thrive, It Must Explain Itself,” Economist, February 15, 2018, https://www.economist.com/science-and-technology/2018/02/15/for-artificial-intelligence-to-thrive-it-must-explain-itself.

  24. Haslina Ali and Rubén Mancha, “Coming to Grips with Dangerous Algorithms,” Sloan Management Review, Fall 2016.

  25. Joy Buolamwini, “How I’m Fighting Bias in Algorithms,” TED Talk, updated March 29, 2017, https://www.youtube.com/watch?v=UG_X_7g63rY.

  26. Parmar and Freeman, “Ethics and the Algorithm.”

  27. Reid Hoffman, “Using Artificial Intelligence to Set Information Free,” Sloan Management Review, Fall 2016.

  28. OpenAI, “Better Language Models and Their Implications,” February 14, 2019, https://openai.com/blog/better-language-models/#sample8.

  29. Rachel Metz, “This AI Is So Good at Writing That Its Creators Won’t Let You Use It,” CNN Business, February 18, 2019, https://www.cnn.com/2019/02/18/tech/dangerous-ai-text-generator/index.html.

  30. Gina Kolata, Sui-Lee Wee, and Pam Belluck, “Chinese Scientist Claims to Use Crispr to Make First Genetically Edited Babies,” New York Times, November 26, 2018, https://www.nytimes.com/2018/11/26/health/gene-editing-babies-china.html.

  31. “Can We Trust Ourselves When It Comes to Gene Editing?,” Bayer AG, November 15, 2018, https://www.canwelivebetter.bayer.com/innovation/can-we-trust-ourselves-when-it-comes-gene-editing?ds_rl=1259492&gclid=EAIaIQobChMIyuWWuZrQ4AIViq_ICh1DKAYKEAMYASAAEgI16vD_BwE&gclsrc=aw.ds.

  32. Arthur C. Clarke, “Hazards of Prophecy: The Failure of Imagination,” in Profiles of the Future: An Enquiry into the Limits of the Possible (New York: Harper and Row, 1973 [originally published in 1962]).

  33. Abby Abazorius, “How Data Can Change the World,” MIT News, September 26, 2016, http://news.mit.edu/2016/IDSS-celebration-big-data-change-world-0926.

  34. Ian Chipman, “How Data Analytics Is Going to Transform All Industries,” Stanford Engineering Research & Ideas, February 23, 2016, https://engineering.stanford.edu/magazine/article/how-data-analytics-going-transform-all-industries.

  35. “Big Data: How Data Analytics Is Transforming the World,” The Great Courses, 2014, https://guidebookstgc.snagfilms.com/1382_DataAnalytics.pdf.

  36. “How Is Big Data Going to Change the World?,” World Economic Forum, updated December 1, 2015, https://www.weforum.org/agenda/2015/12/how-is-big-data-going-to-change-the-world.

  37. Nicolaus Henke, Jacques Bughin, Michael Chui, James Manyika, Tamim Saleh, Bill Wiseman, and Guru Sethupathy, “The Age of Analytics: Competing in a Data-Driven World,” December 2016, https://www.mckinsey.com/~/media/McKinsey/Business%20Functions/McKinsey%20Analytics/Our%20Insights/The%20age%20of%20analytics%20Competing%20in%20a%20data%20driven%20world/MGI-The-Age-of-Analytics-Full-report.ashx.

  38. Herman Heyns and Chris Mazzei, “Becoming an Analytics-Driven Organization to Create Value,” 2015, https://www.ey.com/Publication/vwLUAssets/EY-global-becoming-an-analytics-driven-organization/%24FILE/ey-global-becoming-an-analytics-driven-organization.pdf.

  39. Maria Korolov, “AI’s Biggest Risk Factor.”

  40. Robert Austin, “Unleashing Creativity with Digital Technology,” Sloan Management Review, Fall 2016.

  41. Arthur Jago, “Algorithms and Authenticity,” Academy of Management Discoveries 5, no. 1 (March 26, 2019): 38–56.

  42. Kate Darling, “‘Who’s Johnny?’ Anthropomorphic Framing in Human-Robot Interaction, Integration, and Policy,” in Robot Ethics 2.0, ed. P. Lin, G. Bekey, K. Abney, and R. Jenkins (New York: Oxford University Press, 2017).

  43. Kate Darling, “Why We Have an Emotional Connection to Robots,” TED Salon Talks, updated September 2018, https://www.ted.com/talks/kate_darling_why_we_have_an_emotional_connection_to_robots#t-699288.

  44. Leila Takayama, “What’s It Like to Be a Robot?,” TEDx Palo Alto, updated April 2017, https://www.ted.com/talks/leila_takayama_what_s_it_like_to_be_a_robot.

  45. “Driver Deactivation Policy,” Uber, updated May 29, 2019, https://help.uber.com/partners/article/driver-deactivation-policy?nodeId=ada3b961-e3c2-48e6-ac3f-2db5936e37a9.

  46. Samantha Allen, “The Mysterious Way Uber Bans Drivers,” The Daily Beast, January 27, 2015, https://www.thedailybeast.com/the-mysterious-way-uber-bans-drivers.

  47. John Koetsier, “Uber Might Be the First AI-First Company, Which Is Why They ‘Don’t Even Think about It Anymore,’” Forbes, August 22, 2018, https://www.forbes.com/sites/johnkoetsier/2018/08/22/uber-might-be-the-first-ai-first-company-which-is-why-they-dont-even-think-about-it-anymore/#5ca511e35b62.

  48. Colin Lecher, “How Amazon Automatically Tracks and Fires Warehouse Workers for ‘Productivity,’” The Verge, April 25, 2019, https://www.theverge.com/2019/4/25/18516004/amazon-warehouse-fulfillment-centers-productivity-firing-terminations.

  49. Jago, “Algorithms and Authenticity.”

  50. Jürgen Brandstetter, Péter Rácz, Clay Beckner, Eduardo B. Sandoval, Jennifer Hay, and Christoph Bartneck, “A Peer Pressure Experiment: Recreation of the Asch Conformity Experiment with Robots,” 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, September 2014.

  51. Kate Darling, “Extending Legal Protection to Social Robots,” IEEE Spectrum, September 10, 2012, https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/extending-legal-protection-to-social-robots.

  52. Elsbach and Stigliani, “New Information Technology and Implicit Bias.”

  53. Amy Edmondson, “Strategies for Learning from Failure,” Harvard Business Review, April 2011.

  54. Mark Meckler and Kim Boal, “Decision Errors, Organizational Iatrogenesis and Error of the 7th Kind,” Academy of Management Perspectives, published online October 15, 2018; in press.

  55. Hope Reese, “Why Microsoft’s ‘Tay’ AI Bot Went Wrong,” TechRepublic, March 24, 2016, https://www.techrepublic.com/article/why-microsofts-tay-ai-bot-went-wrong.

  56. Design Council, “Design Methods for Developing Services,” https://www.designcouncil.org.uk/resources/guide/design-methods-developing-services.

  57. Peter Bright, “Tay, the Neo-Nazi Millennial Chatbot, Gets Autopsied,” Ars Technica, March 25, 2016, https://arstechnica.com/information-technology/2016/03/tay-the-neo-nazi-millennial-chatbot-gets-autopsied.

  58. Robert Waterman and Tom Peters, In Search of Excellence (New York: Harper & Row, 1982).

  59. Caroline Nyce, “The Winter Getaway That Turned the Software World Upside Down,” The Atlantic, December 8, 2017, https://www.theatlantic.com/technology/archive/2017/12/agile-manifesto-a-history/547715.

  60. Martin Fowler, “Writing the Agile Manifesto,” July 9, 2006, https://martinfowler.com/articles/agileStory.html.

  61. Dominic Gates, “Flawed Analysis, Failed Oversight: How Boeing, FAA Certified the Suspect 737 MAX Flight Control System,” Seattle Times, March 17, 2019, https://www.seattletimes.com/business/boeing-aerospace/failed-certification-faa-missed-safety-issues-in-the-737-max-system-implicated-in-the-lion-air-crash.

  62. Matt Stieb, “Report: Self-Regulation of Boeing 737 MAX May Have Led to Major Flaws in Flight Control System,” New York Magazine Intelligencer, March 17, 2019, https://nymag.com/intelligencer/2019/03/report-the-regulatory-failures-of-the-boeing-737-max.html.

  63. Andrew Tangel, Andy Pasztor, and Mark Maremont, “The Four-Second Catastrophe: How Boeing Doomed the 737 MAX,” Wall Street Journal, August 16, 2019.

  64. Tangel, Pasztor, and Maremont, “The Four-Second Catastrophe.”

  65. Gates, “Flawed Analysis, Failed Oversight.”

  66. Tangel, Pasztor, and Maremont, “The Four-Second Catastrophe.”

  67. Tangel, Pasztor, and Maremont, “The Four-Second Catastrophe.”

  68. Tangel, Pasztor, and Maremont, “The Four-Second Catastrophe.”

  69. Amit Mukherjee, “The Case against Agility,” Sloan Management Review, September 26, 2017.

  70. Bohmer, Edmondson, and Roberto, “Columbia’s Final Mission.” Case.

  71. Daniel Weisfield, “Peter Thiel at Yale: We Wanted Flying Cars, but We Got 140 Characters,” Yale School of Management, April 27, 2013, https://som.yale.edu/blog/peter-thiel-at-yale-we-wanted-flying-cars-instead-we-got-140-characters.