7 The Way Forward: Mission-Critical Advice

Nothing is so painful to the human mind as a great and sudden change.

—Mary Shelley, Frankenstein

When Ed was growing up in Brooklyn, his school would take class trips to the beach every year. Before he learned to swim, he found the power of the ocean threatening, and he dreaded going on these trips. But one thing that stuck with him from those experiences was practical advice about what to do in the event of a riptide: don’t fight it. Swim parallel to the shoreline until you’re outside its grip. Decades later, this lesson came to mind when he found himself caught in a riptide in the Hamptons. After an initial wave of panic, he let the water carry him where it wanted rather than trying to resist its force. He wound up on a beach a mile from where he’d started, exhausted and grateful for the knowledge that had saved his life.

Many of the project professionals we have taught or consulted with in recent years can relate to the metaphor of swimming in a riptide. We are living in a time characterized by radical uncertainty, which John Kay and Mervyn King define as “the vast range of possibilities that lie in between the world of unlikely events and the world of the unimaginable.”1 We write this in late 2020 amid a global pandemic that has upended the status quo ante in unanticipated ways. The pandemic is only the latest and greatest reminder that radical uncertainty requires new ways of thinking.

Turbulence and Risk

In a world of radical uncertainty, the biggest risks to projects are social, political, and economic. Events ranging from Brexit to COVID-19 have exposed more than the vulnerability of global supply chains—they have illuminated cracks along fault lines where tectonic plates collide. Projects that demand the kind of sustained international cooperation that was necessary to design, build, and operate the International Space Station are harder to establish when forces such as nationalism strain relations among countries that both compete and collaborate. The pandemic-induced shift to remote and virtual work has vast implications for the way project teams work and learn together. The future of commercial real estate construction in the world’s major cities is now an open question. These are just a few examples of the political, social, and economic risks on the horizon of the project world today. The only sure thing is that there will be others tomorrow.

Technology contributes to radical uncertainty because the velocity of change makes it nearly impossible to predict which trends will accelerate most rapidly. The unforeseen move to remote and virtual work in 2020 created an overnight demand for secure, reliable communication tools at a scale that was previously unimaginable. Based on recent trends in project work, it is reasonable to expect that artificial intelligence (AI) and machine learning will be employed for increasingly complex pattern recognition tasks, just as robots will continue to take over work that is physically demanding, dangerous, or repetitive. These technologies introduce new risks and raise ethical and epistemic concerns that need to be addressed through a humanistic lens.

We are not futurologists, and the mercurial nature of social and political risks makes us leery of predictions about the state of the world in two or five years. (Consider how many political experts discounted both Trump and Biden in the early stages of the campaigns that ultimately carried them to victory.2) But we are confident that a landscape contoured by radical uncertainty has significant implications for knowledge and leadership.

Knowledge

One result of radical uncertainty is epistemic uncertainty. All knowledge is temporal. A physicist working at the turn of the twentieth century could scarcely have imagined that the laws of classical mechanics would not hold at the quantum level. What works today may not work tomorrow, and this is particularly true as the velocity of change increases.

The need for speed. The relationship between the rate of change and the speed of thought required to respond suggests a rule of thumb about heuristics and cognitive biases: the faster the pace of decision-making, the more tempting it is to lean on mental shortcuts that reduce friction but introduce predictable errors. Information that is easily recalled isn’t always the most relevant. A story that reduces the complexity of an issue may lead to simplistic conclusions. The usual suspects should not be expected to think differently than they always do. The only answer to the trap of fast thinking is slow thinking: deliberation, conversation, and reflection. A decision maker who doesn’t pause to ask questions about biases, risks, or misinformation is a dead man walking.

Stopping the clock is not always possible, but our experience is that too often the perceived need for speed shapes the decision-making reality. In 2003, a contractor team working six days a week on a weather satellite project for NASA neglected to secure the satellite to a cart with the proper number of bolts before moving it, resulting in a mishap that cost $135 million to repair.3 The project was not scheduled to launch for years—there was no reason the team needed to be working on a weekend. Speed had become a goal unto itself, leading to a failure to ask the obvious question (“Are we doing things right?”), to say nothing of the one that a learning organization routinely asks (“Are we doing the right things?”). Radical uncertainty only increases the importance of making time for the latter question.

Learning and unlearning. Uncertainty about the knowledge that will be needed in the future means that continuous learning is now a sink-or-swim proposition for project professionals at all career levels. Digital transformation of processes, business models, and product lines demands increased capacity for managing change in addition to technical skill. AI won’t replace managers, but managers who use AI will replace those who don’t.4 While skill gaps need to be addressed, the human dynamics of digital transformation require something more difficult: unlearning what has worked in the past in order to enable experimentation that can lead to breakthroughs.5

But keeping up with technology is simply the price of admission. The longer-term challenge is to read, listen, and gain exposure to a wide range of topics in order to think broadly and holistically. Just as STEM (science, technology, engineering, and mathematics) education has moved in the direction of STEAM (the A stands for arts), the value of multidisciplinary learning throughout a career has begun to receive due recognition. This wider lens enables practitioners to avoid relying on old patterns of thought that are no longer useful.6

If this sounds like yet another angst-inducing item for the to-do list, there is good news on that front as well: learning does more than just make us smarter. Current research on coping with the pressures of work suggests that learning reduces feelings of distress and anxiety and is a powerful tool for building resilience. The creation of new knowledge prepares us for dealing with challenges, threats, and change.7

Teams and organizations have the same need to refresh their knowledge. As projects continue to increase in complexity, inclusion can serve as a deliberate strategy for seeking out new ideas. The Netflix Challenge mentioned in chapter 5 is an example of casting the widest possible net, but inclusion can start closer to home. Psychological safety enables team members to share ideas freely without risk of recrimination. As a team bonds and finds its working rhythm, however, the importance of cohesion has to be balanced against the risk of creating a culture of exclusion that communicates the message that outside ideas are neither needed nor welcome. As NASA discovered with the Challenger accident, that road leads to hubris and failure.

Judgment and ethics. A machine can run numbers faster than any human, but it cannot ask or answer questions like “Do these numbers look right?” or “Are these the right numbers?” The answers to those questions require judgment. Judgment encompasses the ability to understand context, separate signal from noise, weigh ethical considerations, and exercise emotional intelligence and sensitivity.8 These are all critical abilities, yet very little education or training focuses on how to develop and exercise them.

As artificial intelligence and machine learning become more deeply integrated into project work, they bring risks that are still emerging. AI can create increasingly sophisticated fakes of everything from human images to news stories. Machine learning can generate biased outcomes as a result of bias-ridden data, faulty algorithms, or a combination of the two. The work of sorting through these and other unforeseen challenges will fall to people who will need to be trained as thoroughly in ethics as in technology. New knowledge will be needed to develop more discerning judgment when assessing the quality and value of work produced by machines.
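To make the data-bias point concrete, consider a minimal sketch in Python. The records, group labels, and numbers below are all invented for illustration; the point is simply that a model trained on skewed history will faithfully reproduce the skew.

```python
# A toy illustration of bias-ridden data producing biased outcomes.
# All records and numbers below are invented for this example.
from collections import defaultdict

# Hypothetical hiring history: (group, qualified, hired).
# Past decisions favored group "A" among equally qualified candidates.
history = (
    [("A", True, True)] * 80 + [("A", True, False)] * 20 +
    [("B", True, True)] * 40 + [("B", True, False)] * 60
)

# "Train" the simplest possible model: for each (group, qualified)
# combination, predict the majority outcome observed in the history.
tallies = defaultdict(lambda: [0, 0])  # (group, qualified) -> [hired, rejected]
for group, qualified, hired in history:
    tallies[(group, qualified)][0 if hired else 1] += 1

def predict(group, qualified):
    hired, rejected = tallies[(group, qualified)]
    return hired >= rejected

# Equally qualified candidates receive different predictions based on
# group membership alone, because that is what the data taught.
print(predict("A", True))  # True
print(predict("B", True))  # False
```

Nothing in this sketch is malicious, and nothing is broken in a narrow technical sense: the model learns exactly the history it is given. That is precisely why human judgment about where the data came from, and what questions it can legitimately answer, cannot be delegated to the machine.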

As we write this, Google has established a low-cost certification for information technology professionals that provides highly transferable skills in some of the most common computer languages in use today.9 At first blush, this sounds like a wonderful gift that can help to level the playing field in a deeply unequal society. But it quickly raises questions about the educational priorities of a tech firm with a market capitalization of $1 trillion. What aspects of ethics and social responsibility will be taught? Where will judgment and contextual thinking fit into the curriculum? How will a workforce trained by and for a private sector firm that owns some of the most powerful algorithms in the world learn to make decisions that have consequences for billions of people?

Leadership

How does one lead in a landscape of radical uncertainty? Leaders are expected to be able to define reality and mobilize resources.10 The difficulty begins with the first part of that proposition: reality is temporal and fragmented.

While the lure of technology is strong, attempts to build a higher-fidelity dashboard representation of reality are the equivalent of trying to capture lightning in a bottle. Access to critical real-time data is essential, but it is a category error to mistake curated data for reality. Leading organizations through radical uncertainty is not the same as piloting an aircraft through foul weather. It’s not possible to rely on instruments to hit the middle of the runway 99.9 percent of the time.

There is an alternative to trying to define reality through technology: abandon the control paradigm. Hire people you can trust and give them the decision authority to do their jobs at the local level. Provide the learning and knowledge infrastructure they need, and focus on intangibles such as teamwork, collaboration, and culture. Let them sense and respond to the various realities they encounter. Communicate transparently and hold them accountable for results.

Radical uncertainty does not mean there is no role for strategy. One part of defining reality that is reserved for leaders is grappling with political and social risks that may be unevenly distributed across geographies. In the United States today, for instance, Section 230 of the Communications Decency Act of 1996 is a political hot potato for some of the world’s largest tech companies. Section 230 currently protects companies that own social media platforms from legal liability for the content posted by users on those sites, even if the information posted is misleading or false. A change to this would have potentially enormous consequences for the US operations of YouTube, Facebook, Twitter, and other social media companies. Now consider how the same challenges might play out in a hundred or more countries, each with its own legal and regulatory framework, and the need for a strategic perspective becomes evident. Similarly, the pandemic has laid bare the risk that worker health poses to global supply chains. Understanding and navigating risks like these can only be done at the macro level.

Though the responsibility for exercising judgment with regard to technology is shared by many, ultimate accountability rests with leaders. This much seems clear: problems with technical solutions will increasingly be solved by machines. The ethical and epistemic questions will only get harder as technology proliferates, and bedrock questions such as “Are we doing the right things?” will become more important. AI and machine learning will continue to tackle tasks that were once thought to be the exclusive province of technical experts (just ask any radiologist), but the social problems inherent in teamwork, collaboration, and organizational culture will always come down to people.

If we have emphasized one theme throughout this book, it is that focusing on the human dimension of project work offers the greatest potential for return on investment to an organization, its stakeholders, and society. As we said in the introduction, projects run on knowledge that can be technical, organizational, or political. Teams function within organizations that empower or constrain them through a combination of bureaucratic means such as governance and intangibles such as culture and a shared sense of mission and purpose. They explore, fail, improvise, and maneuver in response to challenges they didn’t or couldn’t anticipate, and as a result they learn the only way they can: together. The starting point for knowledge is not information. It is people.

Notes

  1. John Kay and Mervyn King, Radical Uncertainty: Decision-Making Beyond the Numbers (New York: W. W. Norton, 2020), 14.

  2. Philip Tetlock’s groundbreaking book Expert Political Judgment: How Good Is It? How Can We Know? (Princeton, NJ: Princeton University Press, 2005) and subsequent studies have demonstrated the paucity of accuracy in political predictions by “experts.” His findings could also be applied to economics (how many analysts failed to anticipate the Great Recession?) and management.

  3. See National Aeronautics and Space Administration, NOAA N-Prime Mishap Investigation: Final Report, NASA, September 13, 2004, https://www.nasa.gov/pdf/65776main_noaa_np_mishap.pdf; and Jason Bates, “Lockheed Martin Profits to Pay for NOAA N-Prime Repairs,” Space, October 11, 2004, https://www.space.com/417-lockheed-martin-profits-pay-noaa-prime-repairs.html.

  4. We are grateful to our colleague Yahiro Takegami of IBM Japan for this insight.

  5. Barry O’Reilly, Unlearn: Let Go of Past Success to Achieve Extraordinary Results (New York: McGraw-Hill, 2019).

  6. David Epstein, Range: How Generalists Triumph in a Specialized World (New York: Riverhead Books, 2019), 34.

  7. Chen Zhang, David M. Mayer, and Eunbit Hwang, “More Is Less: Learning but Not Relaxing Buffers Deviance under Job Stressors,” Journal of Applied Psychology 103, no. 2 (February 2018): 123–136, https://doi.org/10.1037/apl0000264.

  8. This formulation draws from the definition used by Brian Cantwell Smith in The Promise of Artificial Intelligence: Reckoning and Judgment (Cambridge, MA: MIT Press, 2019), xv.

  9. Lilah Burke, “Google Releases New IT Certificate,” Inside Higher Ed, January 17, 2020, https://www.insidehighered.com/quicktakes/2020/01/17/google-releases-new-it-certificate.

  10. Noel Tichy, former head of GE’s leadership academy, offered this definition of leadership in Noel M. Tichy with Eli Cohen, The Leadership Engine: How Winning Companies Build Leaders at Every Level (New York: HarperCollins, 1997).