9

Unlearning with People and Organizations

Our ability to open the future will depend not on how well we learn anymore but on how well we are able to unlearn.
—Alan Kay

On January 28, 1986, the space shuttle Challenger—one of five shuttles built by NASA—was set to lift off for its relatively short flight into orbit around the earth. The space shuttle program had been phenomenally successful, and Challenger had logged nine previous missions—taking its crew of seven astronauts into low-earth orbit to conduct experiments, deploy and service satellites, and gather scientific data. Previous Challenger missions had led to a number of firsts, including the first spacewalk during a space shuttle mission, the first American woman in space (Sally Ride), the first shuttle night launch, and more.

By the tenth Challenger mission, launches and landings—and the time in between, generally about a week—had become routine, even boring. Television networks used to interrupt their scheduled programming to cover Mercury, Gemini, and Apollo launches and landings, each of which pushed humanity’s knowledge thresholds farther and had an undeniable edge of danger. However, after the first shuttle orbital test flight in 1981, the American public turned its attention elsewhere, and—except for a fledgling CNN—launches and landings were no longer broadcast live.

So when Challenger exploded and broke apart just 73 seconds after launch from Florida’s Kennedy Space Center that cold winter morning—killing seven crew members in the process (including much-beloved school teacher Christa McAuliffe)—the event was a brutal reminder to the American people, and to the world, of the still very real dangers of space travel.

Just as had happened immediately after previous NASA catastrophic disasters—including Apollo 1 (where three astronauts were consumed by fire on the launch pad during a preflight test) and Apollo 13 (where a moon landing had to be aborted after the explosion of an onboard oxygen tank)—there were reviews and inquiries and commissions and self-assessments. The manned space program was put on hold until questions could be answered, practices examined, and systems improved, but most NASA veterans chalked up the Challenger disaster to a freak accident—an act of God that would not be repeated. In reality, engineers down the chain of command had warned against the launch, citing the low ambient temperatures, but their concerns were dismissed and overridden by managers up the chain. The explosion of Challenger touched a very deep nerve in the American people and in NASA’s employees, and it was apparent that something had to change.

Breakthroughs or Breaking Points?

Throughout the late 1950s and the 1960s, NASA experienced a long series of remarkable technical breakthroughs in its human spaceflight program, culminating in the first moon landing in 1969 by the crew of Apollo 11. This tremendous surge in technology was in direct response to the challenge President Kennedy issued in 1961 to put a man on the moon by the end of the decade. Success followed success as NASA moved from Project Mercury (which successfully put the first American, John Glenn, into orbit around the earth) to Project Gemini (a larger spacecraft that accommodated two astronauts) and on to Project Apollo (with its three astronauts and lunar module, the vehicle that would take the astronauts to the surface of the moon and then back to the orbiting Apollo spacecraft).

But as with any complex system or program that finds great success, it works—until suddenly, it doesn’t. When work becomes routine and guards are let down, complacency sneaks in and performance suffers. That’s exactly when the level of concern should be raised.

To succeed in the long run, organizations and the people within them must constantly be stimulated, and the capability of the organizational system of work continuously improved. In short, if the system appears not to be broken, that’s all the more reason to fix it—before it becomes a problem and not after.

Indeed, as NASA was having tremendous success with its manned space program, the organization was sowing the seeds of failure, creating false intellectual superiority and impermeable towers of information. In time, these towers turned into silos, and they stopped information from moving across the organization. This became a very real problem for NASA, and it led directly to catastrophic failure with a resulting loss of life.

When your actions are achieving success and moving in a positive direction, you gain more confidence to take greater risks. In the case of Columbia’s launch on January 16, 2003, the program managers were aware that insulating foam would routinely break loose from the surface of the shuttle’s external fuel tank, but because these errant pieces of foam had never caused an issue, it was a blind spot for them—they thought it was okay. They even gave it a technical term, “foam shedding,” which served to further (as Diane Vaughan, sociologist and professor at Columbia University, defined) “normalize the deviance”* of this routine event within the shuttle team.1

That is, until 82 seconds after launch, when a piece of foam the size of a briefcase, weighing 1.67 pounds, broke loose and struck Columbia’s left wing—piercing the leading edge and causing damage that led the shuttle to break apart upon reentry into the atmosphere, killing all seven crew members. Video of the shuttle launch reviewed by NASA the next day clearly showed that the wing had been hit by a large chunk of foam.

Organizations of all sorts suffer from the normalization of deviance, until some catastrophe wakes everyone up. NASA employees turned a blind eye to ongoing deviations because the organization was able to complete its missions despite a few mishaps—until the universe hit back and catastrophic failure occurred.

In this chapter, we dig deep into unlearning with people and organizations, focusing particularly on a case study from NASA and the work it has done to learn lessons from some very public tragedies, while creating a culture of transparency and safety for mistakes, as it seeks the breakthroughs required to build our future.

NASA Learns to Unlearn

A NASA veteran of three decades, Dr. Ed Hoffman served as the agency’s first chief knowledge officer for six years, until 2016. In this role, Ed was responsible for establishing a formal, integrated, and effective knowledge management program (with governance connected to NASA project management policy), a community of 15 formally assigned knowledge professionals, and a knowledge map of services and products. Ed is a mentor and collaborator of mine through our shared passion for learning and unlearning.

In his early years with the agency, Ed knew the importance of organizational learning, but the transformation was not an easy one for NASA. Says Ed:

The center point of what I’ve done in my career is this question: How do you get into a place where people are comfortable to take time to unlearn, to change what they’re doing, to look at adaptive or different approaches, to innovate—and to move forward? I believe the biggest challenge with organizations is really the starting point of the comfort with unlearning. NASA was great at learning, but we usually had to fail really big before it became important enough for us to unlearn and then move forward.

It took catastrophic failure—the loss of space shuttle Columbia in 2003—for the agency to get serious about unlearning its existing, dysfunctional learning culture. Previous disasters (including Apollo 1 and Challenger) were shocks to the collective consciousness of NASA, and people understood that if different outcomes were to be achieved, they would need to take a different course of action.

The reality, however, was that many within NASA saw Challenger as a freak accident that would never again be repeated. They didn’t think they needed to unlearn anything; NASA had a problem even using the word “unlearn” within the organization.

Key people in NASA struggled to unlearn the behaviors that had brought them so much success. They failed to let go of past success to innovate and build the future. On top of all that, after Challenger, NASA carried out a long string of missions without failure, doing what they had always done. This served as further evidence to many in the organization that Challenger was a freak accident, and the wise and prudent choice was to maintain the status quo, sticking with the behaviors that had always worked in the past.

Organizational transformation is the result of collective individual transformation; it is continuous, not a one-time event. As such, it must be active, adaptive, and ongoing. We must continuously seek to unlearn the behaviors and thinking that are holding us back, then relearn and apply new methods to achieve the breakthroughs that lead to extraordinary results. Waiting for failure events to prompt action is itself a failure to unlearn and to innovate your organization’s system of learning.

Learning organizations are not only about transforming workers, a common misconception on the part of many leaders. Leaders and workers must both transform together. To achieve organizational or systemic impact, leaders and trainees must be trained together and act in sync with one another. In a post-Challenger NASA, this was definitely not the case.

But all this changed after the Columbia disaster. Even those who believed that Challenger was a one-time act of God realized that NASA’s problems were systemic and had to be addressed by everyone in the organization—together. So Ed applied on the ground the same problem-solving approaches the agency used in space. He worked within NASA to build systems of learning to inform decision making, doing the research and interviewing people across the entire organization to better understand which behaviors led to success and which led to failure. He encouraged people to share their stories with his knowledge team—and with one another. Mistakes and mishaps were occurring regularly, but they were not openly talked about. Smart people like to be correct—they are used to being correct. Talking about failure wasn’t a cultural or behavioral norm at NASA.

Ed and his team built a new system of learning based on the inputs they gathered from every corner of the organization, at every level. This system identified competencies and then trained people, giving them the tools and opportunities they needed to use new behaviors.

After Challenger, NASA struggled as leaders found it difficult to unlearn the behaviors that made them successful. But eventually, they began to understand what they needed to unlearn and relearn to achieve the breakthroughs they wanted. After Columbia, NASA’s leaders were trained and given authority, and they deployed the new system to full effect. When employees came to senior leaders with ideas for change, the leaders would tell them, “Go out there and do it.” This resulted in new written policies and guidelines—written by employees for employees—which led to better outcomes, including the two described next.

The first was alignment and engagement. When initial policies were drawn up and distributed to the workforce, the natural reaction was to be skeptical and reject the changes: “They don’t do projects in Washington. What do they know about my work?” When policy, procedures, and standards were set by a community of experienced practitioners, the whole dialogue shifted. NASA unlearned centralized policy creation and relearned to have experienced practitioners from the field draft all standards and policies, since they have the validity of practice. This also helps engagement, because the community will protect its own. Ed and the practitioners did this by making sure that the larger community had access to communication, conversation, and input. A policy may not be optimal (it rarely is in initial drafts), but it carries the wisdom of the knowledge community it impacts.

The second outcome involved learning and competencies. Once NASA had practitioners developing policies, Ed and his team could bring them into the NASA Academy as teachers to train and communicate the competencies. This is vital because being taught by a valued practitioner raises the value of the learning. For example, NASA was having significant problems with orbital debris. (Space junk is a massive problem.) Ed could not find industry or academic expertise to design a course, so he worked with a NASA expert on orbital debris who was also an international authority, and together they designed a course for aerospace professionals. The course then became the basis for policy. This was not unusual. Often the starting point would be to learn about the problem, prepare learning materials to present to students at NASA, listen to and collect their feedback, unlearn what was not working with the course, and then redesign a stronger one.

The Key Components to Relearn Systems of Learning

Creating continuous organizational transformation can be quite a challenge, but as was ultimately the case for NASA, it can be done—and done well—if you design for it. When I work with clients, designing and deploying their system of learning is the key to scaling sustainable learning and unlearning throughout their organization. The small steps I use to get started are as follows:

•   First, understand the system by gathering data and interviewing people from across the organization, at all levels.

•   Identify the competencies that lead to successful outcomes, and those that do not.

•   Design a system of work that allows the desired behaviors to happen, and socialize it with your community for communication, conversation, and input.

•   Think big but start small. Don’t try to deploy all the new behaviors at once. Identify the one you feel can have the most impact and start there.

•   Make the new behavior really easy to do, then provide a small group of people with new tools and training in the new behaviors.

•   Give people designed opportunities to deliberately practice new behaviors but start small, such as sharing information with one another (good and bad) to show evidence, and quickly make them feel successful using the new behaviors, thus reducing learning anxiety and increasing psychological safety.

•   The system being designed needs to be tested by the people who will use it (its customers) throughout the design process. Once you have designed, tested, and deployed the new system, get more and more people to use it and generate results. Take the results and use them to improve your system as you scale.

•   Constantly stimulate the system of learning so that organizational complacency and intellectual arrogance do not take hold. Give people designed, safe-to-fail opportunities to remind them of what happens when failures occur, such as simulated system failures or exercises (piquing survival anxiety).

•   Eventually, the organization gets to the point where the new system becomes widely adopted, and the organization and the people in it start recognizing what it needs to learn and what it needs to unlearn.

When working with global organizations to transform, I constantly have to remind leaders that scaling yourself does not work; scaling your lessons learned does. People remember stories and take inspiration from the experiences of others. Start by encouraging peers, colleagues, and teams to share their experiences, discoveries, and difficulties based on specific behaviors, methods, or mindsets they have tried to unlearn. By making it easier for everyone to learn from one another, you leverage the company’s collective information and insight. This builds momentum, creates new norms, and enables your organization and the people in it to achieve extraordinary results.

Google’s Project Aristotle revealed that creating great teams wasn’t about how smart or how experienced people were. Instead, it was about how much psychological safety resided within the group.2 Having a safe space for team members to share mistakes and be vulnerable in front of one another was the number-one indicator of high-performance teams. When mistakes are seen as new information available to improve the system, not as negative information revealing an individual’s inabilities, they can become a competitive advantage for your people and your organization.

Creating a culture of sharing lessons learned and reducing learning anxiety helps to grow an organization’s capability. The more safety there is, the better the quality of information, the better the quality of decisions, and the better the quality of results. This is the basis of a performance-oriented, generative culture outlined by Ron Westrum.

So how did NASA’s decision makers unlearn being the “know it all” and transform into being the “learn from all”? The first small step and new behavior NASA implemented was sharing stories of success and failure. Ed Hoffman and I use the Pyramid of Advantage or Catastrophe model to help people understand why making, catching, and sharing mistakes is a good behavior. It prevents mishaps, which in turn mitigates catastrophic failures.

As a leader, your responsibility is to enable organizational learning by reducing learning anxiety across the entire organization. The purpose of a learning organization is to help others make better mistakes, not the same mistakes.

Ed Hoffman talks about three levels of uncertainty that impact the desired outcomes of missions (Figure 9.1): mistakes, mishaps, and catastrophic failures. According to Ed, a mistake is when something doesn’t go according to plan; a mishap is when the mission or project doesn’t fail, but some part of it is going in the wrong direction; and a catastrophic failure is when things have not gone right, resulting in significant negative consequences. Each situation offers the opportunity to learn, and NASA’s new system of learning trained employees to raise and discuss mistakes when they occur—in a safe space without judgment—so they don’t become mishaps or catastrophic failures (Figure 9.2).

FIGURE 9.1. The Pyramid of Advantage or Catastrophe

FIGURE 9.2. The Pyramid of Advantage or Catastrophe risk and information flow

Harvard Business School professor Chris Argyris once pointed out, “Smart people don’t learn . . . because they have too much invested in proving what they know and avoiding being seen as not knowing.” Without safety, smart people struggle to reveal their shortcomings; NASA was faced with a similar challenge. The organization had lots of smart people, but they didn’t have a smart system of learning, sharing mistakes, or unlearning. The key is to make people aware of the consequences of not sharing mistakes, and then to normalize sharing mistakes as a learning opportunity or even a competitive advantage.

Future Breakthroughs Require Relearning from the Past

When you do true innovation, you’re not guaranteed successful outcomes every time. Failure is inherent because you don’t know what you don’t know. So you need to build learning systems that surface mistakes before they have a chance to become mishaps or catastrophic failures. This requires instilling a behavioral norm of sharing mistakes and using the information to improve the system. Psychological safety is the key metric for high-performance teams.

When you truly innovate, build the future, and courageously face down uncertainty, complex, unpredictable, and unintended consequences occur. When you build hierarchies of knowledge or silos, and information doesn’t travel across the company, organizational learning does not occur. You need to unlearn this approach to information management and instead encourage a culture of making safe-to-fail mistakes and socializing those mistakes (and successes), because doing so feeds a knowledge pool that lifts the entire capability of the company. You’re trying to institutionalize behaviors in which people are constantly sharing, promoting, and learning from what they’re doing, what’s working, and what’s not working, and democratizing information, because it becomes a powerful capability, a source of the most current knowledge, and a competitive advantage for the company.

Unlearning Deviant Behavior

One of the big problems that NASA (and many other organizations) had to overcome was the normalization of deviance discussed earlier. It occurs when you have defined, organizationally or as a team, what behavior indicates successful performance, and then a deviation comes up—behavior is not happening correctly or the way you expect it to—but that deviation doesn’t lead to a failure. The deviation then becomes normalized, routine, and acceptable, even if such behavior is not ideal.

This can happen when you’re very busy, overloaded with tasks, or when complacency creeps into the system. You don’t have a lot of investment and resources. You don’t have time to reflect. You’re too tired to explore every deviation in performance. You enlarge the margins of acceptable behavior, and you therefore start normalizing and accepting deviant behavior.

At least two systemic problems led directly to the loss of Columbia. First, NASA’s program managers ignored the ongoing problem of insulating foam falling off the external fuel tank and hitting the surface of the orbiter. This occurrence was considered within the acceptable limits for a shuttle launch since it had not led to any significant consequences in prior missions besides a few dents. It became a tacitly accepted—normalized—deviance. Second, NASA program managers became overconfident even though mishaps (foam shedding during liftoff) were occurring routinely. They were blind to the possibility of catastrophic failure.

Normalization of deviance occurs in every kind of organization, not just a government agency like NASA, and it takes many forms, especially in the behavior of people within companies. Uber and former CEO Travis Kalanick are prime examples. People started raising concerns about his ability as a leader several years before the company’s toxic culture was exposed and he had to step down, but the company was growing like crazy, so Kalanick’s behavior was overlooked by many. There’s a natural tendency for some organizations and leaders to say, “It didn’t fail. It didn’t blow up,” so the deviance becomes normalized and acceptable—until it impacts the organization with consequences that may have serious repercussions.

To illustrate how NASA’s learning organization changed for the better after the Columbia disaster, Ed tells a story he calls “The Tale of Two Shuttles.” The first shuttle was Columbia, and the second was Discovery. Both encountered problems, handled largely by the same people in the room, but in totally different ways. When the decision was made by Columbia’s program managers to launch despite the possibility of a foam strike to the wing, the result was a catastrophic disaster. The Discovery launch took place six years later. Ed Hoffman flew to the Kennedy Space Center in Florida to witness the launch. But all did not go according to plan. Says Ed:

I got down there that night and everything was looking good. I woke up the next morning and grabbed breakfast at about 6:00 A.M. It was then that I was told we might have a problem. The problem was that at the end of the last shuttle mission, which landed successfully, there was a small but significant technical issue where one of the flight control valves did not operate properly. There were four of these valves with plenty of redundancy, so it wasn’t a problem. The shuttle landed safely. But the engineers could not explain why the valve did not operate properly. Was it something in the process? Was it a part from a supplier? They couldn’t explain it.

We were in exactly the same situation as we were with Columbia. Something had deviated. On initial viewing you don’t think it’s something that’s going to be a problem. What do you do?

In those shuttle meetings there were as many as 200 people in a room, including senior leadership, the astronaut crew, engineering, safety, retirees—the whole community. The problem with the valve was raised, and a launch decision needed to be made. The program team recommended a launch. They said, “We can’t explain why that valve is a problem, but we’ve flown successfully. We have these other valves that are working well.” There are heavy costs when you sit on the pad and don’t launch. Not launching also increased the risk to the International Space Station, which was counting on the shuttle’s arrival for supplies. But engineering and safety said Discovery shouldn’t launch for the simple reason that they couldn’t explain what happened and they should figure that out first.

Following the Challenger disaster, Ed worked within NASA to create a program management initiative, which later became the NASA Academy. Workers were trained, given leadership and innovation tools, and provided with learning opportunities to relearn new approaches to old problems and deliberately practice new behaviors. But when these men and women went back to their departments, they were told by their leaders to ease up; those leaders held onto the behavioral norms they attributed to past success and still saw Challenger as a freak incident.

Following Columbia, the breakthrough NASA realized was that adopting new behavior norms required that both leaders and workers change together—progress could not happen in isolation. If NASA was to create the cultural transformation they desired, and continuously adapt their systems of learning, training programs, and behavioral norms individually and collectively, leaders and workers had to unlearn, relearn, and break through together.

Discovery was originally scheduled to launch in January 2009, but this date was pushed back until the team could solve the problem with the control valves. The team’s leadership focused on the key outcome of assessing the acceptable risk to launch, sourcing input from multiple groups closest to the problem while managing the organizational tensions of available information, safety, budget limitations, and dependencies such as the astronauts on the space station who were anxiously awaiting a fresh supply shipment.

NASA made the decision to invest in experiments to resolve the issues, eventually creating a new, patented technology that allowed testing without putting the shuttle in danger. This required many extra hours and greater costs but led to an innovation that created a more resilient shuttle that would be less expensive to test in the future. The team solved the problem with the valves and successfully launched Discovery in March 2009. Says Ed about the decision to delay the launch:

That was one of the days I was most proud of being part of NASA and whatever contribution I made to the organization because, during that day, there was total learning taking place. We had most of the same exact people working both missions—Columbia, which went in a completely wrong direction—and Discovery, where the approach and the dynamics lent themselves to learning. If organizations and projects follow that kind of an approach consistently, the success rate for everything they do would skyrocket.

Ed’s story clearly illustrates NASA’s breakthrough: an improved system of learning and organizational behaviors in operation. The leadership team demonstrated that they had unlearned the normalization of deviance and relearned cross-functional collaboration, multidisciplinary input, and transparent sharing of information, while managing the tension of acceptable risk in the decision to launch. NASA did this by:

•   Raising and making transparent the mishap with the shuttle valves

•   Involving everyone affected by the decision

•   Focusing on what really mattered

•   Balancing financial and safety tensions with the mission’s desired outcome

•   Leveraging organizational insight

•   Making a decision and everyone committing to it

The result was further successful missions after a mishap, new innovations in shuttle testing, and the control valve problem being solved. The decision was made by the team to stop, make a fixed investment in exploring the problem, and avoid failure. They balanced the competing risks, resulting in a stronger system of learning and better organizational behaviors overall, and a safer and less expensive shuttle.

Unlearn Complacency and Arrogance to Scale Your Breakthroughs

Every organization runs the risk of backsliding into its previous state, falling victim to complacency as the next disaster looms just around the corner. One way to prevent this from happening is to revitalize the system of learning by piquing survival anxiety, so people remember that failures can and do happen, especially in unguarded moments.

As we have seen, NASA is definitely no stranger to catastrophes, but it has been many years since the last catastrophic failure: space shuttle Columbia in 2003. In the past, a significant failure every few years refocused everyone’s attention on what can happen when complacency sneaks back into the system; without that reminder, disaster awaits. To prevent this, you have to train, reflect on results, and have conversations that remind people what happens when things go bad. You have to be curious enough to introduce new behaviors into existing routines. Remember: Individual and organizational transformation is not a singular event. It is continuous, and we must constantly stimulate the system to discover and exhibit new and better behaviors. This is the purpose of deliberate practice of the Cycle of Unlearning.

To help its employees remember, each year NASA conducts what it calls a Day of Remembrance. On this very special day, NASA pauses “to reflect on the legacy and memory of our colleagues who have lost their lives advancing the frontiers of exploration.” In his message to employees for the 2017 Day of Remembrance, NASA’s acting administrator, Robert Lightfoot, pointed out that approximately 45 percent of the current NASA workforce were not working for the agency 14 years earlier, at the time of the Columbia tragedy. Continued Lightfoot,

How do those of us who experienced tragedies and subsequent recoveries ensure the lessons are passed on as we continue our exploration journey? The best way I know is for us to share our stories—not in PowerPoint—but personally.3

During the Day of Remembrance—the last Thursday of January each year—the families of astronauts who died in the Apollo 1, Challenger, and Columbia tragedies are invited to speak about their loved ones, and employees who were there at the time are encouraged to share their own stories. They are also invited to share their personal stories about the accidents—and the return to flight efforts—in team meetings during the month. Explained Robert Lightfoot, “Perhaps then all of our team will begin to understand the reason we strive for a culture of speaking up with concerns, and a culture of leaders who stay curious and hungry as they make the ultimate decisions to send our crews on their journeys.”4

Some of today’s most successful companies have learned similar lessons, although the impact of their decisions is not life and death, as it can be for NASA. For example, Netflix has found that it can effectively leverage deviant behavior to improve its systems of organizational learning, as well as its products and services. The company conducts game days during which, unbeknownst to the teams, parts of its live production systems are randomly shut down and its products and services start breaking. The purpose of the exercise is to build a system that can identify and mitigate failure, be more resilient, and improve the quality of the system of work. Intentionally disabling computers in Netflix’s production environment also builds alignment and collaboration networks among employees across the company, ultimately enabling them to provide customers with a better quality of service.

No one on the Netflix teams would know why their products and services were suddenly breaking, or whether the failure was real, but the exercise trained people to actively collaborate and share the information they had with one another to find and fix the failures, thereby building greater resilience into their systems. Intentionally disabling computers in Netflix’s production environment became such a habit within the company that it built a piece of software called Chaos Monkey to randomly and automatically trigger system failures and test how its systems and teams responded to outages. Chaos Monkey is now part of a larger suite of tools called the Simian Army, which was designed to simulate and test responses to various system failures, edge cases, and outages. In addition to leveraging deviant behavior to improve its systems, these simulations help Netflix people introduce new approaches, ideas, and paradigms on a steady basis to continuously improve their systems of learning, products, and services.
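To make the idea concrete, here is a minimal sketch (not Netflix’s actual implementation) of the kind of behavior a Chaos Monkey-style tool automates: randomly terminating one instance from a redundant pool, and only during hours when people are around to respond. The instance names and the terminate call are hypothetical stand-ins.

```python
import random
from datetime import datetime

# Hypothetical pool of redundant service instances.
INSTANCES = ["web-1", "web-2", "web-3", "api-1", "api-2"]


def terminate(instance: str) -> None:
    """Stand-in for a real cloud API call that shuts an instance down."""
    print(f"[chaos] terminating {instance} - can the rest of the fleet absorb the load?")


def run_chaos_round(instances, business_hours=(9, 17), seed=None):
    """Randomly kill one instance, but only while engineers are on hand to respond."""
    rng = random.Random(seed)
    hour = datetime.now().hour
    if not (business_hours[0] <= hour < business_hours[1]):
        print("[chaos] outside business hours - skipping this round")
        return None
    victim = rng.choice(instances)
    terminate(victim)
    return victim


if __name__ == "__main__":
    run_chaos_round(INSTANCES, seed=42)
```

The design point is the same one the teams learned on game days: the failure is routine, expected, and survivable, so the organization practices responding to it before a real outage forces the issue.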

Unlearning does not lead with words; it leads with action. By unlearning the ways in which we behave, our actions begin to change the way we observe, experience, and eventually see the world. Seeing and experiencing the world differently changes the way we think about the world. People do not change their mental model of the world by speaking about it; they need to experience the change to believe, see, and feel it.

Innovations, such as new behavioral norms, also need a petri dish in which to grow. They require a protected and safe space for people to start to unlearn old skills and relearn new ones. Safety needs to exist at many levels, including psychological, physical, and economic. We need a sandbox in which to experiment, and to test and develop new skills, capabilities, and mindsets. We must design and create an environment where we can make recoverable mistakes without causing irreversible damage.

Sandboxes create the safety we need to get comfortable with being outside our comfort zone to innovate and succeed. Physical sandboxes create dedicated time and space to allow our new behaviors to emerge through trial and reflection. Economic sandboxes allow us to make safe-to-fail investments to relearn, acquire new skills and capabilities, and deal with uncertainty without blowing up the entire business.

As you increase the level of safety in your organization, check the reactions and responses of your people when failure occurs in the team. Do they act as though they are beaten, or is the new information seen as a competitive advantage, leveraged, and fed forward to the next iteration? High-performance generative cultures see incidents as competitive advantages.

The Power of Continuous Unlearning

In a company setting, adopting the Cycle of Unlearning requires that we unlearn the belief that identifying, designing, and deploying systems of learning are a one-time event. As organizations from NASA to Netflix to Toyota have proven, where systems of continuous learning are implemented, simulated, evolved (sometimes automated, such as with Netflix), and made a part of the cultural norm, you can achieve extraordinary results.

Taiichi Ohno, the father of the Toyota Production System, famously stated, “We are doomed to failure without a daily destruction of our various preconceptions.” He knew that success for Toyota required employees to unlearn, let go of the past, and seek new, innovative, and powerful improvements each and every day. Toyota understands that to build its future, one has to be ready to unlearn, adapt, and apply new methods as the world continuously evolves.

As BJ Fogg’s Tiny Habits method teaches us, adopting new behaviors is not as complicated as people believe. Even more, it is a system. Unlearning leads to more unlearning and can become systemic, with ripple and network effects across your entire organization. We simply need to decide what we want to unlearn, make deliberate choices, and introduce new behaviors into existing routines. When NASA innovated its new learning system after the Columbia disaster, leaders and workers learned and engaged in new behaviors of cross-functional collaboration. They shared mistakes and lessons learned, and they used this knowledge to make better decisions.

Relying on motivation alone to bring about change will not work. We need to make change easy to do, regardless of an individual’s ability. We must also start with small steps, integrating them into our existing daily routines to slowly but surely begin the journey of seeing and experiencing the world in new ways. This can be as small and easy as asking others in your organization about the most helpful mistake they recently made. How did it help them make a new discovery that improved what they were working on? By iteratively working on small, frequent steps—adapting our approach based on what we discover—we build momentum and evidence of evolution based on the results.

How to Start the Cycle of Unlearning and Become a Learning Organization

The simple truth as stated by Andrew Clay Shafer, senior director of technology at Pivotal, is, “You’re either building a learning organization, or you’re losing to someone who is.” That premise is very pertinent in today’s world. The idea of a learning organization is nothing new. As I explained in Chapter 2, the idea was popularized by Peter Senge in his book The Fifth Discipline, which was published in 1990.

The reason NASA, Netflix, and other true learning organizations succeed is that they have a systematic approach to taking in information from all sources of their organization, synthesizing, leveraging, and then using it as the basis upon which to innovate. They actively create opportunities and safe environments for people to learn by doing through experience, informal settings, simulations, and play.

Informal, incidental learning takes place wherever people have the need, motivation, and opportunity. After a review of several studies done on informal learning in the workplace, Victoria Marsick and Marie Volpe concluded that informal learning can be characterized as follows:

•   It is integrated with daily routines.

•   It is triggered by an internal or external jolt.

•   It is not highly conscious.

•   It is haphazard and influenced by chance (something sparks).

•   It is an inductive process of reflection and action.

•   It is linked to learning of others.5

So, what are the components of a learning organization, and how can you measure if yours is one? Victoria Marsick and Karen Watkins developed a model of a learning organization that comprises seven different parts, as well as the Dimensions of the Learning Organization Questionnaire, or DLOQ, to diagnose an organization’s current status (Table 9.1).6

TABLE 9.1. The Seven Dimensions of Organizational Learning

What’s particularly interesting about this model is that all these constituent parts of organizational learning can be seen in action within some of today’s most successful companies. For example, Amazon builds systems specifically designed to capture learning and uses it as it measures its teams. In this book, we have talked about the ideas of feedback, reflection, experimentation, and synthesizing what you’ve learned. In essence, this is the breakthrough step in the Cycle of Unlearning—it all ties together.

The first version of DLOQ comprised 43 descriptive statements divided across the seven dimensions of organizational learning, but this was later reduced by Marsick and Watkins to 21 questions. Here are examples of some of the descriptive statements from the most recent DLOQ:

•   Q1. In my organization, people help each other learn.

•   Q2. In my organization, people are given time to support learning.

•   Q3. In my organization, people are rewarded for learning.

•   Q4. In my organization, people give open and honest feedback to each other.

•   Q5. In my organization, whenever people state their view, they also ask what others think.7

I encourage you to consider working through this questionnaire with your own organization, starting with yourself and your team. Identify the dimensions in which you are lagging—maybe it’s continuous learning or empowerment. Pick one dimension, discuss, decide, and commit to one thing you believe your team must unlearn to move forward. Write an unlearn statement for it, and then design a tiny habit to unlearn it; make it really small. Think what you could do in a month, in a week, or in a day. Introduce a new and better behavior to take its place.
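If it helps to make the exercise concrete, here is a minimal sketch of how a team might tally its questionnaire responses by dimension to spot where it is lagging. The dimension names, the mapping of statements to dimensions, and the 1-to-6 rating scale are illustrative assumptions, not the official DLOQ scoring.

```python
from statistics import mean

# Illustrative (hypothetical) mapping of questionnaire items to dimensions.
DIMENSIONS = {
    "continuous learning": ["Q1", "Q2", "Q3"],
    "dialogue and inquiry": ["Q4", "Q5"],
}


def score_by_dimension(responses):
    """Average the assumed 1-6 ratings per dimension across all respondents."""
    scores = {}
    for dimension, items in DIMENSIONS.items():
        ratings = [r for item in items for r in responses.get(item, [])]
        scores[dimension] = round(mean(ratings), 2) if ratings else None
    return scores


if __name__ == "__main__":
    # Each list holds one rating per respondent for that statement.
    team_responses = {
        "Q1": [5, 4, 5], "Q2": [2, 3, 2], "Q3": [3, 3, 4],
        "Q4": [5, 5, 4], "Q5": [4, 5, 5],
    }
    for dimension, score in score_by_dimension(team_responses).items():
        print(f"{dimension}: {score}")
```

The lowest-scoring dimension is simply a starting point for the team conversation: pick it, write an unlearn statement for it, and design one tiny habit to begin.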

As you work through the Cycle of Unlearning, design and build a system of organizational learning. For example, consider NASA’s approach: identify the key competencies of success, then train employees, give them tools, and create opportunities for them to practice these competencies.

Scale the learning system across your entire organization and normalize desired behaviors—while removing deviant behaviors—the same way Ed Hoffman did at NASA. Make the learning system your default condition, not something used only when management is looking.

Finally, systematize your learning system. This is what technology companies do by default. They build learning platforms to gather data to inform their decision making. It is here where you will reap the full benefit of the learning organization by routinely unlearning old behaviors that no longer work and replacing them with new behaviors that will enable you to achieve extraordinary results.

*Vaughan defines this as a process where a clearly unsafe practice comes to be considered normal if it does not immediately cause a catastrophe: “a long incubation period [before a final disaster] with early warning signs that were either misinterpreted, ignored or missed completely.”