There will be those who consider us to be ridiculously frozen in time, who demand of the government the rapid introduction into our work of advanced technologies, but while it is true that laws and regulations can be altered and substituted at any moment, the same cannot be said of tradition, which is, as such, both in form and sense, immutable.
—José Saramago, All the Names
It is a working principle of the authorities that they do not even consider the possibility of mistakes being made. The excellent organization of the whole thing justifies that principle, which is necessary if tasks are to be performed with the utmost celerity.
—Franz Kafka, The Trial
Bureaucracy Goes to School
Bureaucratic rules are inflexible and impersonal—that’s their nature. But that’s a matter of how they’re applied, not how they’re formulated. There’s no reason they can’t be formulated based on lived experience and no reason they can’t be changed often as experience teaches new lessons. In a stable environment we’d expect relatively little change; rules are composed rationally, optimized, and then stay put. In a less stable environment we expect the rules to be revisited more often and adjusted. And in the digital world, which accepts continuous change as a given, the rules—to remain “efficient” in Weber’s sense—have to be adjusted more or less continuously. Not just the rules: in a changing environment, role definitions and the formal relationships between them also need to be adjusted regularly.
Agile IT approaches are designed for continuous improvement through feedback, incremental change, and retrospection. DevOps emphasizes fast feedback—each unit of work is small so that it can be finished quickly and given to employees or customers to use. Based on the results, the code can be tweaked and improved. The same idea can be applied to bureaucratic rules or indeed any practical framework for taking action in the world.
For example, if we were to apply DevOps to the goals of the Paperwork Reduction Act,* we might proceed as follows: we’d set up monitoring tools to measure how long it would take an applicant to fill out, say, an I-90 form (a green card renewal). Then, we’d code a small change to the form, deploy it, and see if it reduced the burden. If it didn’t, we’d investigate why and try a different change. If it did, we’d keep the change and move on to the next. We’d learn in small steps, but continuously. Each change would be treated as a sort of hypothesis—“I believe that if we make this change it will reduce the burden”—and then tested scientifically.
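To make that loop concrete, here is a minimal sketch of what the "change as hypothesis" test might look like in code. Everything in it is hypothetical—the completion-time numbers, the instrumentation, the significance threshold—it's the shape of the feedback loop that matters, not the particulars.

```python
# A minimal sketch of the "change as hypothesis" loop described above.
# All names and numbers are hypothetical illustrations, not real I-90
# instrumentation: assume we log how many minutes each applicant takes
# to complete the form, before and after a small change is deployed.

import math
import statistics

def burden_reduced(before: list[float], after: list[float],
                   z_threshold: float = 1.96) -> bool:
    """Crude two-sample z-test: did mean completion time drop by a
    statistically meaningful amount after the change?"""
    mean_b, mean_a = statistics.mean(before), statistics.mean(after)
    var_b = statistics.variance(before) / len(before)
    var_a = statistics.variance(after) / len(after)
    z = (mean_b - mean_a) / math.sqrt(var_b + var_a)
    return z > z_threshold  # one-sided: we only care about reductions

# Hypothesis: pre-filling the applicant's address reduces the burden.
before = [34.2, 41.0, 29.5, 38.7, 36.1, 44.3, 31.8, 39.9]  # minutes
after  = [27.4, 30.1, 25.8, 33.2, 28.9, 31.5, 26.7, 29.4]

if burden_reduced(before, after):
    print("Keep the change; move on to the next hypothesis.")
else:
    print("Investigate why it didn't help; try a different change.")
```

In practice the measurement would come from monitoring tools rather than hand-entered lists, but the cycle is the same: state the hypothesis, make one small change, measure, decide, repeat.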
Incremental adjustment of rules is the essence of a learning bureaucracy. Its rules still have the characteristics we (now that we’ve read Chapter 5) admire in bureaucracy. They still enforce auditable controls, generate evidence that they’ve been applied, and are formalized and documented so that they can be optimized for efficiency. They’re still fair, and their effect is calculable. A learning bureaucracy has both the advantages of bureaucracy and the advantages of agility, strange as that might sound.
I’ll use a few government examples to illustrate why bureaucracies must learn to learn, but don’t think for a moment that business enterprises are different. Has anyone in your company stopped to test the hypothesis that the annual budgeting process adds enough value to justify its costs? Or that a company wouldn’t get better business results by using a different, and perhaps less heavyweight, process for controlling spending? How often do you eliminate legacy bureaucracy that’s no longer needed or that could be replaced now by better bureaucracy?
PRA: Better If It Reduced Paperwork
The Paperwork Reduction Act (PRA) is a masterpiece of good intentions. Its goals are achievable and compelling. It’s a noble effort to reduce bureaucratic paperwork, and its guardians in OIRA are well-meaning and diligent. Unfortunately, it doesn’t work very well. The workflow mandated by the Act sounds plausible, at least, I assume, to congressional representatives sitting in the Capitol and legislating. But for Congress to ensure that the law worked well, it needed to insert a feedback and refinement mechanism that would tweak the law to improve its effectiveness.
Every year, OIRA duly reports on the impact of the PRA, as Congress requires. In 1996, when the Act and its amendments were finalized, the country’s paperwork burden was 6.9 billion hours. By 2017, even under the watchful eye of the PRA enforcers, it had increased to 11.6 billion hours.1 And that’s in an era when electronic forms make possible a whaleboat-full of simplifications—for example, providing default values for input fields and saving user information to be reused from one form to another. Plenty of us government folks know from our experience exactly how the PRA gets in the way of reducing burden, as I described in Chapter 2. But in the years since 1996, with the burden continuing to increase, with a vast amount of potential digital simplification not taking place, and with government employees aware of just what is going wrong, still no one has changed the PRA’s workflow.
Although Congress gets its report from OIRA every year, that report is not true feedback—a term that implies a cycle in which new information causes process changes which then result in new information. It’s one thing to mandate that paperwork be reduced each year. It’s a very different thing to figure out how. In a complex environment like the federal government, figuring out how requires the humility to listen and learn, not the hubris of thinking you can write down a bureaucratic mechanism on one piece of paper that will unfailingly reduce the number of other pieces of paper fluttering through government offices.
MD-102 and Reality
MD-102 helps DHS report to Congress and the public on the status of its projects. What it doesn’t do is report on its own effectiveness in getting better outcomes for those projects. Since it adds considerable overhead and cost to projects, DHS should be asking: Is it successful? Do projects actually return more value to the public than they would without it? The same questions can be asked at a more granular level—Does requiring every project to have an Integrated Logistics Support Plan actually improve outcomes?
It’s possible that MD-102 results in worse outcomes. It adds cost. It takes time and therefore delays the delivery of value. It locks in decisions that would better be made as the team learns during project execution. In effect, it requires that teams use antiquated IT delivery practices. It takes focus away from good execution and places it on documentation and a “checkbox” approach. And it leads to large, risky projects, because no one wants to go through its cumbersome process more often than necessary, so they do fewer, but bigger, projects.
I say that MD-102 doesn’t report on its own effectiveness, but there’s certainly anecdotal evidence that DHS should be concerned. According to the former DHS Inspector General in his testimony to Congress in May of 2019, “Most of DHS’s major acquisition programs continue to cost more than expected, take longer to deploy than planned, or deliver less capability than promised.”2† Although I don’t believe those are the right success measures, they are the success measures that MD-102 was designed for.
The inspector general is understating the case—those failures include, for example, Customs and Border Protection’s ACE program, primarily a software development effort begun in 2001 and finally completed seventeen years later at a cost of over $3 billion.3 I’m choosing ACE as an example to distract you from USCIS’s own distressing Transformation program, which spent a billion dollars or so before producing anything useful.
These programs were overseen using MD-102, as were all of the other troubled programs the inspector general referred to. You’d think it would be taken as a sign that MD-102 is the wrong approach, but there’s a serious risk of confirmation bias. When a project succeeds, its success is attributed to MD-102. When a project fails, it’s attributed to not following MD-102 carefully enough, or to gaps in MD-102 that must be remediated by adding more overhead and administrative effort.
Even for projects that are successful, DHS should still ask whether they would have been even more successful without MD-102, or with less of it. But here they face another cognitive bias. If DHS were to consider removing one of the eighty-seven documents—that is, making a marginal change—it would be met with the argument that doing so could only increase the risk of the project. Since the purpose of MD-102 is to reduce risk, that would be unacceptable. The document stays.
There’s no rigorous learning mechanism built into or applied to MD-102, aside from anecdotal evidence or the gut feel of its guardians and an occasional pile-on of more heavy-handed controls when something goes wrong. From my vantage point as a CIO overseen by it, it seemed clear that many of its costs weren’t justified—it could have been changed in ways that would have resulted in better outcomes for the public. I knew that when we wrote an ILSP, we spent our time figuring out what the reviewers would accept rather than making and documenting decisions that were important to the project’s success. I was also pretty sure that in projecting the costs of an IT system thirty years into the future we were playing a game with numbers rather than doing anything actionable or relevant to decision-making.
The problem is not just the mechanisms of MD-102 but the fact that those mechanisms could not be tested to see whether they were valuable and to modify them to make them better. If bureaucracy is to serve as a storehouse of good practices, a kind of institutional memory, then how can it not change when its practices are found to work poorly?
MI-CIS-OIT-003
This realization led to our unusual design of MI-CIS-OIT-003, which I described earlier. Instead of requiring particular practices or workflows in the policy, we stated the goals and principles, and then referred readers to an addendum that listed what we believed to be current best practices. We wrote explicitly that although those were considered best practices at the moment, the addendum should remain a living document and should be adjusted periodically. In MI-CIS-OIT-004 we solidified that idea by saying that when a project was audited, the auditors should review the project against the goals and principles, and also against whatever was considered at that moment to be best practices. In other words, we tried to build a learning loop into the bureaucratic rules.
It was moderately successful. It helped us create a culture that was comfortable with change and understood that change was to be expected. On the other hand, since I left the agency, I don’t think the addendum has been updated. The next time I try this experiment I’ll probably strengthen its bureaucratic aspects by requiring that it be reevaluated and changed at least annually, and setting up a formal process for doing so. More bureaucracy here would result in more learning.
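One way to picture that stronger version: treat the addendum itself as data with a built-in review clock. The sketch below is purely hypothetical—the practice names, dates, and interval are invented and none of it comes from the actual policy—but it shows that "reevaluated at least annually" can be encoded as mechanically as any other bureaucratic rule.

```python
# Hypothetical sketch: a policy's "living addendum" represented as data
# with a built-in review clock. This does not reflect the actual text or
# tooling of MI-CIS-OIT-003; it only illustrates building the learning
# loop into the rules themselves.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class LivingAddendum:
    practices: list[str]          # what we currently believe is best
    last_reviewed: date           # when the beliefs were last revisited
    review_interval: timedelta = timedelta(days=365)  # "at least annually"

    def review_overdue(self, today: date) -> bool:
        """Has the mandated reevaluation window elapsed?"""
        return today - self.last_reviewed > self.review_interval

addendum = LivingAddendum(
    practices=["deliver in small increments", "automate testing",
               "hold blameless retrospectives"],
    last_reviewed=date(2023, 1, 15),
)

if addendum.review_overdue(date.today()):
    print("Addendum review is overdue: convene the review board.")
else:
    print("Addendum is current; carry on.")
```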
Checkmating Erosion
In Seeing Like a State, James Scott provides some examples where governments have used feedback and learning to overcome the loss of important detail that accompanies simplification and categorization. He talks about water management in Japan, comparing it to a game of chess. The engineer makes a move, sees how nature responds, and then determines his next move. Through this incremental, feedback-based approach the engineer “checkmates” erosion.4
Such an iterative process must be used, he says, in any realm where there are considerable uncertainties or complex interactions that make it hard to predict outcomes.
Virtually any complex task involving many variables whose values and interactions cannot be accurately forecast belongs to this genre: building a house, repairing a car, perfecting a new jet engine, surgically repairing a knee, or farming a plot of land. Where the interactions involve not just the material environment but social interaction as well—building and peopling new villages or cities, organizing a revolutionary seizure of power, or collectivizing agriculture—the mind boggles at the multitude of interactions and uncertainties (as distinct from calculable risks).5
When rules are devised centrally or at the top of a hierarchy, they aren’t based on the practical knowledge and detail that frontline experts have access to. This isn’t necessarily a terrible thing—as long as the policy is tentative and continuously improved as it encounters the real world. This is no more than what Agile IT theorists have been saying—that IT is complex and uncertain, and that we therefore must take an empirical approach, inspecting and adapting, fluidly changing plans.
The California Division of Highways
Steve Kelman, a professor at the Harvard Kennedy School of Government and formerly the US government’s head of procurement policy, relates another powerful example from Kevin Starr’s Golden Dreams: California in an Age of Abundance, 1950–1963. The California Division of Highways in the 1950s, Starr says, was an almost paramilitary, bureaucratic organization, “model conformists, working in a standardized environment, paid at standardized rates, motivated by the same retirement package, living in tract housing, driving similar makes of automobiles.”6 In the early days of highways, and in their careful, systematic, bureaucratic way, they worked to improve traffic safety. They did so by generating hypotheses about what might cause accidents and what might help avoid them. Each hypothesis was carefully tested, and if they found it to be valid, they incorporated it into a rule.
For example, they experimented with signage on the highways. When they found that the typeface used on signs at the time was too small for fast-moving motorists to read, they mandated that larger text be used. When they saw that signs in all-capital letters were harder to read, they switched to mixed upper- and lowercase letters. They chose the background color of green for signs based on their experiments, and they required that the signs be lit at night. They discovered that lane divisions with bumps that caused the car to rumble were effective at keeping cars in their lanes. With each improvement, they reduced highway accidents.
Kelman speculates that bureaucracies might be even better than other organizations when it comes to learning, at least in cases where this kind of systematic inquiry is appropriate.7 In any case, his example makes clear that the hypothesis-testing paradigm used in digital delivery is in no way counter to bureaucratic structures. Bureaucracy provides the framework, the background, against which the ingenuity of employees can be applied.
Interestingly, in the context of our discussion on motivation in the last chapter, Kelman claims that their success was due to “an evangelical zeal born of their profession and of their belief in freeways and in California’s future.”8 The structure of the bureaucratic organization gave them a way to come together in a shared vision and goal.
NUMMI
Adler’s study of NUMMI’s car factory taught him the power of learning bureaucracies. NUMMI, a joint venture between General Motors and Toyota in Fremont, California, replaced an earlier GM factory that had—by almost any criterion—been a terrible failure. It had the lowest productivity of any GM factory, consistently poor quality, and an employee absentee rate of over 20%.9 The NUMMI joint venture that replaced it had an absentee rate of only 2% and the highest productivity of any GM facility, twice that of its predecessor.10
It pulled off this miracle by instituting a highly structured bureaucracy that operated along the lines of Taylorist process improvement. It obsessed over process standardization. It broke down all of the actions of employees on the manufacturing line into their component movements, optimized them, and then required those optimized movements of all employees. No improvements were allowed unless they could be adopted across the entire organization.
What was special about NUMMI was that it made its teams of factory workers responsible for its Taylorist process improvement. The workers were trained in Toyota’s techniques of work analysis, description, and process improvement. It was the workers themselves who timed their team members and proposed process improvements. Because employees didn’t fear for their jobs—NUMMI guaranteed lifetime employment—they were comfortable doing so. In 1991, employees proposed over ten thousand suggestions, of which some 80% were implemented.11
The teams were trained to find and deal with problems as well. NUMMI’s practices made it
(1) difficult to make a mistake to begin with; (2) easy to identify a problem or know when a mistake was made; (3) easy in the normal course of doing the work to notify a supervisor of the mistake or problem; and (4) consistent in what would happen next, which is that the supervisor would quickly determine what to do about it.12
Now that’s learning and enabling bureaucracy.
As process improvements were proposed and their effectiveness was confirmed, they were then standardized across employees and teams. This had a number of benefits. It gave workers a basis for their continuous improvement, since they always had a baseline process whose efficiency they could measure. It improved safety because each process was documented and could be examined carefully for risks. Inventory control became easier for just-in-time provisioning of resources. Job rotation became easier and fairer. And finally, the entire system became more agile, because workers could quickly reskill and immediately begin improving processes. They became, over time, experts in continuous improvement.
This all took place within a traditional management hierarchy, well supplied with layers of middle management. Managers, however, were taught to view themselves as problem-solvers whose responsibility was to help the floor workers realize their process improvements. They were valued for their expertise, and it was through this expertise that they became managers. It was Weberian “domination through knowledge.”13
Adler concluded that NUMMI was able to learn through a synergistic combination of its formal, standardized work process and various informal cultural aspects, including the facts that it cultivated broader skill sets among its employees and that it widely shared principles such as continuous improvement.14 Just as with Amazon’s fourteen leadership principles, shared values empowered employees to make decisions knowing that they would be consistent with management’s intentions.
Verdict: Learning Bureaucracy Is Agile Too
A learning bureaucracy is bureaucracy as a storehouse of institutional knowledge, knowledge that is added to and refined over time. It’s bureaucracy that keeps employees engaged and committed and uses their abilities to improve processes, not just their abilities to turn wrenches or fit car parts into place. It employs the more likable characteristics of Weberian bureaucracy: rules and standards that provide shared knowledge and comfort, and a hierarchy of roles that is truly meritocratic and uses the talents of its managers and workers.
Once again, the parallels with today’s IT practices are notable. Both the California Division of Highways and NUMMI based their continuous improvements on hypothesis-testing—just as digital practices, often derived from Eric Ries’s book The Lean Startup, treat system requirements as hypotheses and insist that they be tested. They both built cultures that encouraged employees to feel comfortable proposing innovations and process improvements, just as DevOps uses blameless retrospectives and supportive small-team interactions. These are not just surface similarities, but deep cultural affinities.
* Schwartz talked about the PRA in Chapter 2. The irony of a paperwork reduction law that increases paperwork is a bit like that of a lean sumo wrestler or a bloated process like MD-102 that is supposed to improve efficiency. -ed.
† This is the same IG that put the critical fact that he was comparing a twenty-year cost to a thirty-year cost into a footnote in his report, so take his testimony with more grains of salt than perhaps he has budgeted for. -au.