Purely in your interests, not in mine, I must make it clear to you that you would be making the biggest mistake of your lives if you were to consider the fact that I have spoken to you with an open heart and mind a sign of personal weakness or a diminution of official authority.

—José Saramago, All the Names

To think is easy. To act is hard. But the hardest thing in the world is to act in accordance with your thinking.

—Goethe, Faust

Which Way, Max?

A crucial assumption has snuck its way into our real-world implementations of bureaucracy: the assumption that employees are recalcitrant and therefore must be coerced into doing what is right. On that view, bureaucracy is a natural and rational way to get results despite employees’ normal inclinations. But Adler argues that employees often are and certainly can be motivated to share the company’s objectives, and that bureaucracy can therefore be designed to be enabling rather than coercive.1

The tension between coercion and enablement can be traced back to Weber, who on one hand speaks of bureaucracy as “domination based on knowledge” and on the other hand as “authority derived from occupying a role.”2 Which is it, Max—do bureaucrats have authority because of their expertise, or authority because of their placement in the organizational chart? A doctor has authority over us because we believe the doctor is competent and is acting in our interests—that is domination by knowledge. A factory supervisor, on the other hand, may have authority because it comes with the role. Both sources of authority, apparently, are possible.

Remember that modern bureaucracy arose along with scientific management. Frederick Taylor, one of the pioneers of the field, used time and motion studies to analyze and reconfigure every movement of employees on the assembly line. Look at how Taylor explains what he was doing:

It is only through the enforced standardization of methods, enforced adoption of the best implements and working conditions, and enforced cooperation that this faster work can be assured. And the duty of enforcing the adoption of standards and of enforcing this cooperation rests with the management alone.3

I didn’t add those italics—they’re Taylor’s own. He assumes that employees will resist standardization and process improvement unless it’s forced on them. He also assumes that managers are better able to plan and optimize employee activities than the employees themselves.4 Those are big italicized assumptions.

In the bureaucracies of ancient Egypt and ancient China, Weber points out, employees could be coerced through the use of physical violence—that is to say, they were slaves who could be tortured. But such coercion, says Weber, is not the best way to achieve efficiency. Modern bureaucracies do even better by offering a guaranteed salary and a career that’s furthered by good performance rather than subject to the whim of a superior. Firm but considerate management, coupled with respect for employees, produces better results still. As he says,

Taut discipline and control which at the same time have consideration for the official’s sense of honor, and the development of prestige sentiments of the status group as well as the possibility of public criticism also work in the same direction. With all this, the bureaucratic apparatus functions more assuredly than does legal enslavement of functionaries.5

There’s a vicious circle in coercive bureaucracies: coercion demotivates employees, so they become more recalcitrant, so more coercion becomes necessary. According to writers like Adler, this circle can be broken: employees begin with a certain level of motivation, and if the bureaucracy is designed to support and intensify that motivation, a virtuous circle ensues. He cites Motorola as an example of a company that has used bureaucratic mechanisms “to provide a common direction and to capture best practices in ways that supported high levels of commitment and innovation.”6

Most companies use a common, very structured piece of bureaucracy—their annual budgeting process—as a way to control and constrain employees. The Beyond Budgeting movement argues that control through budgeting is not actually effective because it takes away decision-making power from the employees who would otherwise be in the best position to seize opportunities as the year progresses. Since there’s considerable uncertainty in the business world, locking managers into cost targets that were set as much as eighteen months in advance interferes with their ability to create value for the company.

The mistaken assumption, say Beyond Budgeting proponents—similar to that of recalcitrant employees—is that without centralized control of spending there will be anarchy. They show that it’s possible to set up rigorous control structures that do not take away the soupçon* of managerial judgment needed to achieve full efficiency and take advantage of market opportunities. Bjarte Bogsnes, who implemented Beyond Budgeting at Statoil Hydro, a large energy company, says:

There are, however, two other types of control that we want less of. The first one is controlling what people shall and shall not do, through detailed budgets, tight mandates, detailed job descriptions, rigid organizational structures. The second type of control, we probably never had to begin with. That is the perceived control of the future, the one we think we get if we only have enough numbers and details in our plans and forecasts.7

Bureaucracy requires rigor and scientific planning, but it does not always require heavy-handed, restrictive control over employees without their consent. Instead, it can provide structure, frameworks, guardrails, and tools to build on the good intentions of employees. It can involve employees in the creation of rules and workflows, and make them willing participants in finding good ways to accomplish the organization’s goals.

My corkboard story in the introduction has a subtle piece of enabling bureaucracy. While requirements for an IT system often appear in a coercive guise—programmer, you “shall” make the system produce strozzapreti recipes—requirements can also be an enabler for the programmer. Someone else has already done the work of thinking through a useful system feature and documenting it, which saves the programmer effort and research. In the corkboard story, the team asked me to draw them a state transition diagram because that would help them understand the application processing flow and be a useful reference for them as they created their system. I was working for the team, you might say, by giving them a bureaucratic artifact when they wanted it.

Gypsum

In Patterns of Industrial Bureaucracy, Gouldner criticizes Weber’s account of bureaucracy on the grounds that it presents a static picture that leaves out the important dynamics of how bureaucracy is introduced into a company and changes it. In his analysis of the gypsum factory, he describes the motivations of the people involved and shows how the cultural context—different between the mining operation and the board factory—affected how and to what extent workers accepted bureaucratic management.

As I said in the Introduction, this book is mainly about the kind of bureaucracy that resists digital transformation—the kind that holds back white-collar knowledge-work initiatives. Gouldner’s book reminds us that factory bureaucracy can be rather a different thing. At the gypsum plant, much of the bureaucracy is dedicated to making employees work harder, or at least up to a predefined standard, on the assumption that they’ll slack off, or “goldbrick,” in its absence. Middle managers are treated with suspicion as well; red tape is used by central leadership to enforce their policies on workers directly so as to bypass the “shirking” managers. The gypsum factory’s bureaucracy includes prohibitions against absenteeism, “write-ups” of problems with employee performance, and disciplinary processes.

There are also some interesting similarities to the bureaucracy we see in knowledge-work settings. Governance controls in IT, for example, come from a central authority but are applied to all employees—in other words, they bypass “shirking” managers who might not effectively be managing their employees. While IT bureaucracy usually does not assume that employees are lazy goldbrickers, it does assume that employees are liable to spend the company’s money foolishly. The manner in which the bureaucracy is applied—say, by the ARB in the Chaos Monkey chapter—is equally paternalistic and mostly punitive.

Gouldner concludes that several distinct patterns of bureaucracy are possible. A disciplinary or punishment-centered bureaucracy institutes rules because of a lack of trust. It limits the employees’ discretion, legitimates any punishments that may be imposed, and establishes minimum levels of performance.8 Rules in a disciplinary bureaucracy are equivalent to orders that are given directly by a supervisor; they’re a crutch for management.9 Disciplinary bureaucracy is closest to that side of Weber’s model where authority comes from incumbency in a legally defined office.

Gouldner notes that disciplinary bureaucracy can be imposed not just by management, but by employees as well, in which case it has the characteristics of a grievance-based bureaucracy.10 A union agreement, as I mentioned earlier, is a set of formal bureaucratic rules imposed on management. At the gypsum factory, workers won rules that restricted management’s discretion in awarding promotions by requiring that seniority be considered.11 Just as management-initiated bureaucracy addresses management’s lack of trust in employees, employee-initiated bureaucracy reflects a lack of trust in management.

Closer to Weber’s idea of expertise-based authority is the pattern Gouldner called representative bureaucracy, where workers participate in crafting the rules. An example from the gypsum factory is the complex of practices related to worker safety.12 It was bureaucracy for sure: it called for rules and red tape, paperwork, meetings, and reports. But it was concocted jointly by management and workers, and was continuously improved in meetings between them, based on reviews of safety findings and incidents. Management provided a safety and personnel manager, whose authority was based on his expertise in factory safety.13 Management and workers both agreed to abide by the safety rules, as each saw it as in their own interests.

Adler accepts Gouldner’s typology, referring in his work to coercive (disciplinary) and enabling (representative) bureaucracies. A coercive bureaucracy, in his sense, is one that mandates compliance, punishes violations, and enforces adherence to standards. An enabling bureaucracy, on the other hand, provides rules that help amplify the effectiveness of people, and does so in part by allowing flexibility and dynamic change in its rules.14

Motivation Revisited

The IT community has been heavily influenced by books like Daniel Pink’s Drive, which argues that intrinsic motivation through autonomy, mastery, and purpose is much more powerful than extrinsic motivation like financial incentives. Mastery and purpose can certainly live in a bureaucratic environment (at least in the Weberian meritocratic ideal, where roles are occupied by masterful employees who exercise their mastery for the organization’s good). On the other hand, bureaucracy limits autonomy, or rather allows autonomy only to the extent that the employee stays within the rules.

But Adler cites research showing that autonomy is not the most important motivator—that “when authority is subordinated to common goals, efficacy seems to be more important in determining motivation levels,” and autonomy becomes merely a matter of “hygiene.”15 Employees who are motivated by the goals of the organization are willing to accept limitations on their autonomy in service of those goals. The plausible implication is that purpose, motivation by goals, is by far the strongest motivator of Pink’s three.

It’s easy to think that all organizations are coercive because they co-opt individuals to work toward the organization’s goals rather than their own.16 But studies show that employees adopt an organization’s goals as their own, and can be motivated by them. Adler notes that “work can be fulfilling, rather than a disutility, and that organization can be experienced as a cooperative endeavor rather than as an abrogation of autonomy.”17

Enabling rules are templates that have been found to work well and to support employees who are, in fact, committed to the success of the organization. Coercive rules, on the other hand, become a substitute for, rather than a complement to, employee commitment.18

Why, then, are bureaucracies so demotivating? It’s a question of how they are designed. Employees react better when routine tasks are formalized.19 They also—unsurprisingly—prefer what they consider good rules rather than bad ones.20 In Gouldner’s representative bureaucracy, employees are comfortable with the rules because they participate in developing them.21 Other studies have shown that rules that are positively associated with commitment reduce alienation.22 And, finally—again, no surprise—rules are accepted better when they’re perceived to be in the interests of both management and employees (safety rules in Gouldner’s example) rather than punishment-centered (for example, rules against using company equipment for personal purposes).23

Innovation

It’s conventional wisdom that bureaucracy interferes with innovation. That notion has been tested in research, and the results are surprising. As Adler puts it,

the commonly hypothesized negative relationship between innovation and formalization held for most studies of service and not-for-profit organizations and for innovations of higher scope, but the preponderance of the evidence pointed to a positive, not negative, correlation between formalization and innovation in manufacturing and for-profit organizations and for both product and process innovations.24

One possible explanation is that by capturing the results of previous work—in its function as institutional memory—bureaucracy provides a head start and a focus for new innovative efforts. Another is that by formalizing the interactions between different groups within the company, it establishes ground rules for how services might be obtained and coordinated, especially for large-scale innovation initiatives. Or, possibly, the existence of guardrails in bureaucracy allows for faster innovation because it occurs in a safer environment.

In IT, we innovate partly by recombining reusable software services in new ways. It helps to have guardrails in place so that innovative ideas can be tried out quickly and validated without much worry about security and operability. The same holds for innovation within a bureaucracy: the crucial point is that the bureaucracy is applied to the controls and tools, not directly to the innovative ideas themselves, which would otherwise be easily shut down under bureaucratic scrutiny.

We tend to focus on the proclivity of bureaucracies to shut new ideas down, but we should also consider the factors that create conditions under which new ideas are generated. Adler lists a set of conditions he believes are necessary for innovative behavior:

a) a minimum of employment security

b) a professional orientation toward the performance of duties

c) established work groups that command the allegiance of their members

d) the absence of basic conflict between work group and management

e) organizational needs that are experienced as disturbing25

Bureaucracy can satisfy these conditions. Why not? It provides employment security, hires professionals, organizes into a hierarchy of work groups, establishes formal interactions that avoid basic conflict, and certainly can disturb employees when necessary.

Another interesting twist is reported by David Buchanan and Louise Fitzgerald in an article on health care bureaucracy. Although most people associate inflexibility with bureaucracy, research has found that intensifying bureaucracy has actually increased the flexibility of professional service workers.26 The mechanism might be the same: the availability of guardrails and tools to draw on for what is routine may allow the employee to approach the distinctive aspects of each situation more flexibly.

Atul Gawande’s writings on medicine also demonstrate how a kind of bureaucracy can support innovation and craftsmanship. In The Checklist Manifesto, he promotes the use of checklists, even in knowledge-work environments such as surgery. Not only do they drastically reduce errors, but they also provide a sort of comforting safety net for medical providers, since they’ll help catch flaws of memory, attention, and thoroughness.27 While checklists are a bureaucratic device, they nevertheless support the surgeon in what is clearly a highly skilled and creative activity. Gawande says, “[People] require a seemingly contradictory mix of freedom and expectation—expectation to coordinate, for example, and also to measure progress toward common goals.”28 Checklists—and certain other types of bureaucracy—give employees autonomy by pushing decision-making and innovation to the periphery while still providing centrally determined guardrails.29

Security as Enabler

In earlier chapters I talked about the deep and surprising relationship between bureaucracy and IT. I claimed that DevOps is an extremely enabling type of bureaucracy that frees technologists to be creative and solution-minded. It takes good practices, standardizes them, then automates them to make them repeatable and reliable. In doing so, it turns what would otherwise be intrusive bureaucratic ceremony into tools that engineers can use. It adopts bureaucratic mechanisms to remove toil—the dull, repetitive, nonthinking work—and lets the technologists focus on what motivates them.

To illustrate, I’ll describe in some detail how DevOps handles the bureaucratic aspects of security. Because information security does require deep technical expertise that’s constantly refreshed, large enterprises generally have a dedicated team of security experts who formulate security policy, engineer security solutions, test IT systems for security vulnerabilities, and respond to security incidents—those frightening moments when monitoring software indicates that a hacker is trying to or succeeding in breaking into the company’s IT systems. This is the team that tells you not to use your pet monkey’s name as your password, and frowns at you when you click on an email that says you’ve won a million dollars in the Nigerian lottery.

Security is a classic IT example of Weberian bureaucracy. The security team must know its functional area well, and its knowledge becomes even deeper over time as it accumulates experience through its role in the hierarchy. Security “dominates by knowledge,” since others in the company are unlikely to have Security’s deep—and important—understanding. Since keeping the company secure requires employees to behave in ways that are not natural to them, Security must formulate rules and enforce them. Their rules are viewed as an imposition by other employees—and customers, as Graeber’s anecdote about dealing with his bank reveals†—and may therefore be resisted, and as a result require coercive enforcement. And because Security is carefully audited and plays a big part in many compliance regimes (for example, compliance with payment card industry requirements formalized in PCI DSS), security controls must be carefully documented.

Before DevOps, security teams would review new IT systems just before they were deployed to users and would invariably find vulnerabilities. This was a kind of bureaucratic gatekeeping that was imposed on software developers, who were eager to deploy the products of their hard work to the people who would use them. Sometimes the vulnerabilities were bugs in the code; sometimes they were a matter of required security controls that the software engineers hadn’t implemented. In the government, security for a “moderately” sensitive system required 303 controls and system “enhancements.”30 There was ample opportunity for the security team to identify problems and reject the code. Their security review was about compliance and enforcement.

With DevOps, though, security experts participate earlier in the design and coding process, lending their expertise to make sure each system is secure when built. (DevOps practitioners call this “shifting left,” referring to the time sequence on a Gantt chart.) They provide reusable security software components that developers can incorporate into their code, standardizing authentication, authorization, and auditing functions, for example. They put automated guardrails in the cloud to keep an eye on what’s happening in the live systems. If a developer deploys something dangerous, Security can be notified to take action or can just arrange to have it automatically disabled. Compared to their traditional gatekeeping role, these techniques put security teams in the position of enablers and contributors rather than enforcers.
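
To make the guardrail idea concrete, here is a minimal sketch of the kind of automated check a security team might run against live cloud systems. It assumes AWS and the boto3 library; the specific policy (no publicly readable storage buckets) and the alert topic are hypothetical examples of a control, not a prescription.

```python
# Illustrative cloud guardrail, assuming AWS and boto3. The policy (block all
# publicly readable S3 buckets) and the alert topic are hypothetical examples
# of controls a security team might choose to automate.
import boto3

SECURITY_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:security-alerts"  # hypothetical

def remediate_public_buckets():
    s3 = boto3.client("s3")
    sns = boto3.client("sns")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        acl = s3.get_bucket_acl(Bucket=name)
        # Does any grant expose the bucket to all users?
        is_public = any(
            grant["Grantee"].get("URI", "").endswith("/AllUsers")
            for grant in acl["Grants"]
        )
        if is_public:
            # Disable the dangerous configuration automatically...
            s3.put_public_access_block(
                Bucket=name,
                PublicAccessBlockConfiguration={
                    "BlockPublicAcls": True,
                    "IgnorePublicAcls": True,
                    "BlockPublicPolicy": True,
                    "RestrictPublicBuckets": True,
                },
            )
            # ...and notify Security so a human can follow up.
            sns.publish(TopicArn=SECURITY_TOPIC_ARN,
                        Message=f"Public access blocked on bucket {name}")

if __name__ == "__main__":
    remediate_public_buckets()
```

The point of the sketch is the posture: the rule fires automatically and fixes or flags the problem, rather than waiting for a gatekeeper to reject a release weeks later.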

What most directly takes the place of traditional enforcement is the use of automated security tests. The security team can prepare automated tests for developers to use to check their code for compliance and security. The tests can be run every time a developer makes a change to their code, and a test can report back to the developer immediately if it finds a problem. Sometimes the security test platform helps the developer further by providing information on how to avoid the security flaw in the future. The tests are tools, in other words, for the developers—and yet they apply Security’s controls.
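
Here is a minimal sketch of what such tests might look like, assuming a Python codebase and the pytest convention of functions named test_*. The endpoint, the required headers, and the secret pattern are hypothetical stand-ins for whatever controls the security team actually mandates.

```python
# test_security_baseline.py -- illustrative only; the staging endpoint, the
# header policy, and the secret pattern are hypothetical examples of controls
# a security team might hand to developers as runnable tests.
import pathlib
import re

import requests

APP_URL = "https://staging.example.com"  # hypothetical staging endpoint

def test_responses_include_security_headers():
    """Control: every response must carry the mandated security headers."""
    response = requests.get(APP_URL, timeout=10)
    for header in ("Strict-Transport-Security", "X-Content-Type-Options"):
        assert header in response.headers, f"missing required header: {header}"

def test_no_hardcoded_secrets_in_source():
    """A crude secret scan; a real pipeline would use a dedicated scanner."""
    secret_pattern = re.compile(r"AKIA[0-9A-Z]{16}")  # shape of an AWS access key id
    for path in pathlib.Path("src").rglob("*.py"):
        assert not secret_pattern.search(path.read_text()), f"possible secret in {path}"
```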

The result of all this enabling bureaucracy is that developers can do a better job of delivering secure code quickly. The security team acts as experts in their domain and are respected for that expertise; they no longer have a confrontational relationship with the development teams. At the same time, all of Security’s objectives are met—even better than they could be through gatekeeping reviews and cranky enforcement.

Has the security bureaucracy been eliminated? Not really—the software development process is still constrained by the same security controls and still requires approval by the security authority. It’s just that the rules are now automated, and the security authority has put its hierarchical powers into the automation as its proxy: they’ve agreed that if the code passes the automated tests, then it meets their definition of secure.

These DevOps principles work not just for security, but for virtually all aspects of IT delivery that require compliance. For example, financial controls can be implemented in the cloud by tagging cloud resources with cost-accounting labels that show which budget category should be charged and what the resource is being used for. Reports can then be generated to analyze this information along various dimensions. Automated scripts can shut down infrastructure that doesn’t include such tags; the finance department can be notified automatically when spending thresholds are exceeded—you get the idea. Automated controls replace verbal troll reprimands. The IT delivery teams can move quickly and take matters into their own hands, within the constraints of the automated rules.
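
As an illustration, a cost-tagging guardrail might look something like the following sketch, again assuming AWS and boto3. The tag key, the notification topic, and the stop-rather-than-warn policy are hypothetical choices, not a prescribed standard.

```python
# Illustrative financial-control guardrail, assuming AWS and boto3. The tag
# key, topic ARN, and the decision to stop (rather than merely report on)
# untagged instances are hypothetical policy choices.
import boto3

REQUIRED_TAG = "cost-center"
FINANCE_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:finance-alerts"  # hypothetical

def enforce_cost_tags():
    ec2 = boto3.client("ec2")
    sns = boto3.client("sns")
    untagged = []
    # Inspect every running instance and collect the ones missing the tag.
    for reservation in ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {tag["Key"] for tag in instance.get("Tags", [])}
            if REQUIRED_TAG not in tags:
                untagged.append(instance["InstanceId"])
    if untagged:
        # The automated rule acts in place of a reprimand: stop the instances
        # and tell the finance department what happened.
        ec2.stop_instances(InstanceIds=untagged)
        sns.publish(
            TopicArn=FINANCE_TOPIC_ARN,
            Message=f"Stopped untagged instances: {', '.join(untagged)}",
        )

if __name__ == "__main__":
    enforce_cost_tags()
```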

Automated functional testing of code is also an instructive case. One of the characteristics of bureaucracy, as I’ve said, is that it’s self-documenting; that is, you not only comply but provide evidence that you have complied. Thus the reliance on paper forms—filling one out is not only a step in a workflow, but a way to prove that the required process has been followed simply because the paper exists. It’s an audit trail.

Developers must prove that their code works and that it implements the “required” functionality. They do so by writing automated tests and showing that their code passes the tests. In the old days, tests were something used by a separate QA organization to catch programmers before they deployed faulty code—they were an excuse for angry troll foot stomping. Today, tests are a way for developers to demonstrate that their code meets requirements and complies with quality standards. It’s a reversal of the bureaucracy: a way to turn a gatekeeping enforcement task into a proactive, enabling tool for developers.
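
To give a flavor of this, here is a tiny hypothetical example in the pytest style, using the strozzapreti requirement from my corkboard story; the recipe_search module and its interface are invented for illustration. The passing test is itself the audit trail showing that the “shall” has been satisfied.

```python
# A requirement expressed as an executable test. The recipe_search module and
# its API are hypothetical; what matters is that the passing test documents
# that the "shall" statement has been met.
from recipe_search import find_recipes  # hypothetical module under test

def test_system_produces_strozzapreti_recipes():
    """Requirement: the system shall return at least one strozzapreti recipe."""
    recipes = find_recipes("strozzapreti")
    assert len(recipes) >= 1
    assert all("strozzapreti" in recipe.title.lower() for recipe in recipes)
```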

Self-Service Bureaucracy

The ideal DevOps working environment is self-service, where one team never needs to wait on another to get what it needs. In IT, delivery teams used to wait for infrastructure engineers to procure and set up infrastructure for them, or for other development teams to update code they relied on. In a self-service model, they get the building blocks they need, subject to certain rules and formalities, as if they were buying cups of ramen noodles‡ from a vending machine. The bureaucrats still play their part: they decide which brand of ramen noodles to make available in the machine—say, after checking to make sure it has the appropriate nutritional content. But the hungry customer doesn’t have to place an order and wait for someone who’s already eaten and therefore feels no urgency to approve it before they can slurp their noodles.

A DevOps organization will often have a special team that creates, maintains, and manages a standard platform and set of tools for all the product teams to use. They usually build it in the cloud, where there’s a great variety of tools available that can be provisioned or deprovisioned at will. Because the platform is self-service, developers use it to speed up their work. But the platform team can also satisfy the need for bureaucratic controls by vetting the tools they make available in the platform, configuring them with security features, or tagging them with cost categories. They can fill the vending machine with only operating systems that have been security-tested, hardened, and approved; when the development teams pop their slugs into the machine, out comes compliance.
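
A sketch of that vending machine, once more assuming AWS and boto3, might look like this; the approved image list, the tag key, and the instance types are hypothetical placeholders for whatever the platform team actually vets.

```python
# A minimal sketch of the "vending machine": developers provision on demand,
# but only from images the platform team has vetted. Image IDs, tag keys, and
# instance types are hypothetical placeholders.
import boto3

APPROVED_IMAGES = {
    "hardened-ubuntu-22.04": "ami-0123456789abcdef0",      # hypothetical AMI id
    "hardened-amazonlinux-2023": "ami-0fedcba9876543210",  # hypothetical AMI id
}

def provision(image_name: str, cost_center: str, instance_type: str = "t3.micro"):
    """Self-service provisioning with the bureaucratic controls baked in."""
    if image_name not in APPROVED_IMAGES:
        raise ValueError(f"{image_name!r} is not an approved, security-hardened image")
    ec2 = boto3.client("ec2")
    return ec2.run_instances(
        ImageId=APPROVED_IMAGES[image_name],
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
        # Compliance comes out of the machine along with the noodles: the
        # required cost-accounting tag is applied automatically.
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "cost-center", "Value": cost_center}],
        }],
    )
```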

Self-service is also an elegant solution to the annoying bureaucracy of ticketing systems. Instead of launching a request into the unknown and waiting for the Heavenly Immortal of the Great Nomad from the Eight High Caves to do something about it, the software engineer in need can just help themselves from the communal punchbowl.§

Earlier (in Chapter 3) I mentioned a trend in IT toward allowing users more freedom in how they use IT systems. This too is enabling bureaucracy, as Adler notes:

The distinction I propose to make between enabling and coercive designs of organizational systems parallels the distinction between equipment designed for usability and to enhance users’ capabilities, and equipment designed to foolproof the process and to minimize reliance on users’ skills.31

Agile Enablement

The shift from a coercive to an enabling bureaucracy is a large cultural change, as managers give up the prestige and comfort of being able to compel behavior from employees. Adler provides a few handy charts for distinguishing the designs of coercive and enabling bureaucracies.32 The similarity of enabling principles to Agile IT principles is striking. Then again, we shouldn’t really be surprised. As I explained earlier in this book, Agile frameworks provide a bureaucratic structure within which agility can be practiced. It happens to be a good—and enabling—structure. I’ve combined and paraphrased several of Adler’s tables and added the corresponding Agile principles in Table 1.

Table 1: Coercive versus Enabling Design
(with Corresponding Agile Principles)

| Coercive Design | Enabling Design | Agile Principle |
| --- | --- | --- |
| Have experts design systems and management enforce them | Involve employees in design to encourage buy-in | User-centric design and full-team participation |
| Focus design on technical features | Focus on flexibility and enablement | Reduction of technical debt; simplicity of design |
| Establish clear upfront goals and design carefully to accomplish them | Test successive prototypes with employees | Iterative, incremental design |
| Get it right the first time so it rarely needs revision | Encourage continuous improvement through suggestions from employees | Feedback and continuous improvement |
| Focus on performance standards to highlight poor performance | Focus on best practices to improve performance | Blameless retrospectives; best practices built into process frameworks |
| Standardize systems to eliminate game-playing | Focus on end results to allow for improvisation and application of employee skills | Diversity of skills in a cross-functional team; delivering value rather than implementing requirements |
| Enforce management control over employees | Give employees tools and insight to control their own activities | Give teams business goals and let them self-organize to find the best way to deliver them |
| Systems are instructions to be followed | Systems are best-practices templates to be improved | Continuous improvement |

Enabling bureaucracy turns out to be the Agile version of bureaucracy. It includes participation by the “users” of the bureaucracy, design for agility and learning, and fast feedback loops for iterative improvement. These may just sound like the best practices of nonbureaucratic organizations, but with enabling bureaucracy we’re still talking about a system with well-defined accountabilities and rules that will be enforced impartially. It’s bureaucracy by definition, but applied with a view toward self-service empowerment rather than punishment and control. It’s no less bureaucracy for that.

*

A soupçon is just a trace of something. You might have thought it has something to do with soup and maybe a little spoon, but actually it comes from the Latin “suspectio” and has more to do with the word “suspicion.” -ed.

†

See Chapter 1. -ed.

‡

Note that Schwartz is back on the subject of pasta here, as in A Seat at the Table, though he has diversified from Italian pastas to East Asian. -ed.

§

This is exactly what the Monkey King does at the Empress’s party. -ed.