APPENDIX
A Closer Look at the Factory Analogy
Foundation Principle: Production Process in Manufacturing versus Creative Process in IT
In manufacturing, the concept is that we design a product up front, define the production process, and then produce a number of identical items. In IT, we deliver a solution that is unique to our context each time we make a change. We are never making the same product again with the same components and the same source code in the same architecture setup. Legacy manufacturing was about reducing variability. In IT, we aim to innovate by using variability to find the best solution to a problem, to the delight of our customers.
Measuring Productivity and Quality Based on Standardized Output
We have discussed that IT never delivers the same thing twice, while in manufacturing, we deliver a large number of identical items. This has significant ramifications for the measurement of productivity and quality. Let’s start with the less controversial one: quality.
In manufacturing, if we are producing identical items, it is quite easy to assess quality once we have one “correct” sample or target specification. Any difference from this correct sample is a quality concern, and we can measure the number of such differences as a means to measure the quality of our production system. If there are systemic problems in the production system, fixing it once will fix the problem for all further copies of the product. Testing the manufactured product is often stochastic—we pick a number of samples from our production system to validate that the variance in production is as expected.
In IT, we don’t have such “correct” samples or target specifications. We test against requirements, user stories, or design specifications, but research and experience show that these are the root causes of many defects in IT. So, our sample or target specification is often not reliable. Fixing one defect is mostly done by fixing the problem on the individual product, not by fixing the production system. As a result, the number of found defects does not necessarily say anything about the level of improvement in the production system, as we might continue to deliver the same level of “ineffective” code as before. Measuring quality in IT has to be done differently. (I talk about this in chapter 7.)
Productivity is even more difficult to measure in IT.* Truth be told, I have attended many discussions, roundtables, and talks about metrics, but so far, an appropriate measure for IT productivity has not been found by anyone I’ve spoken to. Many CIOs admit that they don’t like whatever they are using for productivity measurement. When we put this in the context of the difference we pointed out earlier—that IT is a creative endeavor with unique outcomes in comparison to mass production in manufacturing—it becomes clear that productivity is elusive. How would you measure the productivity of a marketing department, an author, or a songwriter? You can measure the output (more flyers, more books, and more songs in a year), but does that make for a better marketing department, author, or songwriter?
I think you would agree that the outcome (e.g., a successful marketing campaign, bestseller, or number-one song) is, in this case, more important than the output. In the past, we have used lines of code, function points, and other quantifiable productivity measures, yet we all can quickly come up with reasons why those are not appropriate. With Agile, we started an obsession with story points and velocity, which are steps closer to a good answer, as they measure functionality delivered. But at its core, productivity continues to be difficult to measure. This leads to difficult discussions, especially when you are working with an IT delivery partner whom you would like to motivate to be very productive for you. (I explore some answers to this conundrum in chapter 4.)
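To make the velocity idea concrete, here is a minimal sketch of how velocity is typically derived from completed story points. The function name, the rolling window, and the sprint numbers are illustrative assumptions, not a prescription from this book:

```python
# Minimal sketch: velocity as a rolling average of completed story points.
# All names and numbers here are illustrative, not from the text.

def velocity(completed_points, window=3):
    """Average story points completed over the last `window` sprints."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

sprints = [21, 18, 25, 23, 30]  # points completed per sprint (made up)
print(velocity(sprints))        # rolling average of the last three sprints
```

Note that this measures output (points delivered), not outcome—exactly the limitation discussed above.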
I would love for someone to spend a bit of money to run parallel projects to see whether Agile or Waterfall, colocated or distributed, and so on can be proven to be better so that we can put that argument to rest for the holdouts that continue to argue for Waterfall delivery. In the meantime, the environment made the choice for us: the ever-faster moving market with changing requirements makes it difficult to justify doing any new work in a Waterfall manner.
Functional Specialization and Skill Set of Workers
Legacy manufacturing focused on getting the production process right by leveraging highly specialized workers in narrow roles rather than focusing on the people involved in the process. IT delivery methodologies initially took a similar approach, and while a structured methodology is very beneficial, IT continues to rely on creative knowledge workers, which is different from manufacturing. The overspecialization to an assembly-line-like worker performing only one specific task in the IT supply chain (e.g., test or development) has proven to be suboptimal, as context is lost in highly specialized organizations. Because we are not re-creating the same solution again and again, context is extremely critical for successful delivery.
Figure A.1: T-shaped skills: T-shaped employees have broader skills than I-shaped employees
Both Agile and DevOps movements have been trying to break down the silos that exist along the SDLC to achieve a more context-aware, faster, and higher-quality organization. With this comes the transition from the ideal worker being a specialist in his field to a more broadly skilled “T-shaped” worker. T-shaped people have a working understanding of many areas and are deeply specialized in one of them, in comparison to “I-shaped” people, who know only one area.2 The whole idea of this specialization in manufacturing was to allow less-skilled workers to perform the job of a master by relying on the production process to be prescriptive enough (designed by a skilled engineer) to make up for the difference. This, unfortunately, is not possible in IT due to the creative and complex nature of IT work.
Production Process Predictability and Governance
In manufacturing, the production process is reasonably deterministic. Once you have defined the production process and the inputs, you will get a consistent outcome. Unfortunately, this is not true in IT. Following the same methodology as another project that was successful does not guarantee you a successful outcome. There is some correlation, which is why some organizations are more successful than others, and that’s the reason why certain methodologies are more widely adopted (like Scrum, SAFe, or PMBOK). But this methodology is not nearly as reliable in IT as it is in manufacturing.
This ability to predict the outcome in manufacturing means it is possible to fix a problem with a product by changing the production process. In IT, this is not the case—just changing the process does not mean the problem is fixed. Many change-management professionals out there will be able to testify to this.
Furthermore, the many creative inputs mean that the process itself is inherently more complex and less predictable. Governance processes that assume the same predictability as in manufacturing ultimately cause a lot of unproductive behaviors in the organization. People will “fudge” the numbers to show compliance with the predictions, but this is either done by building in enough contingency or by “massaging” the process results and leveraging the inherent ambiguities of IT. More empirical approaches like Agile allow us to show the real, less precisely predictable progress and adjust expectations accordingly. Don Reinertsen makes one of the best-articulated cases for why pretending that IT product delivery is predictable is the cause of many management problems. He explains that manufacturing is based on repetitive and predictable activities, while product delivery is inherently unique each time. Trying to use the techniques that work in a predictable delivery process in an ever-changing one will lead to failure.3
I discuss governance in a bit more detail in chapter 3, but let me tell you that if you see a burndown or burnup chart (Agile status reporting mechanisms) where the actuals match exactly what the original plan predicted, you’ve found yourself a cheater, not a predictable team.
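The cheater test above can be sketched programmatically. This is a minimal illustration, with made-up numbers, of why actuals that track the ideal burndown line exactly should raise eyebrows:

```python
# Minimal sketch: compare burndown actuals against the ideal straight line.
# A team whose actuals match the ideal exactly, day after day, is more
# likely gaming the numbers than genuinely that predictable.
# All numbers below are made up for illustration.

def ideal_burndown(total_points, days):
    """Straight-line remaining-work values for each day of the sprint."""
    return [total_points - total_points * d / days for d in range(days + 1)]

def looks_too_perfect(actual, total_points):
    """True if the reported actuals coincide exactly with the ideal line."""
    days = len(actual) - 1
    ideal = ideal_burndown(total_points, days)
    return all(abs(a - i) < 1e-9 for a, i in zip(actual, ideal))

print(looks_too_perfect([30, 24, 18, 12, 6, 0], 30))    # suspiciously exact
print(looks_too_perfect([30, 30, 26, 25, 19, 15], 30))  # real-world noise
```

Real teams show variance around the ideal line; the empirical approaches mentioned above make that variance visible instead of hiding it.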
Importance and Reliance on Upfront Planning
Because the production process in manufacturing is reliable and the cost of setting it up is high, it makes sense to spend more time and effort on the up-front plan. This means we can plan for a new product in manufacturing, prototype it, and then run batches and batches of the same product.
In IT, each product is different, as we identified earlier. This means we never get into the true manufacturing production process but work instead in an environment that is more comparable with the prototyping process in manufacturing. Even so, we often try to leverage the lessons learned from the production process with all its up-front planning rather than the prototyping process, which is much more incremental. As a result, we expect predictability where there is variance in outcome.
As Gary Gruver and Tommy Mouser stated in their book Leading the Transformation: Applying Agile and DevOps Principles at Scale, “Executives need to understand that managing software and the planning process in the same way that they manage everything else in their organization is not the most effective approach …. Each new software project is new and unique, so there is a higher degree of uncertainty in the planning.”4
Governance of Delivery
Scientific management has been guiding manufacturing for a long time, and while it has been adjusted over the years, it is still the backbone of modern manufacturing. In its extreme, it means we are happy to govern delivery of components at the end of the process; for example, we are happy for the actual delivery to be a black box as long as the end product is exactly the product we specified. Because the specification is a lot more objective too, we can rely on this process.
Figure A.2: Iterative versus incremental delivery: Iterative delivery slowly increases the benefits of the product, while incremental requires the full product before it is useful
(Recreated based on image by Henrik Kniberg, “Making Sense of MVP (Minimum Viable Product)–and why I prefer Earliest Testable/Usable/Lovable,” Crisp’s Blog, January 25, 2016, http://blog.crisp.se/2016/01/25/henrikkniberg/making-sense-of-MVP.)
In IT, the specification is less clear; it’s the governance that needs to be a lot more transparent. Over the years, Agile has shown that the cone of uncertainty can be significantly reduced by building small increments of working software that can be evaluated at each iteration. This is not really possible in manufacturing, as building multiple iterations of a car is much harder than building multiple iterations of a website. So, in manufacturing, we will continue to rely on incremental build-and-progress reporting, while IT takes a more iterative build-and-progress approach. Figure A.2 nicely shows the difference between incremental and iterative delivery, using the example of building a vehicle.
Automation Is Productivity
Automation is one aspect where both manufacturing and IT delivery have a common goal: automate as much as possible. Automation is really the key to productivity. In manufacturing, automation means we can produce more with fewer people involved; that is also true for IT. The difference is that in manufacturing, automation is part of the productive process (e.g., it contributes directly to the outcome by assembling or creating parts of the product). In IT, automation makes it easier for developers and testers to be productive and creative by automating repetitive and nonessential tasks. In manufacturing, we see end-to-end automation of factories, while in IT, we cannot easily imagine such a thing (unless artificial intelligence really takes over for humans). What is true for both is that the reliability of the outcomes decreases as we use manual workers for tasks that can be automated. Humans are just not good at repetitive tasks and should focus on the creative aspects of work.
Scaling Efforts to Deliver More
Everything we have said so far—a more deterministic production process, less reliance on skilled workers, and differences in the predictability of outcome—means that scaling up manufacturing is something we have mastered over the years. You build another factory and hire workers, and you have a good chance of producing a similar product—even with all the possible cultural or logistical challenges.
In IT, scaling increases the complexity of the production process significantly more than it does in manufacturing. There are more parties that need to communicate, information needs to be disseminated more widely, and common context needs to be created across more communication boundaries. The cost of this additional scale is quite significant in IT. Yet IT systems continue to grow, and we have to find ways to deal with this apart from adding more people. We want IT systems to solve big problems, so we need better scaling approaches. Great work has been done to show that adding people to a project in trouble does not improve the outcome but makes it worse. Frederick Brooks describes how, in IBM, adding more programmers to a project did not make it faster but delayed it further.5 Or as he so poignantly summarizes it, “The bearing of a child takes nine months, no matter how many women are assigned.”6 Being able to do more with the same number of people while still making systems easier to maintain is the answer that Agile and DevOps practitioners prefer over just adding more people.
Centralization of Resources
Factories were a mechanism to provide economies of scale. Central production resources like the manufacturing machines and the input material were stored in central places so that the workforce could have easy access to them and produce in the most efficient way. In IT, this was initially true too. You had to have access to powerful computing resources, and later, good internet connections. Today, this is all available to you from pretty much anywhere thanks to broadband and the cloud. This means location should not be driven by resources but by the need for communication and the available skill sets. How do you bring the right team together to deliver the outcome you are trying to achieve? There is some centralization of resources that still makes sense (e.g., providing easy-to-use developer tools and standardized environments), but it has become a lot less necessary over time and, certainly, less location specific.
Offshoring
Offshoring in manufacturing was an appropriate way of reducing the economic footprint. The production process could be replicated offshore, and the variation in outcome was somewhat controllable. In IT, too often, the same principle was applied without consciously recognizing that context is extremely important for IT delivery to be successful; and accordingly, the communication channels need to be excellent to achieve the same outcome. Offshoring is still an appropriate way to extend your IT capabilities, especially when scale is required. Even with wage arbitrage, though, delivering the same project with a colocated team onshore or with distributed teams across the world is probably going to cost you roughly the same: distribution tends to slow things down due to the communication lag, so distributed projects tend to take longer. In many cases, onshore colocated teams are not an option, as the required skills or number of engineers are not available. Then, offshore capabilities are required to deliver complex projects. Unfortunately, many IT executives still see offshoring as a cost-reduction exercise rather than a capability extension, which causes a lot of the problems that created offshore delivery’s somewhat mixed reputation.
Outsourcing
Outsourcing IT is a lot more difficult than outsourcing manufacturing. In manufacturing, you outsource the production of specific components, and as long as the specifications are adhered to, you are happy. You can control the output pretty well too, as there is a concrete item being produced.
Outsourcing IT is a lot harder. The specifications in IT are much more complex and less set in stone. We know that many quality problems come from bad requirements, and no project gets completed without stakeholders changing their mind on something. Given that we cannot easily determine the product of the relationship and evaluate the outcome, we need to care about the process. An outsourcing partner in IT should provide you with capability you don’t have in-house, either in skills or experience, and they should be transparent and collaborative in their approach, as you will have to work through the complex project delivery together. Only when both sides win can the project be successful. Think about what that means in regard to the commercial construct and the people strategy for both sides. Too many IT outsourcing arrangements are adversarial in their setup and either deteriorate quickly or cause everyone to walk a fine line. I have been lucky to work with some great organizations who collaborated with me to identify a mutually beneficial structure to deliver projects.
What I hope is clear is that the ideas that got us here and made us successful in the past require some rethinking and adjustment to continue to work for us. Rather than thinking of labor-intensive legacy manufacturing factories, we need to operate more like highly automated factories. And I think the DevOps movement is a great catalyst for this required change in thinking.
* I elaborated in a post on my blog that productivity is very difficult to measure in IT. Instead, measure cycle time, waste, and delivered functionality.1
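As a minimal sketch of the cycle-time measurement suggested in this note, assuming each work item records a start and a delivery date (the dates and the median choice are illustrative assumptions):

```python
# Minimal sketch: cycle time as the elapsed days between work start and
# delivery, per work item, summarized by the median. Dates are made up.
from datetime import date

def cycle_time_days(items):
    """Median cycle time in days for a list of (started, delivered) pairs."""
    times = sorted((delivered - started).days for started, delivered in items)
    mid = len(times) // 2
    if len(times) % 2:
        return times[mid]
    return (times[mid - 1] + times[mid]) / 2

work = [
    (date(2016, 1, 4), date(2016, 1, 8)),    # 4 days
    (date(2016, 1, 5), date(2016, 1, 15)),   # 10 days
    (date(2016, 1, 11), date(2016, 1, 13)),  # 2 days
]
print(cycle_time_days(work))
```

Unlike lines of code or story points, a falling median cycle time reflects a faster-flowing delivery system rather than simply more output.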