7

Governance and Investment Management

Is reality exhausted by what is, or does it leave room for all that could be?

Susan Neiman, Evil in Modern Thought

We do not desire a thing because we judge it to be good; on the contrary, we call the object of our desire good, and consequently the object of our aversion bad.

Baruch Spinoza, Ethics

An enterprise wants to direct its resources to investments that will deliver the most value. In Chapter 4 I discussed business value in the context of the value we can expect IT investments to generate. This chapter will look at how to decide which IT activities to fund in order to garner that value.

IT investment management has traditionally been seen as a process of weighing the merits of a proposed IT initiative, as expressed in a business case, and making a go or no-go decision. But in the digital age this approach is riskier than necessary—it doesn’t take full advantage of the Agile and Lean revolutions. It’s also too slow, since it depends on accumulating the materials to support a business case and then shepherding that case through a series of approval hurdles. You can instead use speed to improve your decision-making and extract more value from each investment.

In the traditional model, we’ve seen that the business proposes initiatives, then IT uses its limited capacity to execute on them. Demand always exceeds supply, since the enterprise is filled with creative people who are constantly finding friction in their work and dreaming up improvements. There is a constantly growing backlog of IT requests waiting to be served.

Some of those are small and will be handled through some low-overhead prioritization process, often drawing on IT resources set aside in the budget for ongoing maintenance. Larger requests often take the form of a project to deliver a new IT system or make substantial changes to an old one; these are generally treated through a capital budgeting process. There may be a formal governance process to vet proposals and ensure that they’re aligned with the company’s strategy and business needs. For an investment large enough to cross this threshold, senior leaders are often very interested in overseeing or at least monitoring its progress. I’ll refer to these more formal processes as investment management and oversight, and they will be the main focus of this chapter.

In IT Governance, Peter Weill and Jeanne Ross propose six archetypes for how companies might set up their governance: business monarchy, IT monarchy, feudal, federal, duopoly, and anarchy.1 These names offer a good feel for the legalistic spirit of their processes. They’re governance in the sense of government, largely centralized (with the exception of anarchy, which, you’ll not be surprised to learn, is discouraged) and heavy-handed.

Notwithstanding Weill and Ross’s menu, I’ve found that governance decisions in an enterprise more closely fit a different archetype—something I described in A Seat at the Table as being like a Star Chamber.* Picture a group of hooded figures around a table in a dark room lit by a bright bulb overhead that makes it difficult to see their faces. Petitioners come with their investment proposals, and the Star Chamber renders judgment, awarding them either the boon of suffering through a large IT initiative, or condemning them to another year of workarounds and static websites.

In The Real Business of IT: How CIOs Create and Communicate Value, George Westerman and Richard Hunter describe a classic Star Chamber process:

The basics of the process involve project sponsors (1) developing a formal proposal that incorporates estimates of benefits, risks, and resource requirements and (2) submitting the proposal to decision makers who select preferred investments from the proposals. . . . In a transparent investment process, opportunities meet a well-defined prioritization process designed to identify winning proposals.2

I love their use of the expression “winning proposals.” It makes it clear that this is about choosing who gets rewarded—it’s about passing judgment.

Yet the Star Chamber has a fiduciary duty; it must decide how to deploy the organization’s resources to get a good return. It must oversee each initiative, again to fulfill its fiduciary responsibility, making sure the money is well spent and stopping the initiative if it’s not going well. The organization must have a well-defined process for making these decisions, as it will probably have to satisfy compliance requirements and auditors—and it will certainly have to satisfy its owners.

I’d like to suggest that there are better ways to accomplish this, if we’re willing to accept that uncertainty and complexity overwhelm traditional planning and that IT is not a passive, contractor-like fulfiller of requirements. In Star Chamber governance, the hooded figures hand IT its projects, selected from among those proposals that bubble up through the business side of the organization. Each investment decision is based on a proposed battle plan that is unlikely to survive first contact with bits and bytes and changing business needs. In the digital environment, Star Chamber governance places far too much reliance on a business case and plan that are prepared in advance.

Governance is about control, and it’s interesting to think about what that means in an environment of high uncertainty. In a sense, we clearly cannot control something that is subject to high degrees of chance, complexity, risk, and changing objectives. Nevertheless, intuitively we know that there is some kind of responsibility to keep the train on the tracks, to make the best possible decisions given the information at hand, to steer with an eye on getting the best outcomes for the enterprise—and that this can either be done well or done poorly.

There is a slight logical gap in the way this duty is exercised in the Star Chamber model. The governance body should be ensuring that the approved proposals are those that will do the most to advance the company’s objectives. But the Star Chamber is passive. It can only pass judgment on proposals that are presented to it. Isn’t it possible that some objectives, or parts of objectives, won’t happen to be covered by any proposed project? And how does it know that the projects that are proposed represent the best way to accomplish those objectives?

Since the Star Chamber invests in a specific plan and its business case, neither should be allowed to change during execution. That’s the connection between investment management and investment oversight: the goal of oversight is to make sure a project doesn’t vary much from its approved plan. Unfortunately, that also means that it resists Agile adaptation as circumstances change. You can’t play Agamemnon and have your donuts too, as they say.†

Is the Star Chamber really taking the best care of the company’s resources? It’s certainly not lean. Since a proposal might not be selected, the work the proposers did to justify it and the work the Star Chamber does to evaluate it might be wasted. Then there is the waste of even assembling the Star Chamber to make the decisions: proposals wait until the room is set, the spooky figures are convened, and the hooded robes come back from the cleaners. Remember that wait time is a classic source of waste in Lean thinking—a step that necessarily extends lead times.

The business case the Star Chamber evaluates is for the initiative as a whole, a monolithic set of requirements. You might say it’s a coarse-grained decision rather than a fine-grained one. It assumes that value is in the sum of the parts—that all those requirements add up to a single unit of value to be assessed. It’s analogous to a car; you can’t assess the value of its steering wheel alone—it only becomes valuable when it’s combined with other parts to make a complete vehicle.

To some extent this does make sense for IT governance as well. After all, an online shopping system, for example, isn’t viable unless it includes components that can take orders, handle payments, fulfill orders, and allow refunds. But you’d be surprised how minimal a product can be and still remain viable. And IT can often roll out a great deal of functionality quickly by reusing components it has already built for other purposes or by assembling components available in the cloud or from the open-source community.

It’s better to think of an IT initiative as delivering a number of individual capabilities, each having a different value, and some of which may need to be done in combination. Let’s say that investments A and B each include a number of features. Even if we decide to prioritize investment A over investment B on the strength of its business case, that doesn’t mean every feature in A is more valuable than every feature in B. Perhaps we should build four features from A, two from B, another one from A, three more from B, and so on. An Agile approach lets us do exactly that.
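As a minimal sketch of what that fine-grained prioritization might look like, consider a combined backlog in which each feature carries its own value estimate. The initiatives, feature names, and numbers below are hypothetical, purely for illustration:

```python
# Fine-grained prioritization across two proposed investments.
# The feature names and value estimates are hypothetical, not real data.

features = [
    # (initiative, feature, estimated_value)
    ("A", "one-click reorder",     90),
    ("A", "saved payment methods", 70),
    ("A", "order status tracking", 60),
    ("A", "gift wrapping",         15),
    ("B", "fraud screening",       80),
    ("B", "partial refunds",       55),
    ("B", "loyalty points",        25),
]

# A coarse-grained decision funds all of A or all of B.
# A fine-grained decision interleaves the work by estimated value, wherever it lives.
backlog = sorted(features, key=lambda f: f[2], reverse=True)

for initiative, feature, value in backlog:
    print(f"{initiative}: {feature} (estimated value {value})")
```

Even rough value estimates are enough to show that the most valuable work rarely lives entirely inside a single proposal.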

Coarse-grained investment decisions sacrifice many advantages that fine-grained, Agile techniques offer. The coarse-grained approach made sense in the waterfall world, which delivered a single, monolithic system at the end of each project. But in the Agile world, we can deploy individual capabilities to users as they are ready. We can work on individual requirements rather than in large batches, which we know from Lean theory inflate lead time. Coarse-grained governance results in a fixed scope, making it hard to remain flexible. It groups together work items that might individually have different priorities. As we strive to increase the agility of our IT processes, it would be a shame to forfeit the accompanying business agility.

The traditional project plan tries to manage risks by itemizing them in a register and proposing a mitigation plan for each one. Risks are mitigated to the extent necessary to bring the plan back into line—in other words, to adjust the initiative so the initial business case and plan are maintained. But the uncertainties in the IT domain go deeper—it might be that the very core of the plan needs to change.

Risks can only be itemized if they’re known. The problem is that true uncertainties—unknown unknowns—are probably what will have the deepest impact on the initiative. And the number of unknown unknowns is staggering in the digital world. They range from things we can’t know (Will a competitor suddenly release a new product tomorrow?) to things we just don’t know (Is a hacker about to compromise our system? Is there a bad piece of code in our system that’s about to be triggered when we add the next feature?).

Yes, you can incorporate risk into the traditional investment process by risk-adjusting the discount rate. But even this benign and textbook-adherent way of managing capital budgeting misses an important point. It assumes, incorrectly, that we have to make a single decision regarding our investment right now. But we don’t; agility allows us to make an IT investment in stages. We can choose to risk a smaller amount to begin the project, then gain further information that will help us make decisions about future stages. Such an approach is called metered funding, staged investments, or more broadly, discovery-driven planning.
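To make the difference concrete, here is a back-of-the-envelope comparison of committing an entire budget up front versus funding a small first stage that resolves the main uncertainty before the rest is committed. The costs, payoff, and probability are hypothetical, discounting is ignored, and the first stage is assumed to tell us definitively whether the initiative will pay off; a real model would be messier:

```python
# Committing funds up front versus metering them: a toy expected-value comparison.
# All figures are hypothetical and chosen only to illustrate the shape of the argument.

p_success   = 0.5        # chance the initiative turns out to be worth building
full_cost   = 1_000_000  # cost of the whole initiative
payoff      = 2_400_000  # value delivered if it succeeds
stage1_cost = 150_000    # small first stage that reveals whether it will succeed

# Option 1: approve the whole initiative now, before the uncertainty is resolved.
ev_upfront = p_success * payoff - full_cost

# Option 2: fund only the first stage, learn, and continue only on good news.
ev_staged = -stage1_cost + p_success * (payoff - (full_cost - stage1_cost))

print(f"Expected value, all up front: {ev_upfront:>9,.0f}")
print(f"Expected value, staged:       {ev_staged:>9,.0f}")
```

Staging wins in this sketch because the downside case costs only the first stage, not the whole budget.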

Venture capital firms practice metered funding—series A investments usually fund a startup as it develops its products, hires its first set of employees, and performs its initial marketing and branding activities. Series B usually occurs once the product is in the marketplace; it’s used to scale up and establish a market position. Series C occurs when the company has been proven successful and is looking to introduce new products, grow more substantially, or prepare to be acquired or conduct an IPO. At each successive stage, investors pay more for the amount of equity they receive because uncertainty has been reduced.

In an old-style waterfall project, you wouldn’t necessarily gain useful information in early project stages that you could use to reduce the uncertainty of later stage decisions. After several months of work, the developers might report that they’re “15% done with component A and 13% done with component B.” That doesn’t give you much information about whether to invest in the next round of funding. But in the digital world you would set up the initiative to quickly deliver results, elicit feedback, and yield information about whether to fund the next stage, or what changes should be made in order to justify additional funding.

With an Agile initiative you can also get an immediate return on the early stage investments, since capabilities are constantly released to production. If the company decides not to fund the second stage, then the first stage’s product is still available for people to use. As I mentioned in Chapter 4: The Business Value of IT, the return should really be modeled as a return on series A plus an option on future stages. If the option isn’t exercised, there still remains the value of series A.

That’s why the Agile approach makes it effective to innovate through experiments; these are economically justified because of the option value. If you make the first stage short enough and its investment small enough, sooner or later the option values start to outweigh the first stage cost. And the portfolio of ideas being tested, like a VC firm’s startup portfolio, may yield a successful idea that the enterprise can later make a big bet on.
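Seen as a portfolio, the arithmetic is the same one a VC uses. The sketch below assumes, purely for illustration, twenty cheap experiments, a 10% chance that any one of them uncovers an idea worth scaling, and a rough estimate of the net value if the option to scale is exercised:

```python
# Why a portfolio of cheap first-stage experiments can be justified by option value alone.
# Every number here is a hypothetical estimate, not data from a real portfolio.

import random

random.seed(7)

n_experiments = 20
stage1_cost   = 25_000    # cost of running one small experiment
p_big_win     = 0.10      # chance an experiment uncovers an idea worth scaling
option_value  = 600_000   # estimated net value if we then exercise the option and scale up

total_cost = n_experiments * stage1_cost
expected_option_value = n_experiments * p_big_win * option_value

print(f"Total first-stage cost:        {total_cost:,}")
print(f"Expected value of the options: {expected_option_value:,.0f}")

# One simulated run of the portfolio, VC-style: most experiments fizzle, a few pay off.
wins = sum(1 for _ in range(n_experiments) if random.random() < p_big_win)
print(f"Winners in this simulated run: {wins} of {n_experiments}")
```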

Metered funding can be used throughout the project’s life, which leads to my next point: we should always cancel successful projects, not failing ones.

Here’s why. If the investment decision-making process has done its work well, then the initiative is well-justified and is expected to return business value. Now let’s say that we’re staging our investments, which amounts to periodically remaking our investment decision—perhaps monthly. Since a successful initiative has been constantly delivering results—this is what we expect in the DevOps world—we can evaluate what it has delivered so far and what we believe will be delivered in the future. And since we’ve prioritized the highest return tasks and accomplished them first, we should be seeing diminishing returns. At some point our oversight process might find that enough has been achieved, so it makes sense to stop investing in the effort: resources should instead be moved to a different initiative. This would be a rational decision and one that reflects very well on the project.
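A sketch of that stopping logic, with hypothetical numbers: because the highest-value items were delivered first, the expected value of each additional month declines, and at some point it falls below the cost of continuing.

```python
# "Cancel successful projects": fund another month only while its expected value
# exceeds its cost. The monthly cost and value estimates below are hypothetical.

monthly_cost = 100_000
expected_value_by_month = [450_000, 300_000, 180_000, 120_000, 90_000, 60_000]

for month, expected_value in enumerate(expected_value_by_month, start=1):
    if expected_value < monthly_cost:
        print(f"Month {month}: expected value {expected_value:,} < cost {monthly_cost:,} "
              f"-> stop funding and redeploy the teams")
        break
    print(f"Month {month}: expected value {expected_value:,} -> fund another month")
```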

On the other hand, if the project seems to be going off course—it’s not returning what we truly believe it could be returning—then we shouldn’t cancel it. After all, we believe it can return more. Rather, this is the moment to make adjustments to the project to get those higher returns. Is the team running into impediments that can be removed? Does it not have the resources it needs? Is this the wrong set of people to be executing the initiative? At this point we should address all of these issues.

We have often thought of project failure as being the fault of the team assigned to execute it; we cancelled their project as a sort of punishment. This makes little sense for two reasons. First, it’s probably not the team’s fault—after all, they were chosen for the project because we thought they could execute it best. Second, the justification for the project still exists; if it is a real business need, then cancelling the project leaves that need unmet.

We should instead take advantage of all the options that new IT approaches present. If we can buy additional information to reduce the risk of our investment decision, if we have the choice of stopping an initiative that has already returned sufficient value . . . well, why not? It would be irresponsible to pretend that we can make long-range, point-in-time decisions despite the uncertainty in our environment. We now have the option of staging investments, learning as we go, and adjusting plans. And if we insist on sticking to a plan that we made early—before the initiative started and when we had the least available information—then we’ll likely miss out.

In Chapter 4 I suggested that instead of soliciting initiatives from around the business and prioritizing them, it would be better to start from the organization’s strategic objectives and cascade down from these to the initiatives. Westerman and Hunter describe using this approach:

We used to work with the power users in every function from the bottom up to develop the IT strategy, and it didn’t necessarily connect to the business strategy. By coming from the top down, we were able to redirect IT effort on major initiatives.3

It might seem impractical to do this for basic maintenance work, which includes any number of small tasks that are difficult to tie to strategic objectives. But this can work for two reasons. The first is that all of the little maintenance tasks that “must” be done . . . must they? Has the company not been able to operate without them? It’s important to focus resources on what is most essential, not on what can somehow be justified.

The second reason is that the initial development work, if done correctly, might make such small tasks unnecessary. In the DevOps model the team that launches a feature continues to monitor its success and make adjustments to it. The feature is not really finished until it’s meeting all the company’s needs, so there is little reason to “maintain” it later by fixing bugs and tweaking functions. That backlog of small requests should become small.

The preceding thoughts apply as long as we’re governing discrete initiatives—projects, in oldspeak. We should always stage our investments and buy down risk. We should experiment freely, creating options that might become valuable. We should cascade strategic objectives into initiatives, rather than improvising initiatives that might or might not be relevant to strategic objectives. We should avoid vomiting user stories, in Pascal Van Cauwenberghe’s phrase.4 This is the Agile way to govern projects.

But we should not be governing projects. DevOps, as a Lean process, is based on minimizing batch sizes, which means processing very few requirements at a time, finishing them quickly, and moving on to the next set of requirements. Each requirement can be coded quickly and its capability delivered to users—on the order of minutes to days. DevOps can even take us close to single piece flow, where one requirement at a time can be worked on and delivered.

This is quite remarkable. It would make the IT process amazingly responsive, taking in each new mission need, immediately cranking out a solution, then quickly improving that solution until it is perfect. It would let you change course at any moment to respond to changing circumstances or to try new ideas. It would reduce delivery risk to near zero, since every item would be delivered almost as soon as work started.
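A toy illustration of why batch size matters so much here. Assume, just for the arithmetic, that each requirement takes one day of work; the only question is whether each one ships as soon as it is done or waits for the whole batch:

```python
# Batch size versus lead time, the Lean argument in miniature.
# Assumes one day of work per requirement; the numbers are illustrative only.

n_requirements = 20
days_per_requirement = 1

# Single-piece flow: requirement i ships as soon as it's finished, on day i.
single_piece = [i * days_per_requirement for i in range(1, n_requirements + 1)]

# One big batch: nothing ships until the entire batch is done.
big_batch = [n_requirements * days_per_requirement] * n_requirements

def average(xs):
    return sum(xs) / len(xs)

print(f"Average lead time, single-piece flow: {average(single_piece):.1f} days")
print(f"Average lead time, one big batch:     {average(big_batch):.1f} days")
```

Roughly half the waiting disappears simply by not holding finished work hostage to unfinished work.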

But even if we have the technical ability, we often can’t use it because our governance committee only meets once a year, when the Star Chamber room can be rented. And of course we can’t convene the Star Chamber for every requirement. In fact, Lean principles would suggest that we avoid the wait time necessary for getting the hooded figures to make investment decisions in the first place. The only way to take full advantage of single piece flow is to decentralize governance decision-making.

But isn’t the whole point of governance to centralize these decisions, to avoid the chaos of decisions made separately across the organization? Yes, but there are ways we can decentralize decision-making and maintain centralized direction. I know of three models for doing so: the product model, budget model, and objective model.

In the product model, teams of technologists are assigned to work as part of a particular product group. This group oversees the roadmap for their product, taking feedback from the market and input from the company’s overall competitive strategy. They’re generally responsible for the performance of their product, measured in whatever way makes sense to the company, but they have some freedom to develop and prioritize ideas for their roadmap. For digital products, these are largely digital features, of course. This is fairly close to the model used by Amazon Web Services, where product teams manage their own feature roadmaps in consultation with customers.

In this model the technologists become very familiar with their product and its underlying technology. Because decision-making remains within the group, communication channels are short and lean. The team works toward product objectives, which might be cascaded down from companywide strategic objectives. They also work backward from customer feedback, test hypotheses about which features will be valuable to customers, and gather additional feedback from them as they use the product.

A similar idea can be applied to “products”—business support applications—used internally by the company. The technologists align to whomever is responsible for the product and become experts in both its use and internals. For example, the technologists might align with the HR group that oversees a human resources system.

The budget model is the approach we use all the time for spending that isn’t either “project based” or related to large capital investments. Now that we can execute our efforts at the single requirement level, why even have IT delivery projects? There’s just everyday IT work, analogous to routine efforts across the rest of the enterprise. IT folks simply come to work every day and, like everyone else in the company, produce whatever needs producing. This may mean they create new IT capabilities, modify existing ones, or perhaps improve security. Some of this effort might need to be capitalized for financial reporting, but that’s a topic for a later chapter.

When a company allocates budgets and cascades them through an organizational hierarchy, it’s passing governance authority down to the budget holders. Why shouldn’t this be done with IT initiatives as well? Some of IT’s expenditures are already managed as budget items, after all—why not the rest? Such an approach is all the more plausible now that there is very little difference, execution-wise, between maintenance of existing systems and development of new ones. There is simply a rolling set of tasks that must be completed by delivery teams.

If you drop the idea of individual systems or products and consider the entire IT estate as a whole—the single large IT asset I’ve described—then all IT development work simply amounts to enhancements or maintenance work to this asset, whether expensed or capitalized. Investment decisions are really the assignment of budgeted teams to work streams, along with the decision as to how many teams to fund in the first place. If the company funds twenty delivery teams, for example, then the CIO can decide how many of them to put on each objective or set of capabilities, and can move those teams between work streams as deemed appropriate.
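In this model, the “investment decision” looks less like approving a business case and more like maintaining an allocation of already-funded teams across work streams. A minimal sketch, with hypothetical work streams and team counts:

```python
# The budget model as an allocation problem: twenty funded teams spread across work streams.
# The work-stream names and counts are hypothetical.

allocation = {
    "customer onboarding": 6,
    "payments platform":   5,
    "security hardening":  4,
    "internal tooling":    5,
}
assert sum(allocation.values()) == 20  # twenty funded delivery teams, as in the example above

def move_teams(allocation, source, target, count):
    """Shift funded teams between work streams as priorities change."""
    if allocation[source] < count:
        raise ValueError(f"only {allocation[source]} team(s) on {source!r}")
    allocation[source] -= count
    allocation[target] += count

# Mid-quarter, the CIO shifts two teams toward security work.
move_teams(allocation, "internal tooling", "security hardening", 2)
print(allocation)
```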

The budget approach allocates funds to the CIO to use in managing the company’s technology assets. It’s the approach most consistent with the Intrax CEO’s message in the Introduction, as it makes the CIO responsible for the returns from the organization’s IT investment portfolio. Yes, this puts a lot of responsibility in the CIO’s hands—just as the enterprise places heavy responsibilities in the hands of other CXOs. They all report to the CEO or board and are managed by them. No CIO is free of oversight.

One reason why this approach has seemed out of the question is simply the traditional business/IT split—that arms-length, contractor-control model. You wouldn’t give this decision power to a contractor, right?

In the objective model, a team is chartered with a specific business objective, cascaded from a critical company objective. The team consists of technologists together with business operations people—a group the organization believes can actually accomplish the objective. The team then owns the objective rather than a set of requirements. It does whatever it can to accomplish it: testing hypotheses, making decisions, and rolling out IT or business process changes.

I can explain this best by an example. My team at USCIS was responsible for E-Verify, an online system employers use to check whether an employee is eligible to work in the US. Although employers aren’t generally required to use E-Verify, we were afraid that its use would become mandatory as part of a broader immigration reform. If so, we knew it wouldn’t be able to scale up enough to handle that transactional volume.

We also realized that expanding E-Verify wasn’t primarily a technical problem but a human one. The system could automatically determine the eligibility of 98.6% of the people presented to it, but a person (called a status verifier) had to research and adjudicate the remainder. In addition, observers had to monitor use of the system for potential fraud and misuse. Neither set of people would scale with increased use of the system.

So we launched an E-Verify modernization project, initially using the traditional waterfall approach. A team collected requirements, over time organizing them into about eighty-five required capabilities—including hundreds of specific features. They then began designing the system and preparing the many documents required for the DHS investment governance process. After four years, all they had produced was a stack of one-inch paper binders.

We decided to take a radically different approach. We . . . ahem . . . reclassified the one-inch binders as trash, then reduced the project to five well-defined business objectives:

  1. Raise the number of cases a status verifier could process per day (about seventy at the time).
  2. Increase the 98.6% of cases the automated system could handle on its own to something closer to 100%.
  3. Improve the registration completion rate—a large number of companies were beginning the E-Verify user registration process but never completing it.
  4. (A goal around fraud and misuse.)
  5. (A goal around technical system performance.)

We then made a very Lean investment decision. We said we were willing to spend 100 livres every three months to accomplish each of these goals, but would informally revisit the investment decision every month, and formally each quarter. Meanwhile, we also built dashboards to track metrics continuously for each objective. Because the project executors had all of the technical tools and cloud platforms already set up for them, we expected them to show results in some metrics within two weeks and continuous improvement thereafter.
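The monthly review amounted to a simple question against each objective’s dashboard: is the metric still moving enough to justify another month? A minimal sketch of that check, with hypothetical dashboard readings and a hypothetical threshold:

```python
# One possible shape of the monthly funding check against a dashboard metric.
# The readings and threshold are hypothetical, not actual E-Verify data.

cases_per_verifier_per_day = [70, 74, 81, 89, 94, 96, 97, 97]  # monthly readings
min_monthly_improvement = 2  # smallest gain that still justifies another month of funding

def keep_funding(history, threshold):
    """Recommend continuing only if the latest month moved the metric by at least `threshold`."""
    if len(history) < 2:
        return True  # not enough data yet; keep going
    return (history[-1] - history[-2]) >= threshold

if keep_funding(cases_per_verifier_per_day, min_monthly_improvement):
    print("continue funding this objective")
else:
    print("metric has plateaued; revisit the investment")
```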

Having formed a team consisting of technologists (with skills in coding, testing, infrastructure, and security) and business operational folks (status verifiers), we gave them the first objective. We instructed them to do whatever they thought best to raise that number of cases, whether by writing code or making business process changes, and that we (management) would help remove impediments.

More precisely, I said that for every case above seventy they were able to deliver, they would get a gold star. If they did any work that wasn’t intended to increase that number, with a wink I said I’d take one of them outside and shoot them as an example to the others.‡ That was our control for scope creep and feature bloat. I also said that we would meet every two weeks to discuss the results and see what we in management could do to help.

To begin the initiative, we also brought together a broader team—managers, verifiers, technologists—to brainstorm ideas that might help the team in its efforts. We used an impact mapping technique (described in the next section) to create a “mind map” of hypotheses about what might increase that metric. But the team wasn’t required to use the mind map—they were to use their judgment to prioritize tasks. We only cared about results.

Every two weeks we had a discussion to align management and the team, as well as to remove impediments, and every month we reported our results to the steering committee responsible for overseeing the investment. We were able to show immediate gains, and the metric continued to improve over the months that followed. The steering committee chose to continue with the investment.

We did something similar with the other four objectives by assigning each to a team, then regularly checking on progress. Something interesting happened with the registration rate objective (number three). Initially the team showed improvements in the metric, but after a few months it reached a plateau. The business owners and I asked about the ideas the team was trying—the hypotheses it was testing—and agreed with the team that it was doing the right things. We concluded that the metric was not likely to improve any further, perhaps because a certain number of companies who started the registration process realized that E-Verify wasn’t for them, or because people were trying it out to see what it was but weren’t ever planning to sign up.

In reporting back to the steering committee, we therefore recommended that it stop investing in that objective, and instead move the budget to another one—even though we had originally planned to spend more. In other words, the team cancelled the remainder of its own project, with the consent of the steering committee.

What had been planned as a four-year project ended after two and a half years because it was so successful. Each objective had been accomplished to the extent that we all agreed it could be, so the remaining funds were returned for use in other projects. You could say that the project had achieved the Agile ideal: maximizing outcomes while minimizing output, or in other words, maximizing the amount of work not done.

To me, this shows the power of DevOps when used with an appropriate investment management process. The amount of money at risk at any given time was only one month of funding, as the investment was reviewed monthly and showed results daily. Value was delivered immediately and frequently thereafter. The teams could innovate freely but only in relation to an agreed-upon business objective. And the process had very little overhead: each month we reported the business results (obtained from our dashboard) and the amount spent to the steering committee, and each quarter we had an hour-long discussion with them.

What if the objectives hadn’t been so easily quantifiable? Organizations often force themselves to find something quantifiable, even if it doesn’t exactly measure what the objective intends. Instead, I would put the burden on the team to provide evidence of its results, even if the evidence isn’t quantitative. Since the team is thoroughly absorbed in the effort, they are the most likely to know what impact they are having. At the biweekly meetings, management can evaluate the evidence, decide whether it is reasonable, and suggest other methods if necessary.

As I mentioned before, the objective model works especially well with impact mapping, a technique developed by Gojko Adzic. Impact mapping provides a cross-functional team with a way to visualize problems and possible solutions, such that everyone can work from a shared view of their task, a sort of mind map diagram showing possible routes to the solution. Impact mapping begins by identifying the most important goal the team should be working on. “Goals should not be about building products or delivering project scope. They should explain why such a thing would be useful . . . [they] should present the problem to be solved, not the solution,” Adzic says.5

Team members first ask themselves the question, “Who are the actors whose behavior needs to change in order for the organization to accomplish the goal?” These are the people who can produce the desired effect or obstruct it—often employees, consumers of the product, or other stakeholders such as regulators. This becomes the first layer of the impact map.

The team then asks what behavior changes on the part of each identified actor will help achieve the goal—these become the impacts the team is trying to create. Finally, they ask what they can do as a delivery team to support those impacts—these become deliverables, software capabilities, and organizational activities.6

In the E-Verify project, we used impact maps to bring the team and management together to frame the problem, brainstorm alternatives, and develop a common language. For each branch of the map we estimated the amount of impact it might have on the target objective.§ This gave the team some ideas about prioritization and an initial set of hypotheses they could test.
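An impact map is easy to represent as nested data: goal, then actors, then impacts, then deliverables, with the impact estimates we layered on top. The entries below are hypothetical and only loosely modeled on the verifier-throughput objective:

```python
# An impact map as nested data: goal -> actors -> impacts -> deliverables.
# Entries and "estimated_lift" figures are hypothetical; the lift estimates reflect
# our own addition to the technique, not standard impact mapping.

impact_map = {
    "goal": "raise cases processed per status verifier per day (baseline ~70)",
    "actors": [
        {
            "actor": "status verifiers",
            "impacts": [
                {
                    "impact": "spend less time gathering evidence per case",
                    "estimated_lift": "+8 cases/day",
                    "deliverables": [
                        "pre-fetch supporting documents into the case view",
                        "streamline the case-closing workflow",
                    ],
                },
            ],
        },
        {
            "actor": "employers submitting cases",
            "impacts": [
                {
                    "impact": "submit cleaner data so fewer cases need manual review",
                    "estimated_lift": "+4 cases/day",
                    "deliverables": ["validate employee data at the point of entry"],
                },
            ],
        },
    ],
}

for actor in impact_map["actors"]:
    for impact in actor["impacts"]:
        print(f'{actor["actor"]}: {impact["impact"]} ({impact["estimated_lift"]})')
```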

A team’s results are easy to measure with the impact map in hand. As Adzic says:

The role of testing becomes to prove that deliverables support desired actor behaviours, instead of comparing software features to technical expectations. If a deliverable does not support an impact, even if it works correctly from a technical perspective, it is a failure and should be treated as a problem, enhanced or removed.7

What is tested is the achievement of the goal, or, to put it differently, the business value created.

Star Chamber governance is based on a mental model in which coarse-grained projects are vetted, compared, and given to IT for delivery. Oversight then focuses on making sure the plan that has been approved is executed. But this way of overseeing investment decisions does a poor job of supporting an organization’s need for agility and continuous innovation.

Star Chamber governance, I would argue, doesn’t provide the best stewardship of an enterprise’s resources. Instead, you should stage investment decisions to deliver minimal products, learn from their results, then invest incrementally in additional capabilities. This is how you create the organizational agility you need to survive in the digital world, and at the same time gain better control over your investments.

* I had to look up “Star Chamber”—it was one of those terms I vaguely knew of and that felt like it applied here. It turns out that the Star Chamber was an English court in operation from around 1487 to 1641, which became known for its arbitrary and subjective judgments, as well as its secrecy.

† They don’t really. I do. See epigraph to Chapter 3.

‡ Disclaimer: not official government policy, but we really have to do something about feature bloat.

§ Impact mapping does not include estimating the value of each branch—we added that.