The IBM 360, a Roman Longship, and the Principle of Cascading Objectives
Establishing clear goals and objectives is essential to successful execution and adaptation. It was clear goals that made Eisenhower’s D-Day plan so adaptive, because they allowed the field commanders to adjust the plan when things didn’t go as expected. When most of the amphibious vehicles sank, jeopardizing the mission to take out the howitzers atop the sea cliffs, Lieutenant Colonel James Rudder improvised, using bayonets and grappling hooks to scale the cliffs, because he understood how crucial that goal was to the rest of the plan.
There are two basic types of goals, mission and metric, and they’re symbiotically related. One, the mission goal, is a narrative; the other, the metric goal, is mathematical. Put simply, the mission is the narrative that creates the metrics. Mathematics, after all, is a language, and the metric goal is just another way of stating the mission goal. Facebook defined its mission as being a social communication platform. Its associated primary metric was the time users spent communicating. The mission of Myspace was to be a way to meet people, so its primary metric was the number of connections or friends made. NASA’s mission was to create a reusable space truck, and its associated metric was the unit cost to orbit. An essential step in developing our business model in the shuttle program was turning the problem to be solved into a mission and then turning the mission into specific mathematical metrics. If you find you can’t turn your problem into a metric or series of metrics, I would argue that you have a fuzzy model at best.
Choosing the metric to measure is absolutely critical but deeply misunderstood, especially in larger organizations. Most publicly traded companies manage according to financial metrics like sales, revenues, and profitability. This is a mistake; these are not strategic metrics, and managing according to them causes us to lose focus and creates strategic misalignment. It’s not that they aren’t important to track; they are. They’re just not inherent to the strategy, and they don’t help us manage or make strategic decisions. Remember, it’s the problem that determines the strategy, so the metrics need to be tied to the problems being solved. Financial metrics are scorecards that tell us how well we’re doing, the ultimate effectiveness of our strategy, but they don’t give us insight into why the model is or isn’t working, or into how to evolve it. Financial metrics tell you about the past, but you need to manage to future opportunities, not to history. A manager who makes strategic decisions based on income statement metrics is like the captain of a boat who steers it by assessing the wake.
Choosing the Right Metrics
Take, for example, the story of IBM and how it managed to evolve into one of the world’s most powerful companies in the 1970s and 1980s. In the 1950s, IBM dominated the computer industry, and by 1962 it had revenues of more than $2.5 billion and profits in the hundreds of millions of dollars. Its products were used for accounting, record keeping, inventory control, forecasting, and cost analysis as businesses became more and more complex. But Thomas Watson, Jr., who’d taken the helm from his father, sensed impending doom.
Watson was a salesperson, and IBM was, and still is, a sales-oriented company. At the time he took over, the company’s financial metrics were strong. In fact, IBM’s strategy had increased revenue and profitability consistently over the past decade. However, being a salesperson, Watson was very concerned by the number of deals he was losing to start-up companies. He was losing market share. In 1950, there were only four computers in the entire world; in 1957, there were more than 250, and the demand for the machines was growing. Companies such as Honeywell, Raytheon, and Sperry Rand were developing their own lines of computers, and although they weren’t near the size of IBM, they were successful companies in their own right. The problem Watson identified, and feared, was one of compatibility. None of the IBM machines could talk to one another. And worse, when a customer wanted to “upgrade” to a newer and more powerful machine, it had to write a whole new set of instructions, in effect develop new software, to make the new machine do what the old one was already doing. This meant that there was little pain in changing vendors when a customer upgraded; put another way, there was equal pain in getting a new IBM machine and switching to a new Honeywell machine. Watson wasn’t blind to this problem, and although he was currently winning the battle, he feared ultimately losing the war.
This was the genesis of one of the biggest product development projects in the history of modern business, the development of the IBM 360. The Harvard Business School professor Richard Tedlow compares it to Oppenheimer’s Manhattan Project in terms of technical challenge, complexity of project management, and the huge amounts of resources required. It took more than five years to develop at a cost of almost $5 billion. No other company has ever made such a significant investment in a new, untested product. The $5 billion was invested at a time when IBM’s annual revenues were only $2.5 billion. That’s like Google announcing a $32 billion project today or Wal-Mart making a $720 billion investment in new infrastructure or stores. To say it was a gamble is an understatement.
But to IBM’s credit, and Watson’s, it was a gamble that paid off, big time. The 360 was really a series of products, all built on a common internal architecture. The product line included a series of computers from small to large, from low to high performance, all using the same internal command set. This meant that IBM could sell a small, cheap computer and then upgrade customers to larger systems without the time and expense of rewriting the software to fit their needs. It also included a line of peripherals like terminals, magnetic tape storage devices, optical character readers, and high-speed printers. Not only that, but a company’s accounting system could now communicate with its record-keeping system, eliminating the need for additional communication software. This solved the compatibility problem and became a very powerful selling tool. “No one ever got fired for buying IBM,” the mantra went. Each of the competitors’ machines, in the meantime, had its own operating system, incompatible with existing equipment, and so switching or upgrading required a huge investment. Some say that the IBM 360 was the most successful product in the history of business, and it catapulted IBM into an even loftier position among the great companies of the time. As Tedlow said, the magnitude of the success “startled” the company. According to Watson himself, “My anxiety was misplaced. We got an immense number of orders—far more than expected—and even more kept pouring in.” For the next few decades, IBM would ride the wave of success of the 360 and the 360 business strategy. It would dominate the computer and data-processing industry the way Google dominates the Internet search engine industry today. In 1987, Fortune magazine published an article about Watson, the development of the 360, and the 360 strategy, titling it “The Greatest Capitalist in History.” It called Watson the “most successful capitalist who ever lived.”
How did Watson do it? It was by managing to his metrics. His primary metric was market share, and even though his sales, revenues, and profitability were strong, his company was losing market share, so he devised a new strategy to combat that problem.
When choosing your metrics, you want to choose ones that are leading indicators and tied directly to the problems you’re solving. In economics, there are three types of indicators: leading, lagging, and coincident. Leading indicators change before the economy starts to follow a particular trend; they include things like initial unemployment claims, building permits, and new orders for manufacturers. Lagging indicators change after a change in economic trends and include things such as the prime interest rate, the consumer price index, and the ratio of outstanding consumer credit to personal income. Coincident indicators change at the same time that the economy changes and include personal income, retail sales, and gross national product (GNP). In business, your income statement is a lagging indicator; it tells you the sales trend after it happens. Your balance sheet is a coincident indicator because it’s a snapshot in time of the financial condition of your company, important for obtaining financing and making sure the company is solvent, but it doesn’t do much to help you make strategic business decisions.
Market share is a good metric to choose if you’re the market share leader, your strategy is well established, and you’re playing a defensive game. For companies like IBM, GE, and Google, this metric works. General Electric, under Jack Welch, used market share as a primary metric in each of its businesses, saying that it had to be number one or number two or else the company would divest the business. He understood the superiority of defense and made it a key part of his strategic decision-making criteria. However, if you’re not playing a defensive game, you need to choose a metric that’s more closely tied to your strategy.
Remember, your hypothesis is based on solving a customer problem. This led to your strategy and, if conceived correctly, will also lead to your choice of metric. Sam Walton was solving a low-price problem; his metric was therefore the price of goods on the shelf. Everything in the organization laddered up to this metric, and it was easy to measure; he simply compared his prices to those of his competitors. In turn, he used this metric to determine the effectiveness of his tactics. If a tactic pushes the price lower, it’s effective. If it pushes the price higher, it’s not aligned with the strategy. Wal-Mart could raise the price of toilet paper and increase profitability, but it would be violating its business model.
Every business model that aspires to be adaptive needs both a primary metric and a series of supporting or secondary metrics. The primary metric is the one most purely inherent to your strategy. For Wal-Mart it’s the price of its products, and for Facebook it’s the time spent communicating. I call these primary metrics because they should trump the secondary ones. In other words, everyone in the organization needs to manage to the primary metric and not be led astray by watching secondary ones too closely.
The Principle of Cascading Objectives
Imagine that you’re the captain of a first-century Roman longship. It’s more than a hundred feet long and propelled by a large square sail. On board you have a hundred slaves. When you prepare for battle, you lower the sail and use ten pairs of oars to power your vessel. Your slaves man the oars on the lower decks. Oar power makes your vessel more maneuverable than sail power. The problem you face is coordinating the one hundred slaves to make each of them more effective. There are five slaves per oar, and you’ll maximize speed if they’re all pulling at the same time and with the same draw. If you’ve ever rowed a small boat or canoe, you know the problem. If the oarsmen on the starboard side of the boat row faster than the ones on the port, you won’t maximize your power and you’ll have to constantly adjust your direction to stay on track. When the oarsmen get out of sync with one another, all hell breaks loose. An individual oarsman can be working his ass off, but if he’s not coordinated with the other oarsmen his efforts can be futile, and in some cases can actually hurt the progress of the ship. In order to become an effective fighting machine, the slaves have to be in sync. That synchronization is the primary metric.
So the captains developed tactics to improve this metric, like having a drummer beat out a rhythm for the slaves to row to. In business we should use the principle of cascading objectives to ensure that we’re all in sync.
Years ago, I founded a company called Preferred Capital. It was a leasing company that financed equipment for small and medium-sized businesses. The problem we were solving was providing “convenient” access to capital. We wanted to make it easy for companies to acquire new equipment for their businesses. We used a direct marketing program to generate sales. First we’d prequalify companies through Dun & Bradstreet for credit. Then we’d send them a “prequalified leasing card” in the mail (it was a faux credit card that said we’d established a lease line of credit for their company). We’d ask them to activate it—whether they needed it now or not. Once they did, then we’d market to them, periodically reminding them of their line of credit and working to establish a relationship with these “activated customers.” When they needed equipment, they’d call us and we’d process the order for them by paying for the equipment and then setting them up on a monthly payment plan.
Our primary metric was the activation rate of our faux credit card mailer. It was the leading indicator of our business and the effectiveness of our tactics. We worked hard to develop ways to increase our activation rate and build our database of activated customers. We tried different kinds of cards. We used different envelopes. We changed the offer. We tried mailing to different types of lists, different industries, different-sized companies. We measured the effectiveness of each of these tactical changes by its effect on the activation rate. Things that increased the rate, like mailing to printing companies, were deemed successful and fully implemented. Things that decreased the rate, like mailing to Fortune 500 companies, were deemed failures and discarded. We chose the activation rate as our primary metric because it was the best indicator of whether a prospective customer was interested in convenient financing or not. It was inherent to our strategy. For example, Fortune 500 companies didn’t need convenient financing (access to capital for them was easy), while for small printing companies getting a loan was hard, took a lot of work (putting together financial packages for their bank), and ate into valuable work time. For them, convenience was an important problem to solve.
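To make that test-and-measure loop concrete, here is a minimal sketch in Python of how the comparison might work. The campaign names, mailing counts, and the 5 percent baseline are invented for illustration, not actual Preferred Capital figures:

```python
# Hypothetical test cells from a faux-credit-card mailing.
# Each cell is (pieces mailed, activations); the numbers are illustrative only.
test_cells = {
    "printing companies":    (20_000, 1_400),  # 7.0% activation
    "Fortune 500 companies": (20_000, 300),    # 1.5% activation
    "new envelope design":   (20_000, 1_100),  # 5.5% activation
}

BASELINE_ACTIVATION_RATE = 0.05  # assumed overall baseline for comparison

def activation_rate(mailed: int, activated: int) -> float:
    """Primary metric for the mailing: activations per piece mailed."""
    return activated / mailed

for cell, (mailed, activated) in test_cells.items():
    rate = activation_rate(mailed, activated)
    verdict = "keep" if rate > BASELINE_ACTIVATION_RATE else "discard"
    print(f"{cell}: {rate:.1%} -> {verdict}")
```

The mechanics don’t matter; what matters is that every tactic gets judged by the same primary metric.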
Our secondary metrics were application rates and closing ratios. These metrics cascaded from the primary metric. Let me explain. The application rate was the percentage of activated customers who contacted us when they needed equipment. We devised a whole series of tactics to drive this metric, such as follow-up phone calls, e-mail messages, and a zero-balance bill (we sent a monthly statement indicating how much credit they had available even though they owed nothing). The closing ratio cascaded from this metric; it was the percentage of deals we finalized from the customers that contacted us. Our tactics included things such as sales training and creative payment plans. These other metrics were extremely important in determining if our tactics were working.
The primary metric, as I said earlier, trumps the secondary ones. In other words, if we devise a tactic that improves a secondary metric but decreases the primary metric, we don’t consider it effective. For example, at Preferred Capital we could have improved our closing ratio by requiring financial statements from our customers the way a bank does (this would have helped us give certain customers a lower rate because we knew more about them), but it would have been counter to our “convenient” strategy and would have eroded our primary metric even though it would have improved a secondary one. It would be like Wal-Mart raising the prices of certain items in order to meet a profitability metric; it would erode the primary metric.
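As a rough sketch of that decision rule (in Python, with invented metric names and numbers), a tactic passes only if it doesn’t erode the primary metric, no matter what it does for a secondary one:

```python
from dataclasses import dataclass

@dataclass
class TacticResult:
    """Measured or projected effect of a tactic; values here are illustrative."""
    name: str
    primary_change: float    # change in the primary metric (e.g., activation rate)
    secondary_change: float  # change in a secondary metric (e.g., closing ratio)

def adopt(tactic: TacticResult) -> bool:
    """The primary metric trumps the secondary ones:
    never accept a tactic that erodes the primary metric."""
    if tactic.primary_change < 0:
        return False
    return tactic.primary_change > 0 or tactic.secondary_change > 0

# Requiring financial statements: better closing ratio, worse activation rate.
print(adopt(TacticResult("require financials", primary_change=-0.01, secondary_change=+0.05)))  # False
# Zero-balance bill: no harm to activation, better application rate.
print(adopt(TacticResult("zero-balance bill", primary_change=0.0, secondary_change=+0.03)))      # True
```

The point of the sketch is the ordering of the checks: the primary metric gets veto power before any secondary gain is even considered.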
In the process of monitoring and adapting your business model, your metrics are your benchmarks for making adjustments. Some business models are easier to measure and control, like online models, while others are more difficult, like creating a low-earth-orbit launch vehicle. Either way, though, the key is to develop tactics to improve your metrics, to move the dial and become more effective. By choosing leading indicators as your metrics, you get an early heads-up as to how the business is playing out and can determine whether you’re going in the right direction or not. Thomas Watson used his market-share metric to sound the alarm and make critical adjustments before it was too late. You can’t do this, though, unless you clearly and accurately define your metrics from the beginning and create a system for tracking them.
Creating the Dashboard
Running an organization is like commanding a ship. You need a dashboard to help you make decisions. On a complex vessel like an aircraft carrier, the captain has a set of instruments to help him navigate, called the dashboard. It includes radar, a compass, a depth finder, and a speedometer. His map is his strategy, his destination is his goal, and his dashboard is the mechanism he uses to make sure he reaches it. Without it, he’s nautically blind, steering his ship with little or no understanding of where he is or where he’s going. In business, dashboards give us instant feedback and tell us where we are and where we are going. They, too, will alert us if we’re headed into shallow water and allow us to make adjustments.
A business dashboard is easy to build. It’s merely a formal way for us to track the series of cascading metrics we’ve defined. At Preferred Capital, our dashboard looked like this:
Metric | Value
Prospects | 1,000,000
Activation Rate | 5%
Activators | 50,000
Application Rate | 20%
Live Deals | 10,000
Closing Ratio | 25%
Deals | 2,500
This dashboard is a mathematical method of assessing the success of our business model. It clearly identifies our cascading objectives and gives us insight into the effectiveness of our tactics. For Preferred Capital, the prospects were the total number of potential customers who might be interested in convenient financing. We constructed a direct-mail program offering them a “lease line of credit” and asking them to activate their account so they could use it when they needed new equipment. The activation rate was the percentage of prospects that activated their account. We developed a number of tactics designed to drive this metric, such as a faux credit card, a targeted prospect list (mailing to the right people), and clever marketing pieces. Since we made money only when an activator used its line of credit, the next metric was the application rate, the percentage of activators who were in the market for equipment and thus ready to finance. We developed a number of tactics to drive this metric, like the follow-up phone calls and the zero-balance bill (reminding people that they had the open line of credit). This created our pool of live deals. The next metric was the closing ratio. Once we processed the application and approved the customer for credit, we needed to close the deal. As with the other metrics, we developed a number of tactics to drive it, most of them sales-training efforts, like developing closing tactics for different customer types, recording sales conversations, and coaching sales reps through the process, along with different types of financing options, like a step lease (with low payments up front that increased over time).
By setting up a dashboard to keep track of your metrics, you can closely monitor the effectiveness of your tactics. At Preferred Capital, if our activation rate dipped below 3 percent, it sounded an alarm, and we would decide what course correction we had to make.
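As a rough illustration of how the whole cascade hangs together, the sketch below (in Python, using the dashboard figures above and the 3 percent alarm threshold; the variable names are mine) computes each metric from the one above it and raises the alert when the primary metric slips:

```python
# Cascading dashboard sketch using the figures from the table above.
prospects = 1_000_000
activation_rate = 0.05   # primary metric
application_rate = 0.20  # share of activators who apply for financing
closing_ratio = 0.25     # share of live deals that close

activators = prospects * activation_rate    # 50,000
live_deals = activators * application_rate  # 10,000
deals = live_deals * closing_ratio          # 2,500

ALARM_THRESHOLD = 0.03  # the 3 percent trigger mentioned above

print(f"Activators:   {activators:,.0f}")
print(f"Live deals:   {live_deals:,.0f}")
print(f"Closed deals: {deals:,.0f}")
if activation_rate < ALARM_THRESHOLD:
    print("Alarm: activation rate below 3 percent; course correction needed.")
```

Written this way, the “cascading” idea becomes literal: move the activation rate and every number below it moves with it.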
Every business should have a deep understanding of its metrics and a dashboard for monitoring them. With this worked out, it’s time to go ahead and actually write out a plan.