A platform is where people come together to collaborate. If you have no platform, there can be no collaboration.
What do we mean by a platform? Most people associate the term platform with technology. The Microsoft Windows operating system, for example, is a well-known platform for integrating your computer programs.
But our concept of platform is much bigger than that. The military thinks of its planes and ships as combat platforms, for example. Citibank might think of its branches as platforms where people manage their money. A hospital’s surgical rooms are high-performance platforms—in fact, the entire hospital is a platform where doctors and sick people easily find each other and engage.
While platforms are about more than technology, increasingly today collaboration takes place on digital platforms. Google is a platform that links advertisers with searchers; Twitter is a platform where people exchange brief, anywhere-anytime messages.
All platforms are clearings, spaces that make it easy for folks to find each other, engage, and, if all goes well, collaborate to achieve something together they can’t accomplish alone. A platform should be the WD-40—the lubricating oil—of collaboration. Some platforms are exclusive by design; others are intended to attract new partners, growing the network and increasing the power of the collaboration.
A platform, ultimately, is where you meet up to work together on solving right-sized problems to achieve a blue-sky vision.
Signing on as secretary of education for the municipality of Rio de Janeiro, Claudia Costin inherited a platform for the schooling of children that was broken: inefficient, ineffective, and tragically inequitable.
A longtime public servant, World Bank official, and policy wonk, Costin was shocked upon entering office in 2008 to discover how badly the system was serving its one million elementary, middle school, and high school students. Forty percent tested below grade level in math; twenty-eight thousand students from the fourth, fifth, and sixth grades were completely illiterate. “Social promotion”—passing students on to the next grade no matter how ill prepared—was rampant. Even so, many students were much older than grade level. Having started school late, they sat in classrooms with children two and three years younger.
Unequipped to pursue or even imagine a better future, children were easy to exploit. “It is my strong belief that it’s not acceptable that a ten-year-old kid will be working for narco-traffickers,” Costin told a gathering of industry and government executives. “Twelve-year-old girls are being prostituted because they want to have kids from those guys because they will be respected if they are mothers. So, I want to change the lives of those kids.”
These were complex problems, but Costin held to a simple vision: “You have to have the teachers with you. You have to build trust in the teachers. Otherwise, you won’t change education.”
Costin breathed life into platforms for communication and engagement that have encouraged teachers to collaborate with her on school reform efforts. While her efforts were anything but haphazard, she discovered one of the most powerful aspects of her platform by accident.
“Let me be frank. I entered Twitter because I have five kids,” Costin said. “One of them was in Italy. And I wanted to connect. One of the kids—they are my teachers in technology—said, ‘Let’s write what you’re doing.’ And then I noticed that teachers were following.
“I said, ‘Why are they following? Why do they want to know what I’m saying to my daughter in Italy?’ Very quickly I discovered that Twitter is the key for success in mobilizing teachers.”
In today’s digital world, serendipity can be a potent force for finding and making new “friends” fast. But converting “friends” and “followers” into collaborators takes work. Since stumbling onto the power of Twitter, Costin has zealously taken advantage of it. She spends two hours each day communicating with her more than sixteen thousand followers, many of them teachers. Those exchanges are the cornerstone of a complex platform for collaboration that also includes e-mail, a private online channel called “Fala, Professor!” (“Speak, Teacher!”), and intensive in-person engagement.
Rio de Janeiro’s teachers had for a long time felt abandoned. First the middle class fled Rio’s schools, leaving only the children of the desperately poor behind. Then the state let the schools physically crumble—plumbing, electrical systems, and air-conditioning were in disrepair. Some teachers, frustrated and dejected, developed chronic absenteeism; administrators turned on many teachers for the sins of a few. Finally, civil authorities fled narco-trafficker-controlled areas until the schools were the state’s last outposts standing—surrounded by gunfire and armed gangs, so that teachers and students had to cross a traverse of filth and danger simply to reach the school grounds.
The municipality of Rio de Janeiro was failing its schools, and as a platform for education, the schools were failing Rio’s children. Costin sought a clearing—a safe zone—where teachers could find her, catch their breath, and together begin the arduous process of healing themselves, recommitting to their profession, and restoring Rio’s schools.
Twitter helped her engage lightly, broadcasting answers to all. The “Fala, Professor!” platform was where the more serious work of online collaboration took place. To get everyone moving quickly in the same direction, she right-sized the problem. “We decided to first offer a task to start with, ‘What should the teachers teach, the kids learn?’ ” Costin said.
“Here is the current existing guideline for curriculum,” she offered. “Propose your changes.” Using the “Fala, Professor!” platform, teachers did exactly that. It was a much-needed icebreaker. “They didn’t trust us,” Costin said. “In the beginning my Twitter had some huge offenses. I kept on responding nicely in the sense that, as a leader, you too are a teacher.”
As Harry Spence did with his social workers, Costin was teaching teachers how to teach by treating them as she wished them to treat all others—especially their students.
Within six months, Costin had in hand a standardized curriculum for each grade and for each discipline, courtesy of the collaboration over “Fala, Professor!,” Twitter, and other channels. From there, Costin right-sized again, asking, “How do we effectively use this curriculum to improve learning?”
She hoped to ensure that every teacher had a laptop—that would make collaborating on the platform easy and uniform for all thirty-eight thousand teachers. She would also ensure there was one laptop for every three children, and perhaps that every classroom would have a projector. There would be a portal with blogs for teachers to share insights and best practices, and stronger tools for curriculum design and development, including a new Wiki-based platform—Educapedia—incorporating video, best practices, and digital classroom materials. With Ministry of Education funding, Costin selected ninety teachers to develop platform content and agreed that it would be shared throughout Brazil.
The Rio schools’ collaboration platform energized supporters and brought in new ones. Telecoms provided connectivity, the British consulate supported free English classes, firms contributed educational videos. The network effect was working. As the platform attracted more and more supporters, the collaboration became even stronger.
Meanwhile, Costin also attended to the little stuff. Teachers tweeted school events and conditions directly to her, without having to go through hierarchical channels. She now had a direct link to everything from complaints about bathrooms that didn’t work, crumbling stairwells, and impossibly littered playgrounds to the roar of anguish when, in October 2010, a child of eleven was shot sitting in a classroom, killed by narco-gang cross fire.
Costin used the platform to track and reveal results—were children learning what they should be? She started rewarding outstanding performance, giving teachers long-overdue recognition.
“We give visibility to those unknown heroes,” Costin said. The same was true for kids; every month, each of the twenty students making the greatest gains received a bicycle. Their teachers also received rewards. Schools that met goals for improvement gave bonuses to each teacher and administrator.
The collaborations have been paying off. Student performance has improved across the younger grades, though by 2010 not yet in the older ones. Teachers are more engaged. And as we write this, eighteen months into Costin’s tenure, the teachers haven’t staged a single strike.
Costin recalled her boss, Mayor Eduardo Paes, taking her aside one afternoon and telling her, “I don’t know what’s going on. I’m in the streets and people are saying, ‘Wow! Thank you for the way you’re changing the schools.’ ”
“Sometimes,” Costin said, “we have the illusion that we in social policy are changing things. But a teacher, once he closes the door of his classroom, he does what he wants because he is alone with his classroom. So, either we succeed in convincing them that learning matters—that we can do a wonderful job changing the lives of those kids—or we will be selling illusions to the citizens.”
“I will continue using Twitter every day,” Costin said. “It is the only way to know if what we are proposing is really happening or not, and if there are problems on the street and in the schools that have to be addressed.”
The work isn’t finished. But having built a platform—a clearing where formerly disenfranchised teachers become valued collaborators—Costin has the entire system moving toward that blue-sky vision of schools with the power to change children’s lives.
The success of a platform, and therefore the collaboration it supports, begins with getting the architecture right. Harvard Business School’s Thomas Eisenmann has written extensively about platforms; each, he says, has infrastructure that enables people to collaborate and a set of rules that governs their collaborations.
Digital platforms make the “search-find-and-engage” precursors of collaboration easier than ever. If everything else is aligned, they speed collaboration. JetBlue was one of the earliest firms to monitor Twitter, for example, jumping all over negative tweets to fix what was wrong for dissatisfied passengers even as they still stood fuming on line. Senior executives in industry and government have internal company blogs where they communicate new directions and take comments from employees—right over the heads of managers. Former vice chairman of the US Joint Chiefs of Staff James Cartwright, a Marine Corps four-star general, was famous for his blog. Many cities use digital platforms to let constituents weigh in on budget priorities. Should we add streetlights, pave the street, or (as hundreds preferred in Seattle) add a nude beach?
Claudia Costin stumbled on Twitter and found it a pretty good way to start. It doesn’t always work that way. Sometimes, you can’t just stumble on first steps. The world demands change—collaboration to do together now what no one can achieve alone. That may mean you have to forge a new platform that quickly bridges the gulf between groups where someone has the iron, someone the carbon, someone the smelter—and they all need to come together to make steel. When you stitch together that archipelago, each island has its own rules, ruler, and infrastructure. Who’s in charge of the overall platform? Trying to agree on the infrastructure, rules, and governance can create conflict and confusion. It can take strong leadership to resolve the issues.
In the aftermath of 9/11, the city of Los Angeles felt especially vulnerable to attack. Its seaport, its airport, its vast entertainment industry, and its large Jewish population all put a big red bull’s-eye on the city.
The region’s law enforcement agencies raced to spot terrorist plots. Every whisper of a plot was investigated, every lead pursued. Already there were huge new expenses, massive increases in security, and daily distractions at even the hint of a threat from al-Qaeda.
Under all the commotion, though, little had changed in how the FBI, the LA County Sheriff, and the LAPD dealt with the threat of terrorism. The habits and culture that over the years resisted collaboration and information sharing were as strong as ever.
“Everybody was in their own little basket,” Gary Williams, head of the LAPD’s Antiterrorism Division, said. “They weren’t talking to one another. I include us in that. LAPD wasn’t taking in information from our own department.”
It was as if the LAPD, FBI, sheriff, and dozens of local chiefs in the LA region were each working multiple jigsaw puzzles at once. None knew how many puzzles, or what any one overall picture looked like. Yet the leads kept pouring in. Workloads were staggering. And there was no cushion, no room for error.
The agencies badly needed a clearing where everyone could find each other and exchange insights—a platform that took the friction out of information sharing. Without that shift to a new platform, information sharing would be left to the same “who ya know” basis that led agencies to miss the warning signs of 9/11 in the first place. In the press of doing business, data, information, and leads could easily get stuck at the boundaries of units and agencies—and suspects fall through the cracks.
There were plenty of joint task forces around. The LA County Sheriff had organized the region’s “first responders” into its Terrorist Early Warning Group. The LAPD’s Terrorist Threat Assessment Center focused on threats and analyses. The FBI had established a Field Intelligence Group in each of its fifty-six field offices, including in LA. So LA didn’t lack for platforms.
But each agency had its own limited view of the problem. What the LA region needed was a regional platform of platforms—a clearing where anyone with pieces of the puzzle could spread them on the table and see how they fit together, and how others’ pieces fit.
Gary Williams envisioned an intelligence center bringing together data and staff from around the region, the nation, and the world to make sense of it all for LA’s decision makers. And not just data vetted by law enforcement agencies, but intelligence gathered from building inspectors, fire department investigators, and cops on the beat who noticed suspicious activities. The center would also collect information from the owners of potential targets like the port, power plants, gas lines, and financial institutions.
Right after 9/11, Williams and some of his colleagues from the sheriff’s department traveled widely through the region and the nation, seeking support and resources to build this center. They gained some attention and a few commitments, but not enough. By the end of 2002, there was little money or wherewithal to do more than the ad hoc information sharing that left threat detection to chance, and LA’s citizens vulnerable to a 9/11-style attack.
When the LAPD needed it most, Williams’s plea to pitch in and collaborate was met with empty hands and deaf ears. Was Williams paying the price for decades of LAPD go-it-aloneness? Perhaps. Tip O’Neill, the great Speaker of the US House of Representatives in the 1980s, once said, “Make friends before you need them.” The LAPD had not been paying attention.
Its culture could be summed up in one word: isolation.
Williams’s boss, John Miller, was a former investigative reporter from New York and in the CompStat era the NYPD’s deputy commissioner for public information. Upon his arrival at the LAPD in 2003, Miller quickly came to understand LA’s fractured law enforcement landscape; it reminded him of home. When Williams proposed a Joint Regional Information Center, Miller’s first instinct was to let his New York stripes show. Suspicious as ever, Miller asked, “Why do we want that?”
Williams argued to right-size the problem by making it bigger and more expansive—go beyond terrorism to include ordinary crime. That’s where terrorism might show its face first, and the LAPD had the eyes and ears on the street to see it. The Madrid railway bombers, for example, had financed their attacks with local hashish dealing, counterfeit CDs, and credit card fraud.
“All terrorism is at some point local,” Williams said. “The formation of attack is always local, so eyes and ears on the street are critical. The response is always local because the target is cities.” That argument won Miller’s support.
To make the center effective, they needed reports from the county’s sheriff, chiefs of police, and LAPD patrol officers. They also needed the FBI’s classified system. The FBI saw things globally that might give new meaning to the LAPD’s local data. Information collected locally might give new meaning to the FBI’s global data. Together, if shared, the information from all platforms could create a much-needed common operating picture of threats to the region.
Even if the need for a regional platform for collaboration had become obvious, that didn’t make financing the platform, setting its infrastructure, and making its rules easy.
Miller and Williams, who had secured $1 million in funding for the center—good enough for some early work, but not enough to sustain it past a start-up—considered locating at the Los Angeles County Regional Criminal Intelligence Center (“LA Clear”), a “deconfliction” facility run by the county chiefs in partnership with the sheriff. LA Clear tracked the region’s narcotics cases to avoid “blue on blue” incidents, where cops might inadvertently come upon each other’s cases. The LA Clear facility had computers and a room to operate multiple wiretaps and surveillances—infrastructure the Joint Regional Intelligence Center, or JRIC, could use.
“They had a high interest in getting us in their space, seeing our million and perhaps hoping to get into the counter-terror funding stream that was just opening as counter-narcotics was drying up,” Miller recalled. “We had a high interest in their space. It was high quality and free.”
The FBI offered to contribute some analysts to JRIC if it went into LA Clear. But when Miller asked the bureau, “And what about FBI systems?” the answer came back, “No.” The bureau didn’t consider LA Clear secure enough to house top-secret information.
Without FBI systems bringing in classified data and intelligence from overseas, Williams and Miller were convinced, the JRIC effort would be pointless. “A Mickey Mouse Club,” Williams cautioned. “Secret decoder rings but no access to the tree house.”
Ron Iden, the FBI’s LA bureau chief, was willing to work with Miller and Williams. But if JRIC wanted to use FBI systems, the FBI would have to lead or at least house JRIC in an FBI space. There was room adjacent to the FBI’s Joint Drug Intelligence Group facility just southeast of LA. With FBI systems, the LAPD staff could get clearances, clean up the data, and boil it down for local law enforcement. Moreover, the FBI would add $2.5 million to help fund the center.
Williams urged caution. “The FBI meant well but didn’t quite have the same vision,” he told Miller. “They’re very headquarters-centric, and they march to the headquarters beat. At the time,” Williams said, “headquarters did not have a clue about what was needed from state and local governments in terms of dealing with terrorism.”
Ultimately, Miller acquiesced to the FBI’s demands, figuring that he could negotiate a shared governance arrangement to keep the bureau in check.
LA’s Joint Regional Intelligence Center would be made secure to FBI specifications and use FBI systems. The agencies would share governance, with each partner contributing staff and finances. With the FBI and LAPD involved, Miller had no doubt that others would soon join the platform and add their data.
The arrangement was acceptable to the LAPD, but it infuriated the LA Clear chiefs and the county sheriff, who felt snubbed and frozen out.
“Bratton worked overtime to persuade them that a federal link to JRIC was essential,” Miller said. “It was his political savvy, his investment, and his contributions to their Association that made this a wrinkle instead of a disaster.”
The “new” LAPD’s efforts to “make friends before you need them” seemed to be paying off. As negotiations and clarification took hold, conflict and confusion receded. LA JRIC moved from the Lead quadrant of high conflict and confusion to the Just Do It! quadrant of “good to go.”
Now on a roll, LA JRIC found financing easier and easier. In addition to the FBI’s $2.5 million, the Sheriff’s Department ponied up $450,000, $1.5 million came in from the state of California, and the LAPD kicked in its $1 million.
But who was going to run what? That’s when the serious wrangling began. What were the details of governance? What records would be kept? How long would they be retained? How would the names of sources from one agency be safeguarded as the information flowed to others? Who would have access to whose system, and which data?
Agencies that were used to collaborating with each other only on an ad hoc, “who ya know” basis now had to make collaboration formal. Agencies exchanged draft agreements, with each agency or department using a different color font for its corrections, turning the document into a Christmas tree of multihued inks.
“Iteration by iteration the color started to disappear in the drafts,” Miller said. “A little bit at a time, as we reached agreement after agreement, the document came back to being black and white.”
The partners finally agreed that the FBI would act as administrator of JRIC, and that governance would be shared but rotated among agency leads. They created both classified and unclassified areas—“high” and “low” sides—where law enforcement and intelligence could operate alone, or together with fire, health, and other agencies, as needed.
The Los Angeles Joint Regional Intelligence Center finally opened July 27, 2006. Even after it commenced operations, questions remained about how to scale it up, how to sustain it, how it would be staffed, and what the next technological steps should be.
“What we did accomplish,” Miller said, “was to get all the systems running into one place—narcotics, organized crime, street crime, terrorism—and we got the people sitting next to each other.”
With that clearing operational, the LA JRIC platform started pulling in new network partners.
Las Vegas was among the first, Miller recalled. “They said, ‘Our criminals are your criminals. Our gangs are your gangs. They just go back and forth. Why,’ they asked, ‘should we set up a fusion center in Vegas when JRIC has the same crooks, same cases, and same information? We’ll add some, take some.’ ”
Having established LA JRIC, the LAPD, the FBI, and their partners became the stewards of the platform. It was up to them to ensure that it ran smoothly and operated as agreed; that people could search for, find, and engage each other and share information; and that it produced the promised results that drew the network partners to it in the first place.
That’s the nature of a platform. You can’t just build it, forget it, and assume it will always be there for you. A platform needs constant tending. Architecture that once made sense can become outdated, and your platform obsolete. Securing data, for example, is an investment that’s never finished. Something can come along—a heightened challenge, a stronger competitor, a more capable adversary—to reveal weaknesses the platform had kept hidden.
It’s not likely that LA JRIC, of course, will have to fend off competitors. After all, the FBI and the LAPD have monopolies on the information they gather, whether classified and from overseas or confidential and local. One day, though, one or more of the partners could walk away from LA JRIC, or just stop paying attention to it.
As a platform steward you will need to tend to its infrastructure and rules, and ensure that it does its job: to create a trusted clearing where people can search for, find, and meet like-minded souls, share assets, and collaborate to achieve together what they can’t alone.
Without vigilance and leadership, you may find the sands under your feet shifting until you no longer have a platform to stand on.
Back in the mid-1960s, while rummaging through the bank’s trash bins looking for a missing transaction slip, Dee Ward Hock wondered whether he could stoop any lower. After sixteen years in the business, he was now a trainee again, this time at Seattle’s National Bank of Commerce (NBC). Hock had been dispatched to a branch to learn the teller’s business. Hired at NBC as a promising young executive with a track record in consumer finance, he had been treading water ever since.
Two hours and eleven trash bins into his search, the missing slip he needed to balance the day’s books was nowhere to be found. Hock had gone nose to nose with the head teller and all he could think was “I’m outta here.”
Summoned to the bank president’s office the next day, Hock expected the chat to be brief and his employment to be over. He was wrong.
The president told him NBC wanted a new bank card operation up and running in ninety days. Bank of America (“BofA”) had licensed its BankAmericard product to NBC and dozens of other out-of-state banks. BofA’s plan: break out of its home state beachhead in California, where regulators had it pinned down and where BofA had already supersaturated the market with two million unsolicited blue, white, and gold BankAmericards. BofA looked to build a national market using partner banks like Seattle’s National Bank of Commerce.
Hock’s ninety-day mission: qualify 120,000 NBC customers, get a BankAmericard with their name embossed on it in the mail, and get them spending. His partner was to be Bob Cumming, a top bank branch manager.
Hock could not have known then that he was about to create a platform for electronic payments that would prove irresistible to billions of consumers and businesses worldwide—or that it would spark one of the great collaborations in the history of modern finance.
Pause for a moment to reflect. In 1965, there was no such thing as MasterCard or Visa. There were plenty of specialized cards around: American Express for travel and entertainment, Diners Club and Carte Blanche for restaurants, Sunoco or Esso for gas. There were regional bank cards, like MasterCharge in the eastern United States, and BankAmericard in the West. But those cards were tethered to their home base. BankAmericard was for Californians to use in California, for example, not for trips to Aspen, New York, or Vegas, let alone for jetting to Paris, dining in Rome, or getting massages in Tokyo—all of which consumers were beginning to do.
There was no all-purpose charge-anything-anywhere-anytime bank card. Institutions like Bank of America noticed that. They also realized that unlike Sunoco or American Express or Diners Club, banks didn’t have to go in search of consumers or merchants—they already had millions of them as account holders. If Bank of America could figure out how to put a piece of plastic in a customer’s wallet and guarantee payment to a merchant when he accepted the card as payment—even when one bank owned the merchant account and another owned the customer account—there might be millions of dollars in fees to be had on both sides of the transaction.
Actually, it turned out to be trillions of dollars.
Fast-forward to the year 2009: 1.7 billion Visa-branded cards were in use globally; Visa handled 57 billion transactions, valued at $4.3 trillion.
Talk about a platform.
Visa is what BankAmericard became, after Dee Hock got his hands on it.
To Dee Hock and Bob Cumming, BofA’s so-called “strategy” for new card partners, including National Bank of Commerce, sounded desperate, like a Hail Mary pass by the quarterback on the losing end of a football game. Send millions of cards out. Sign up every merchant in sight. The rest will take care of itself.
Except this was no end-of-the-game miracle play. It was BofA’s opening play in taking its card national. It quickly became obvious that the BofA licensing crew knew little about consumer lending or card operations.
But there was no turning back. BofA and NBC had a forty-year business relationship and the chairmen were pals. Hock and Cumming realized they would have to home-grow a fix at the level of day-to-day bank operations. They’d have to scope vision, right-size, and deliver. And they had seventy days to do it.
First things first: NBC ordered the gear it would need no matter what Hock and Cumming came up with—embossers, thousands of imprinters, two hundred thousand plastic cards. Hock and Cumming knew the price of failure: someone would melt all that plastic down for guitar picks. Hock and Cumming would be strumming tunes for spare change on a San Francisco street corner.
Hock and Cumming baked some bankers’ common sense into NBC’s upcoming rollout. In their plan, NBC’s frontline lending officers would prequalify their branch’s customers before the cards went out. None of the managers liked these new card products much. Unsecured credit? “Worse than financing cars,” they grumbled. Stand by for fraud and losses.
But like Hock, the managers all held a dozen niche credit cards in their own wallets—gas, merchandise, airlines, entertainment. They understood the vision: a single bank card would replace them all. The bank would claim all those fees. With that came the much-needed frontline support. Thirty days later, NBC had sent 120,000 prequalified customers invitations; 100,000 accepted. Hock’s team embossed and mailed a card for each. One hundred thousand prequalified NBC bank card customers were now armed with shiny new blue-and-gold plastic, ready to spend. All in ninety days.
Meanwhile, Bank of America continued to push the Hail Mary play, with predictable consequences. By 1968, other licensees had sent out millions of cards, often without prequalification or other safeguards. As a result, fraud was rampant, hitting banks with losses and dragging down consumer confidence. Banks’ reputations were being sullied (yes, there was a time when that was still possible!). Congress was calling for hearings.
Bank of America turned out to be a poor steward of its platform. It branded the card—and that was about it. It was as if it owned a campground with a scenic overlook, but never emptied the garbage or groomed the trails. It just did some advertising and went around collecting fees.
Problems started at the point of sale, where merchants had to phone in authorizations by voice while a customer waited. This was the era before the advent of magnetic stripes and swipe machines, or even touch-tone phones. Voice authorizations were painfully slow—five to twenty-five minutes on average. Customers waiting to sign the slips balked and left or opted to pay cash instead. Rather than lose sales, merchants completed purchases without authorization, with predictable consequences for fraud.
In back-office operations, payments were backlogged and error-ridden. With no standard way for member banks to clear payments, every bank was on its own to square away its transactions with every other bank. Interchange fees—the “vig” that merchants’ banks paid customers’ banks for the convenience of accepting the card—varied widely. Each bank and merchant seemed to be making it up as it went along.
Glitch after glitch slowed payments between banks; customers weren’t billed; banks couldn’t balance their books at night; millions of dollars were being held in suspense ledgers. It became an accounting nightmare.
By 1968, BankAmericard’s licensee banks were in revolt. Bank of America Service Corporation (BASC)—the corporation that BofA had formed to front its card operation—called a meeting of its restive licensees for October, in Columbus, Ohio.
Senior Bank of America executives didn’t even bother to show up. Instead, BofA sent a pair of mid-level managers, adding insult to the licensees’ grievances.
Dee Hock, who by then was running NBC’s bank card business, attended the meeting. Accusations flew: licensees took aim at each other and BASC. Hock was about to give up when he drew a committee assignment: recommend some fixes. Hock was reluctant. The problems were huge; incremental fixes wouldn’t work. “How about we step back,” he told the others on the committee. “Instead of BASC convening us small banks, we will convene us and take a look at what needs fixing top to bottom.”
In other words, they shouldn’t count on the organization responsible for this hopelessly flawed platform to build a better one.
BASC agreed. It would be off the hook; Dee Hock would be on it to fix the platform. Why not? the other licensees figured. The worst that could happen is that BofA would ignore their recommendations and the licensees would be back where they started.
On the flight home Hock carved up a map of the United States torn from an airline magazine into seven regions. Each region, he decided, would convene four committees organized around operations, marketing, credit, and systems. The committee leaders would make up the region’s executive team and sit on national committees.
A band of equals. None was more equal than another. In essence, he’d form a collaboration platform to work on building a new platform.
The Visa card as we know it today was born on the torn-out page of a seat-back pocket airline magazine.
In Hock’s call to look not just at bits and pieces of the problem but at its entirety, you may hear echoes of Harry Spence from our discussion of vision in chapter 2. Like the platform Spence inherited in 2001, this platform, too, was not delivering on its promise. The potential of the network to create value for all was barely touched. The current vision had no blue skies for anyone but Bank of America (and those skies were by now cloudy, too). The steward could not be trusted to reinvent the future. It was time to get back to basics—to what licensees like Hock had signed up for in the first place.
Dee Hock started by taking a hard look at the existing platform. He’d need to listen to the data.
Region after region, Hock and his committee discovered the true extent of the licensees’ problems. Fraud losses were in the hundreds of millions of dollars and accelerating (in 1968 dollars—real money!). There would be no quick fix.
As long as there was no quick fix, Hock argued, let’s fix this right. What assets are we actually dealing with here? Let’s deconstruct and re-vision this thing: how do we use our assets to greatest advantage?
A bank, all knew, was made up of brick-and-mortar branches, teller windows, deposits, and currency. Banks brought them all together; customers experienced them as trust and security. A great asset.
The card was a halfway physical thing, a piece of plastic in your pocket from which a merchant read numbers into a telephone. The merchant’s voice linked the brick-and-mortar bank by telephone to the point of sale.
Dee Hock’s vision began to take shape. That voice authorizing the card transaction was made up of nothing but electronic impulses. If you could get rid of the voice and just move the impulses machine to machine, with a transaction started by a card at point of sale and completed by computers in the back-office operation, you could buy and sell stuff 24/7, around the globe, all in seconds. You didn’t even need a card physically present. You could shop from a catalog in the comfort of your home!
Why, for that matter, did you even need a brick-and-mortar bank? Answer: you didn’t. A bank was in the business of the exchange of monetary value. Start there, and invent the best organization to accomplish that exchange. But Hock, like Spence and many others, was seeing a vision of blue skies when reality was anchored to earth, in old-time systems and processes.
Hock needed many eyeballs and brains on this problem. He retreated with his committee for a week’s getaway at a hotel in Sausalito. “Let’s go back to first principles and build from there,” Hock urged. He gave them a blue-sky challenge and a one-sentence problem statement: “If anything imaginable was possible,” he said, “if there were no constraints whatever, what would be the nature of an ideal organization to create the world’s premier system for the exchange of value?”
His thinking: just as you could reduce money to electronic impulses, you could deconstruct an organization to its minimum principles. From there, you could reinvent the minimum platform necessary to provide for the anywhere/anytime transmission of electronic impulses and the exchange of value.
In fact no one, Hock later wrote, could imagine the complexity of this organization. The rate of change coming in the new digital world would be so fast and furious that nothing anyone could plan today would make sense tomorrow. It was beyond the “power of reason” to design even if all the variables could be known—which they couldn’t.
The ideal organization would therefore be one that thrived on emergence, on continual change, not top-down “my way or the highway” thinking. A hierarchical organization like a bank would simply re-create itself. It would take a network to invent a network capable of continuous change.
What principles should govern this network? Simplicity itself: the Golden Rule. The organization should be equitably owned by all participants. Rights and obligations should be tiered but uniform within tiers. It should be open to all qualified participants. No cartels, blackballs, or club rules. Revenues should flow to the owners, with the organization retaining only as much as it needed to continue growing. Governance should be of the governed, by the governed, for the governed.
Coming back down to earth, Hock had to deal with the eight-hundred-pound gorilla in the room: the Bank of America. Was there any way BofA would cede control to a band of upstart licensees?
Bank of America, Hock later told its vice chairman in a face-to-face session in San Francisco, “should be the leader of a movement, not the commander of a structure.” Bank of America, Hock argued, would benefit as the size of the pie grew; its share would grow beyond anything it could achieve if it ruled this world alone.
The vice chairman bought it. Perhaps he saw MasterCard, Bank of America’s credit card competition, in his rearview mirror; maybe he saw a well-organized band of banks attempting to do something together without Bank of America. In any event, Dee Hock’s argument carried the day.
Next, the member banks would have to make the change as well. Hock needed two-thirds of the licensees to vote yes. He got 100 percent. And with that, he moved quickly to establish National BankAmericard Incorporated (NBI).
Hock brought on a small senior staff—the best in the business in bank operations, technology, and law. Aram Tootelian, for example, came over from TRW to work with Hock as technical lead on the new platform. Tootelian was an expert in bank technology, airline reservation systems, and credit-checking platforms. As a TRW guy, he understood that the bank card platform required high reliability—not far from the fail-safe methods used in top-secret military projects.
Hock’s team first addressed the problem members felt most pained by: interchange fees. By making them transparent and consistent, Hock quieted a persistent grievance, gave his team credibility, and increased the licensees’ conviction that this could work.
Setting the interchange fees accurately came at a cost: member banks had to report their key performance indicators to NBI each quarter so NBI could fix the rates. Hock converted that pain into gain: those numbers helped NBI understand the members’ card business as never before, and proved that banks were beginning to profit from their card operations. That built confidence. It also showed Hock which banks were stumbling, and let him send in fix-it teams to pull up those struggling operations. There would be no weak links on this platform.
Next up: authorization delays. Hock couldn’t take it all on at once, so he right-sized the problem down to point-of-sale authorization. Hock didn’t like any of the commercially available options, so he turned to his technology team. They were to design a credit card authorization system for members that not only reduced wait times from minutes to seconds but was fail-safe and infinitely scalable.
On April 1, 1973, the team delivered: a real-time system and switch for authorization requests coming directly from large merchants’ cash registers, as well as from card centers handling the smaller merchants. NBI required each bank to build its own interface to the switch so that all the member systems, diverse as they were, “looked” the same to the switch and data could flow easily.
The platform reduced point-of-sale authorization time to under sixty seconds. It was reliable, with safeguards against everything from power failure to people failure. It was able to grow as the business grew. Hock accomplished all this while letting the local card centers run their businesses as usual, with the NBI switch performing only the community functions no one could take on alone.
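The design idea behind that switch still shows up in payment systems today: every member presents one standard authorization interface, and the switch talks only to that interface. Here is a minimal sketch of the pattern in Python, with hypothetical names of our own rather than NBI’s actual design:

```python
from dataclasses import dataclass
from typing import Dict, Protocol

# Illustrative sketch only: names and logic are hypothetical, not NBI's design.

@dataclass
class AuthRequest:
    card_number: str      # account number read at the point of sale
    merchant_id: str
    amount_cents: int

@dataclass
class AuthResponse:
    approved: bool
    reason: str = ""

class MemberSystem(Protocol):
    """The single 'shape' every member bank must present to the switch."""
    def authorize(self, request: AuthRequest) -> AuthResponse: ...

class LegacyMainframeAdapter:
    """One member's wrapper around its in-house system, however it works inside."""
    def authorize(self, request: AuthRequest) -> AuthResponse:
        # Stand-in decision logic; the real call lived in the bank's back office.
        return AuthResponse(approved=request.amount_cents < 50_000)

class Switch:
    """Routes each authorization request to the issuing bank's adapter."""
    def __init__(self) -> None:
        self._members: Dict[str, MemberSystem] = {}

    def register(self, bank_prefix: str, member: MemberSystem) -> None:
        self._members[bank_prefix] = member

    def route(self, request: AuthRequest) -> AuthResponse:
        issuer = self._members.get(request.card_number[:4])
        if issuer is None:
            return AuthResponse(False, "unknown issuer")
        return issuer.authorize(request)

# Usage: the switch never sees the diversity behind each adapter.
switch = Switch()
switch.register("4128", LegacyMainframeAdapter())
print(switch.route(AuthRequest("4128000011112222", "M-001", 12_500)))
```

The point is structural: because every adapter looks identical to the switch, members could keep running their own shops while the switch performed only the community functions no one could take on alone.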
With business soaring—first-quarter sales in 1973 were up 43 percent over 1972—upgrades were constant, and keeping the projects small, scoped, and delivered on time was critical. These were tough schedules, tightly managed by Hock’s in-house team.
“One was given latitude and support to do what had to be done often with unrealistic deadlines,” a developer recalled of Hock. “But he made sure—and nobody questioned—who was really in charge.”
Lax management and governance would not be an issue. There was a new sheriff in town now as platform steward. The community had been called to order: a platform governed by a guiding principle of “no one more equal than another” had been built. Democracy notwithstanding, Hock led his members forward with a firm grip.
The results proved the platform and Hock’s leadership. More banks joined, giving customers and merchants assurance that the card was in fact “everywhere you want to be.” More people went traveling with their cards, generating more fees, making members richer. Hock kept proving the platform, the collaboration, his team, and his leadership.
The NBI name soon changed to Visa USA, signaling the final break with the past and a forward march to a future of global membership growth, massive expansion of the cardholder base, near-universal merchant acceptance, staggering sales volume, and—even after forty-four years in business—dominance in the global bank card system.
Hock had the advantage of building upon a platform that already existed but was floundering and unfinished in terms of infrastructure or rules. The wildly underperforming BankAmericard was underserving the collaboration, putting the entire network at risk of collapse. Having convinced Bank of America that together they could make the pie bigger for everyone, Dee Hock convened his members, and together re-visioned the future, right-sized the problems, rebuilt the platform, and delivered unsurpassed value for all.
Sometimes the platform steward resists such a radical restructuring. When that happens, you can have a fight on your hands.
On November 2, 2004, on a target range off Kauai, the US Pacific Fleet sank one of its own, the USS Valley Forge. A Ticonderoga-class guided missile cruiser armed with Lockheed Martin’s famed Aegis combat control system, the Valley Forge had been commissioned eighteen years earlier. But halfway through its expected thirty-five-year service life, the ship was too expensive to upgrade or modernize. Stripped of its sensitive electronics, the Valley Forge was tethered like some goat to a stake and sent to the ocean bottom in a hail of gunfire. All for practice.
“The US Navy is sinking its ships,” a US Navy captain said, “making them reefs. We lost 50 percent of what we intended when we invested in that ship. We cannot afford to keep doing this.”
By November 2010, Ashton Carter, on leave from Harvard and then US Under Secretary of Defense, had had enough. In a sternly worded directive to the Secretaries of the Army, Air Force, and Navy, Carter ordered them to move to “open architecture” using easily swappable components, commercially available and well-priced, and to make sure it all worked together.
The idea was simple: Stop using “closed” systems from vendors that required ripping everything out to fix one broken component. Deal with a fast-changing world—and fast-changing technology—by shopping commercially for gear; measuring, testing, and proving best in class; refreshing it constantly; and assuring that everything afloat could “interoperate”—work together coordinated by computers.
If they needed inspiration, Carter said, they should look to the submarine service. The Navy’s newest Virginia-class submarines had proven the benefits of the approach, as had every back-fit class of submarines afloat. Now, Carter said, “You do it.”
Bill Johnson could only feel satisfaction. Fifteen years earlier, Johnson had brought the submarine service back from the brink of destruction by Russian submarines running superior sonar systems—using commercially available off-the-shelf systems, just as Carter was now demanding. He knew Carter would have a fight on his hands.
Throughout the Cold War, finding, stalking, and exposing the Soviet nuclear submarine fleet had been a top priority of US forces.
The Soviets ran noisy subs, but they were good. One day in the midst of 1962’s Cuban Missile Crisis, four Soviet Northern Fleet subs surfaced inside the US quarantine line around Cuba. Even with the US Atlantic Fleet on a wartime footing, with 85 percent of its assets at sea, the Soviet boats had somehow made it through the northern seas to their stations off Cuba without detection.
The US fleet hit the panic button. Over the next thirty years the United States developed and deployed an amazing array of sonars: surveillance sonar sitting passively on buoys or ocean bottoms, or sonar towed in arrays tethered thousands of feet behind surface ships and submarines, or active pinging sonar in the subs themselves. The Soviets may not have known it, but through the 1970s and ’80s the United States Navy enjoyed a decisive acoustic superiority.
Then in the 1990s, the oceans suddenly became very quiet. Unbeknownst to the United States, for twenty years naval-officer-turned-spy John Walker and his gang had been supplying the Soviets with the Navy’s secrets. The Soviets came to understand just how thoroughly trumped they had been and adapted. Using Walker’s data and Toshiba’s manufacturing prowess, the Soviets quieted their “boomer” missile subs and, keeping them closer to home, defeated US efforts to find them.
“This is the first time since we put Nautilus to sea,” Admiral Jeremy Boorda, US chief of naval operations, told the House Armed Services Committee, “that the Russians have had submarines at sea quieter than ours.”
Deep at sea, Russian subs stalked US subs mere yards away without being detected. In one incident, the American attack sub Augusta had been trailing a Russian ballistic missile sub off Bermuda. Unseen by the Augusta, another Russian sub was close behind, stalking the American. The two collided, forcing the Augusta to return to the sub yards at Groton, Connecticut, for a $3 million repair.
Meanwhile, with the Cold War “over,” Presidents Ronald Reagan and George H. W. Bush slashed US strategic nuclear forces. That included reducing the number of nuclear missile-launching subs from thirty-six to eighteen.
Alarm bells were ringing all over Washington. Not only was the US Navy down missile subs, it was clear that the Navy had lost its long-dominant undersea acoustic advantage.
In 1997, for example, Norman Polmar, a long-time observer of US submarine strategy, testified before Congress. “Last month,” Polmar said, “the Director of Naval Intelligence wrote that the Russian submarine force, despite its reduced numbers and manning problems, ‘still remains the technological pacing challenge by which the US submarine force measures itself.’ Is there another area of naval warfare in which Russia—or any other nation—is the ‘technological pacing challenge’ for American forces?”
Bill Johnson’s mission was to restore that advantage. Navy admirals figured it would take six years; that was the yardstick they’d always used for major overhauls.
Johnson figured eighteen months, and he was right.
In 1999, Admiral Malcolm Fages, director of the Submarine Warfare Division, testified in the US Senate to a much-changed state of affairs:
The Submarine Force is making significant, rapid improvements in acoustic sensors and processing. In real-world exercises and operations, towed array and sonar systems are ensuring our submarines retain the acoustic advantage. Use of commercial off the shelf equipment has resulted in substantially reduced costs with significantly improved processing capability. Each ship-set costs only a small fraction of the price of its predecessor, yet improves processing power by an order of magnitude. Improvements in processing power allow the use of powerful new algorithms that result in much improved detection ranges in testing to date.
What had happened in the two years between Polmar’s and Fages’s testimony? Johnson had been at work, and by late 1997 had quietly delivered on the promise. By 1999 the US submarine fleet was well along a program of renewal, restoration, and recapture of its acoustic superiority.
How did Johnson effect such a dramatic turnaround so quickly? With a new platform that unleashed the power of collaboration.
Johnson’s vision was (as John Shea might say) “simple in the extreme.” US submarines will hit and hold new levels of acoustic superiority without coming out of service for time-consuming rebuilds.
Johnson’s plan was decisive and right-sized: we will rip the guts out of the compromised sonars now in subs and replace them with commercially available, easy-to-upgrade, off-the-shelf gear. The sonar systems will refresh constantly to match latest technology against new fleet challenges. The US Navy will no longer wait to build entire new classes of boats to bring its combat platforms current.
Johnson’s method: he would collaborate in ways that would soon shock contractors, acquisition bureaucrats, fleet commanders, and scientists alike, using collaboration methods that became the essence of the transformed combat platform itself.
And Johnson delivered unprecedented performance. Fast.
But Johnson faced powerful opposition from groups that had benefited from the old, broken platform. Like Dee Hock, Johnson knew he needed to go beyond the tight circle of insiders who stewarded the old platform. Figuring out what needed to be done from a technical perspective was easy compared to dealing with the politics of changing the platform. Getting beyond the conflict and confusion would test Johnson’s mettle and his leadership.
———
Step back to 1995. America’s “acoustic superiority crisis” had become critical. “We’re getting way too close to the bad guys. We can’t hear them. The danger of collision is high. We need to get that standoff range back,” said Bruce DeMars, a four-star admiral and director of Naval Nuclear Propulsion.
Fear of collision was half the Navy’s concern. But the greater concern was having a hot Russian torpedo coming at US subs out of the blue. For US submarine captains, not knowing where the Russian subs were was a disaster waiting to happen.
Crystal City, Virginia, was ground zero for the crisis. It was there that the Navy convened the first meeting of the Submarine Superiority Technology Panel (SSTP), a distinguished group of ten outsiders led by the esteemed Johns Hopkins University researcher John Schuster and chaired by MITRE Corporation’s Ken Hawker.
Their mission: figure out what’s wrong with the US sonars, prescribe a fix, and do it fast.
At the time, Bill Johnson was the lead civilian for submarine sonar in the Navy’s Submarine Combat System Program Office. That made him the top acquisition expert for new submarine sonar. Even so, Johnson recognized only a few faces at the SSTP session—those of his counterpart from the surveillance sonar community, who ran the passive seafloor listening systems, and his surface group counterpart, who ran towed arrays behind cruisers and destroyers.
Strange as many of the faces in the room were, Johnson was even more in the dark about what had caused all the fuss. The US Navy was so compartmentalized that he had little direct contact with the fleet. Everything Johnson knew about the performance of his sonar systems at sea—in navyspeak, “forward deployments”—came filtered down through the Office of Naval Intelligence or the Naval Undersea Warfare Center (NUWC) at Newport, Rhode Island.
Six months down the road, as expert as he was, Johnson had received quite an education from the SSTP briefings. By his own account, Johnson had had his eyes wide shut, head in the sand.
First, the SSTP discovered staggering waste. For each new class of boat the Navy developed, for example, it had vast, duplicative organizations and infrastructure—sonar platforms, labs, and test facilities.
Second, Johnson learned that although the Walker spy ring had given the Soviets all sorts of quieting advantage, the United States could still hear Russian subs. The Americans’ ocean-floor sonar, for example, was effective.
Sitting in the Panel meetings, Johnson saw the data for the first time. The Office of Naval Intelligence (ONI) had processed a recording of a Russian sub using the ocean-floor surveillance systems, and then put the same signal through the submarine-based systems.
The difference was astounding. Like a heart-rate monitor, the ocean-floor system showed a healthy straight line across a chart: it had maintained nearly continuous contact with the Russian sub. The submarine-based system was on life support: nothing but dits, dashes, dots, and gaps—mostly gaps, mostly noncontact.
Pictures—“lofargrams” in sonarspeak—were worth a thousand words. The data spoke: ocean-floor signal processing was superb. Submarine signal processing was broke.
The difference was in the gear and the algorithms. The ocean-floor listening devices had been constantly improved with the latest off-the-shelf technology and dazzling new software. They’d been able to reap the dividend of plummeting costs of signal processing and microchips, and skyrocketing performance in both.
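For readers who have never seen one, a lofargram is essentially a waterfall display: narrowband spectra stacked over time, so a steady machinery tone shows up as an unbroken streak and lost contact shows up as gaps. Here is a toy sketch of that kind of processing in Python, for illustration only; it bears no relation to the Navy’s actual signal chain:

```python
import numpy as np

def lofargram(signal: np.ndarray, window_len: int = 4096,
              overlap: float = 0.5) -> np.ndarray:
    """Stack windowed magnitude spectra over time into a simple 'waterfall'.

    Each row is one time slice; steady tonal lines from machinery appear as
    continuous streaks down the display, and gaps mean lost contact.
    """
    step = int(window_len * (1.0 - overlap))
    window = np.hanning(window_len)
    rows = []
    for start in range(0, len(signal) - window_len + 1, step):
        block = signal[start:start + window_len] * window
        spectrum = np.abs(np.fft.rfft(block))
        rows.append(20.0 * np.log10(spectrum + 1e-12))   # decibel scale
    return np.vstack(rows)

# Toy example: a faint 60 Hz "machinery" tone buried in sea noise.
fs = 2000.0                           # samples per second
t = np.arange(0, 120.0, 1.0 / fs)     # two minutes of data
tone = 0.05 * np.sin(2 * np.pi * 60.0 * t)
gram = lofargram(tone + np.random.randn(t.size))
# Render gram with any image plot; better processing pulls fainter lines out
# of the noise, which is the dividend the ocean-floor systems were reaping.
```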
In contrast, NUWC’s sub-based sonar was refreshed at best every five or six years. Not surprisingly, sub-based sonar had fallen well behind. Once compromised, it would stay compromised until the next major rebuild.
Only now, sitting with the other Panel members, did Johnson realize how far the United States had fallen behind.
Bill Johnson didn’t wait around for orders to fix it. As he gained insights over the months, he’d put his plan together. By September 1995, when the Panel reported its results, Johnson was ready to move.
Johnson’s vision lined up perfectly with the Panel’s recommendations: Quickly adapt and use the ocean-floor systems on US subs. Over time, standardize 80 percent of the infrastructure for all sonar platforms, and then highly customize the last 20 percent. Buy commercially available off-the-shelf (COTS) hardware and software, saving money and time. Plan for system refreshes every couple of years. Don’t waste money stockpiling soon-to-be-out-of-date spare parts. Engage the fleet and expert peer groups of academics, operators, and vendors for design and evaluations of the fleet’s sonar.
Johnson disagreed with only one Panel conclusion. “There is no quick fix to the sonar problem,” it reported. Johnson thought he could get the first fix on the subs and out to sea within eighteen months of authorization.
It would be a complete overhaul of the platform—not just the technology, but the way it was conceived, designed, developed, and deployed. It was imperative that this new platform for acquisition and procurement support a new broad-based collaboration, from the fleet itself to scientists, engineers, and vendors.
And Johnson vowed to make the process transparent. He would rely on peer group evaluations and data-driven analysis collected directly from his labs or subs at sea.
Johnson had the latitude to do it: he ran a post-Panel working group tasked to incorporate the best features of all signal-processing techniques into the Navy’s new sonars.
The key to platform overhaul: performance, budget, and cover.
Johnson wasn’t looking for more. He already had $80 million allocated, which he would redirect to the crash program. His promise: retarget funds I already have; boost performance without new money.
Better than that, Johnson’s new strategy promised steady spending. Rather than refreshing systems through total overhauls, or buying a new boat class with huge upfront expenditure, sub sonars could be refreshed every year with low-cost processing upgrades. Where the Navy stockpiled $600 million in spare sonar processors for the 23 oldest Los Angeles-class attack subs alone, Johnson’s strategy relied on constant refresh and commercial off-the-shelf (COTS) gear: it would end stockpiling. The performance of sub sonar would be stellar; its costs steady and low.
With his captain, John P. “Jack” Jarabak, Johnson coined the term for the initiative: the Acoustic Rapid COTS Insertion Program—ARCI, for short (pronounced AR-kee). Behind that awkward acronym was an idea that audaciously challenged the old ways of doing business and threatened long-entrenched forces for the status quo. Johnson’s world would soon feature combat of the bureaucratic, not nautical, sort.
Johnson and Jarabak soon got their first green light: put the ocean-floor software onto a submarine and see if they could replicate the performance ONI had demonstrated in Crystal City, but this time at sea.
Resistance began to mount. A blue-ribbon panel had warned Congress in 1989 to expect bureaucratic warfare from any major submarine overhaul. “The Navy establishment,” it said, “is burdened with internal vested and sometimes conflicting interests that encumber innovation and execution on the scale required here.”
Top NUWC executives were having none of Johnson’s ways. NUWC’s engineers were the submarine force’s signal-processing experts; “design, develop submarine sonar” was their “house.” From the moment the ONI demo in Crystal City tore the veil off NUWC’s poorly performing sub-based signal processing, its executives felt their franchise was under threat. Johnson’s plan to put some other shop’s software onto actual submarines and take it to sea infuriated them. His plan to test other sonar designs against NUWC’s was practically sedition.
NUWC executives fought back, even as NUWC managers began quietly supporting Johnson and ARCI. One afternoon Jarabak grabbed Johnson at his desk. “Come on,” he said. “We’re going up to the admiral’s office. He’s being briefed by NUWC.” It turned out NUWC was wrapping its own sonar in the same COTS “flag” Johnson was waving. “Hey,” NUWC’s executive was saying, “NUWC already has a commercial off-the-shelf system out there in the fleet. Let’s use that and take our time with ARCI and do it right.” Johnson went up to the front of the room, took the pointer from the briefer’s hand, and critiqued the NUWC claim.
“It was a hard pill for NUWC to swallow,” Johnson said. “They had a lot of good ideas. They just didn’t have all the good ideas. And some were incapable of being objective when it came to comparing their own ideas with somebody else’s.”
By June 1996, Vice Admiral George R. Sterner, commander of the Naval Sea Systems Command, approved the ARCI plan. Eighteen months later—in November 1997—Johnson’s group had developed, tested, and certified the new ARCI system and readied it for sea trials on the first ARCI-equipped sub, the Augusta, the same boat that had collided with the Russians and started the whole sonar saga.
The Augusta set sail in January 1998; it was the first submarine equipped with the ARCI system sonar running the new algorithms from the ocean-floor community on commercially available software militarized for the purpose.
The engineers’ reports from a predeployment workup at sea had been raves: the new sonar algorithms let the sonar operators see things they’d never seen before. But when the Augusta returned from its ninety-day deployment, the report was “Nice job on the new sonar—but ‘no change.’ We don’t see anything different.”
Johnson didn’t get it. The data from the Augusta confirmed the pretrial engineers’ reports: huge gains. Why was the crew reporting no improvement in performance?
Johnson turned to a user group of active-duty master chiefs. He’d convened these savvy operational sonar guys from the fleet to help configure the new sonar displays. “What gives?” he asked.
“We have a training issue here” was the reply. “And we’re still working with the old legacy system displays. We need to put the new signal through new displays that can handle it.”
“When I developed this system I was thinking of it in terms of hardware and software,” Johnson said. “The people part of the equation was really somebody else’s to deal with.”
Which led Johnson to ask of the fleet, “Do you guys really know what you’re looking at?”
A test devised by the chiefs soon answered his question. Among two hundred sonar operators, expert-level personnel correctly answered the question “What is this thing I am looking at?” 76 percent of the time. (These were the “Jonesys” we all saw in the movie The Hunt for Red October.) But the average operator got it right only 25 percent of the time.
No wonder the fleet reported that the subs were not seeing much difference. Its glasses were fogged up; no one was seeing much of anything.
“Here we were pouring hundreds of millions of dollars into these sonar systems that extracted the last decibel of information out of the ocean,” Johnson said, “and it’s all falling on the floor because these guys don’t recognize what they’re seeing.”
Johnson didn’t run training—he had no budget for it, no say in it. The people who did pushed back. “We know what we’re doing. Look at these testimonials!” But the data were clear. Nothing on Johnson’s platform would deliver the performance expected without training.
Nevertheless, no amount of logic would move the trainers. So Johnson went to the admirals, showed them the tests, and embarrassed the trainers into compliance.
Together with new flat-screen displays that made signals easier to read, a four-hour training session for the operators did the trick. The flat screens were the master chiefs’ handiwork, ARCI’s first user-configured component. Once trained on the new screens, younger, less-experienced operators actually did better than the experienced crew used to working the old screens.
By March 1998, Johnson’s team in Manassas, Virginia, was at work retooling the “baseline” ARCI platform from the Augusta for a first wave of fleet-wide installations. Johnson would run sea trials in the fall of 1998; the fleet wanted the ARCI build operational on the USS Memphis by the spring of 1999. That worked for Johnson: he wanted his master chiefs to see their idea in the fleet in months, not years, to keep them jazzed up and engaged.
If the chiefs were thrilled, Lockheed Martin, which owned the Aegis-class system displays and was looking to lock up the submarine franchise, too, was less so. ARCI was like some new trophy wife. Its slick flat screens displaced twenty years of custom monitors. And what was with this “fleet-sourced design” of the master chiefs? “Operators will be running around changing their minds with every deployment!”
For Johnson, that was exactly the idea. Fleet operators—“customers” like his master chiefs—collaborating with designers was the ARCI platform’s “new normal.” A new Advanced Processing Build (“APB”) would be done each year for the next four—an exhausting pace. The technology permitted it, and Johnson fixed the business process to accommodate it.
There was a “war” on; Johnson was point man in the race to restore American acoustic superiority. Johnson took the display work away from Lockheed Martin and gave it to DRS Technologies, which would build the display hardware, and Digital Systems Resources, which would write the display software. These smaller firms “got” the concept of fleet consultation and were more than willing to follow the sailors’ lead on this. And Johnson got better prices: DRS Technologies charged $180,000 for displays that performed better than the old $1 million monochrome Lockheed sets.
True, a small firm like DRS Technologies lacked Lockheed’s lifetime stockpile of already out-of-date spare parts. But DRS’s displays were built with off-the-shelf parts, which meant they could be refreshed faster, better, and cheaper anyway. And for good measure, DRS Technologies was good politics. Located in Johnstown, Pennsylvania, the company was smack in the middle of the Twelfth Congressional District. That was “Murtha Country,” home base for Representative John Murtha, the chairman of the US House Appropriations Subcommittee on Defense.
Johnson’s initial Advanced Processing Build would be ARCI’s first peer-reviewed system choice, involving world-class signal-processing experts like MITRE Corporation’s Gary Jacyna. Until then, NUWC had almost always had final say on what went into the boats. And NUWC, with Lockheed Martin, had already developed a new high-frequency sonar for the Navy’s next-generation submarine, the Virginia class.
Using newer technology, another Navy group working with the University of Texas claimed to have NUWC’s approach beaten.
“I saw that none of us had the market cornered on sonar brains,” Johnson said. “These guys came from university labs: Johns Hopkins, Penn State, the University of Texas. My idea was we’re going to get the best idea from wherever they are, including NUWC—and it’s to my benefit to collaborate with these other domains.”
In the fall, Johnson sent the two systems to sea, and convened a peer panel of Penn State and Johns Hopkins experts to review performance. Johnson roiled NUWC managers further by using submarine funds to pay nonsubmarine sonar experts to peer-review NUWC plans.
The data spoke, proving the superiority of the University of Texas system. With some tweaking, Johnson’s group added in the best features of the NUWC system. Johnson made the call: the Memphis goes to sea in the spring with the University of Texas system.
It was a bizarre situation: Johnson was backfitting his older subs with technology that had already leapfrogged the Virginia class systems before the Virginia subs were even built. Wherever the Memphis went on that highly classified mission in the spring of 1999, the admirals rated it one of the most successful deployments ever. “Spectacular, actually,” Johnson said.
Johnson, too, was satisfied. The first APB used peer-reviewed beam-forming and passive sonar processing improvements, many of which originated from the University of Texas. It used displays conceived of by the master chiefs.
“This first APB,” Johnson said, “proved the value of inclusiveness and transparency in our new process.” The ARCI platform had delivered price, performance, and agility.
By the end of 1999, the ARCI backfitting program was well underway. Fifteen subs were backfitted and ready for sea. The rest of the fleet was to come.
The acquisition and procurement platform Johnson developed—with architecture that invited the collaboration of universities, fleet operators, and private companies that weren’t part of the inner circle, that based decisions on peer review and data analysis, and that supported a constant refresh of technology—was nothing short of revolutionary.
As in any revolution, those who saw their power, hegemony, and bottom lines threatened fought back. The fleet took a different perspective. US Navy Admirals Edmund Giambastiani, George Sterner, Frank Bowman, and Bruce DeMars saw the results, backed Johnson, briefed Congress, and won support.
Overall performance gains were astounding. With ARCI-based sonars, detection rates for all operators—not just the experts—were now 87 percent, a factor-of-four improvement. False alarms were down 40 percent, to fewer than one per deployment. Time to detection—and, once detected, classification—was twenty-seven minutes faster. Mean contact holding time had improved by twenty-five minutes per detection.
By 2007, every submarine hull in the American fleet had been converted to ARCI-based sonar. Many boats already had had five or more hardware and software upgrades. For the operators, ARCI meant dramatically faster, more accurate detection. US undersea acoustic superiority was restored, and holds to this day.
After much conflict, Johnson, NUWC executives, and Lockheed made their peace. The ARCI build and major contributors like MITRE Corporation received two of Vice President Al Gore’s “Hammer” awards for innovation. At the awards ceremonies, Johnson insisted that every ARCI collaborator’s name be announced—all five hundred.