ROBUST, YET FRAGILE
This is a book about why things bounce back.
In the right circumstances, all kinds of things do: people and communities, businesses and institutions, economies and ecosystems. The place in which you live, the company at which you work, and even, without realizing it, you yourself: Each is resilient, or not, in its own way. And each, in its own way, illuminates one corner of a common reservoir of shared themes and principles. By understanding and embracing these patterns of resilience, we can create a sturdier world, and sturdier selves to live in it.
But before we can understand how things knit themselves back together, we have to understand why they fall apart in the first place. So let’s start with a thought experiment:
Imagine, for a moment, that you are a tree farmer, planting a new crop on a vacant plot of earth. Like all farmers, to get the most use out of your land, you will have to contend with any number of foreseeable difficulties: bad weather, drought, changing prices for your product, and, perhaps especially, forest fires.
Fortunately, there are some steps you can take to protect your tree farm against these potential dangers. To lower the risk of fire, for example, you can start by planting your saplings at wide but regular intervals, perhaps 10 meters apart. Then, when your trees mature, none of their canopies will touch, and a spark will have trouble spreading from tree to tree. This is a fairly safe but fairly inefficient planting strategy: Your whole crop will be less likely to burn to the ground, but you will hardly be getting the most productive use out of your land.
Now imagine that, to increase the yield of your farm, you randomly start planting saplings in between this grid of regularly spaced trees. By definition, these new additions will be much closer to their neighbors, and their canopies will grow to touch those of the trees surrounding them. This will increase your yield, but also your risk: If a fire ignites one of these randomly inserted trees, or an adjoining one, it will have a much higher chance of spreading through its leaves and branches to the connected cluster of trees around it.
At a certain point, continuing to randomly insert trees into the grid will result in all of the trees’ canopies touching in a dense megacluster. (This is often achieved when about 60 percent of the available space is filled in.) This, in contrast to your initial strategy, is a tremendously efficient design, at least from a planting and land-use perspective, but it’s also very risky: A small spark might have disastrous consequences for the entire crop.
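For readers who like to tinker, the “about 60 percent” figure can be rediscovered with a few lines of code. The Python sketch below is purely illustrative: it assumes a square grid in which two canopies touch only when trees occupy neighboring cells, and the grid size, fill fractions, and trial counts are arbitrary choices rather than anything measured on a real farm.

import random

def spanning_cluster_exists(n, fill_fraction, rng):
    # Plant each of the n*n cells with probability fill_fraction, then check
    # whether one connected cluster of canopies reaches from the top edge of
    # the plot to the bottom edge.
    grid = [[rng.random() < fill_fraction for _ in range(n)] for _ in range(n)]
    frontier = [(0, c) for c in range(n) if grid[0][c]]
    seen = set(frontier)
    while frontier:
        r, c = frontier.pop()
        if r == n - 1:
            return True  # a spark at one edge could burn clear through the plot
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append((nr, nc))
    return False

def spanning_probability(fill_fraction, n=60, trials=200, seed=1):
    rng = random.Random(seed)
    hits = sum(spanning_cluster_exists(n, fill_fraction, rng) for _ in range(trials))
    return hits / trials

for p in (0.40, 0.55, 0.59, 0.65):
    print(f"fill fraction {p:.2f}: plot-spanning cluster in {spanning_probability(p):.0%} of trials")

On a large enough grid, the chance of a single plot-spanning cluster jumps from negligible to near-certain as the fill fraction passes roughly 59 percent, the percolation threshold hiding behind the figure quoted above.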
Of course, being a sophisticated arborist, you’re unlikely to plant your trees in either a sparse grid or a dense megacluster. Instead, you choose to plant groves of densely planted trees interspersed with roads, which not only provide you access to the interior of your property but act as firebreaks, separating one part of the crop from another and insulating the whole from a fire in one of its parts.
These roads are not free: By reducing the area usable for planting, each imposes a cost, so you must be careful where you put them; adding too many roads is just as bad as adding too few. But eventually, with lots of trial and error, and taking into account local variations in the weather, soil, and geography, you may alight upon a near-perfect design for your particular plot—one that maximizes the density of trees while making smart and efficient use of roads to access them. Your tree farm’s exceptional design will easily withstand the occasional fire, without ever burning entirely to the ground, all the while providing you with seasonally variable but not wildly gyrating timber returns.
Imagine your horror, then, when one day you discover that much of your perfectly designed plot has been infested by an invasive foreign beetle. This tiny pest, native to another geographic region entirely, stowed away on a shipment from an overseas supplier, then hitchhiked its way to the heart of your tree farm by clinging to your boot. Once inside, it exploited the very features of your design—your carefully placed roads—that were intended to insure against the risks you thought you’d confront.
It is at this moment in our thought experiment that you have painfully discovered that, in systems terms, your tree farm design is robust-yet-fragile (or RYF), a term coined by California Institute of Technology research scientist John Doyle to describe complex systems that are resilient in the face of anticipated dangers (in this case forest fires) but highly susceptible to unanticipated threats (in this case exotic beetles).
On any given day, our news media is filled with real-world versions of this story. Many of the world’s most critical systems—from coral reefs and communities to businesses and financial markets—have similar robust-yet-fragile dynamics; they’re able to deal capably with a range of normal disruptions but fail spectacularly in the face of rare, unanticipated ones.
As in our tree-planting example, all RYF systems involve critical trade-offs, between efficiency and fragility on the one hand and inefficiency and robustness on the other. A perfectly efficient system, like the densely planted tree farm, is also the most susceptible to calamity; a perfectly robust system, like the sparsely planted tree farm, is too inefficient to be useful. Through countless iterations in their design (whether the designer of a system is a human being or the relentless process of natural selection), RYF systems eventually find a midpoint between these two extremes—an equilibrium that, like our roads-and-groves tree farm design, balances the efficiency and robustness trade-offs particular to the given circumstance.
The complexity of the resulting compensatory system—the network of the roads and groves in the example above—is a by-product of that balancing act. Paradoxically, over time, as the complexity of that compensatory system grows, it becomes a source of fragility itself—approaching a tipping point where even a small disturbance, if it occurs in the right place, can bring the system to its knees. No RYF design can therefore ever be “perfect,” because each robustness strategy pursued creates a mirror-image (albeit rare) fragility. In an RYF system, the possibility of “black swans”—low-probability but high-impact events—is engineered in.
The Internet presents a perfect example of this robust-yet-fragile dynamic in action. From its inception as a U.S. military-funded project in the 1960s, the Internet was designed to solve a particular problem above all else: to ensure the continuity of communications in the face of disaster. Military leaders at the time were concerned that a preemptive nuclear attack by the Soviets on U.S. telecommunications hubs could disrupt the chain of command—and that their own counterstrike orders might never make it from their command bunkers to their intended recipients in the missile silos of North Dakota. So they asked the Internet’s original engineers to design a system that could sense and automatically divert traffic around the inevitable equipment failures that would accompany any such attack.
The Internet achieves this feat in a simple yet ingenious way: It breaks up every email, web page, and video we transmit into packets of information and forwards them through a labyrinthine network of routers—specialized network computers that are typically redundantly connected to more than one other node on the network. Each router contains a regularly updated routing table, similar to a local train schedule. When a packet of data arrives at a router, this table is consulted and the packet is forwarded in the general direction of its destination. If the best pathway is blocked, congested, or damaged, the routing table is updated accordingly and the packet is diverted along an alternative pathway, where it will meet the next router in its journey, and the process will repeat. A packet containing a typical web search may traverse dozens of Internet routers and links—and be diverted away from multiple congestion points or offline computers—on the seemingly instantaneous trip between your computer and your favorite website.
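To make the rerouting idea concrete, here is a deliberately toy Python sketch. Real routers run protocols such as OSPF and BGP; this sketch only captures the essential move described above: when part of the network disappears, find another path and forward the packet along it. The router names and the network map are invented.

from collections import deque

def shortest_path(links, src, dst, down=frozenset()):
    # Breadth-first search for one shortest path from src to dst over the
    # working links, skipping any routers listed in `down`.
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for neighbor in links.get(node, ()):
            if neighbor not in visited and neighbor not in down:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # destination unreachable

# Hypothetical network: each router is redundantly connected to several others.
links = {
    "you": ["A", "B"],
    "A": ["you", "B", "C"],
    "B": ["you", "A", "D"],
    "C": ["A", "D", "website"],
    "D": ["B", "C", "website"],
    "website": ["C", "D"],
}

print(shortest_path(links, "you", "website"))              # ['you', 'A', 'C', 'website']
print(shortest_path(links, "you", "website", down={"C"}))  # ['you', 'B', 'D', 'website']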
The highly distributed nature of the routing system ensures that if a malicious hacker were to disrupt a single, randomly chosen computer on the Internet, or even physically blow it up, the network itself would be unlikely to be affected. The routing tables of nearby routers would simply be updated and would send network traffic around the damaged machine. In this way, it’s designed to be robust in the face of the anticipated threat of equipment failure.
However, the modern Internet is extremely vulnerable to a form of attack that was unanticipated when it was first invented: the malicious exploitation of the network’s open architecture—not to route around damage, but to engorge it with extra, useless information. This is what Internet spammers, computer worms and viruses, botnets, and distributed denial of service attacks do: They flood the network with empty packets of information, often from multiple sources at once. These deluges hijack otherwise beneficial features of the network to congest the system and bring a particular computer, central hub, or even the whole network to a standstill.
These strategies were perfectly illustrated in late 2010, when the secrets-revealing organization WikiLeaks began to divulge its trove of secret U.S. State Department cables. To protect the organization from anticipated retaliation by the U.S. government, WikiLeaks and its supporters made copies of its material—in the form of an encrypted insurance file containing possibly more damaging information—available on thousands of servers across the network. This was far more than the United States could possibly interdict, even if it had had the technical capability and legal authority to do so (which it didn’t). Meanwhile, an unaffiliated, loose-knit band of WikiLeaks supporters calling itself Anonymous initiated distributed denial-of-service attacks on the websites of companies that had cut ties with WikiLeaks, briefly knocking the sites of companies like PayPal and MasterCard offline in coordinated cyberprotests.
Both organizations, WikiLeaks and Anonymous, harnessed aspects of the Internet—redundancy and openness—that had once protected the network from the (now-archaic) danger that had motivated its invention in the first place: the threat of a strike by Soviet missiles. Four decades later, their unconventional attacks (at least from the U.S. government’s perspective) utilized the very features of the network originally designed to prevent a more conventional one. In the process, the attacking organizations had proven themselves highly resilient: To take down WikiLeaks and stop the assault of Anonymous, the U.S. government would have had to take down the Internet itself, an impossible task.
Doyle points out a dynamic quite similar to this one at work in the human immune system. “Think of the illnesses that plague contemporary human beings: obesity, diabetes, cancer, and autoimmune diseases. These illnesses are malignant expressions of critical control processes of the human body—things like fat accumulation, modulation of our insulin resistance, tissue regeneration, and inflammation that are so basic that most of the time we don’t even think about them. These control processes evolved to serve our hunter-gatherer ancestors, who had to store energy between long periods without eating and had to maintain their glucose levels in their brain while fueling their muscles. Such biological processes conferred great robustness on them, but in today’s era of high-calorie, junk-food-saturated diets, these very same essential systems are hijacked to promote illness and decay.”
To confront the threat of hijacking, Internet engineers add sophisticated software filters to their routers, which analyze incoming and outgoing packets for telltale signs of malevolent intent. Corporations and individuals install firewall and antivirus software at every level of the network’s organization, from the centralized backbones right down to personal laptops. Internet service providers add vast additional capacity to ensure that the network continues to function in the face of such onslaughts.
This collective effort to suffuse the system with distributed intelligence and redundancy may succeed, to varying degrees, at keeping some of these anticipated threats at bay. Yet, even with such actions, potential fragility has not been eliminated; it’s simply been moved, to sprout again from another unforeseeable future direction. Worse, as in all RYF systems, over time the complexity of these compensatory systems—antivirus software, firewalls—swells until it becomes a source of potential fragility itself, as anyone who’s ever had an important email accidentally caught in a spam filter knows all too well.
Along the way, paradoxically, the very fact that a robust-yet-fragile system continues to handle commonplace disturbances successfully will often mask an intrinsic fragility at its core, until—surprise!—a tipping point is catastrophically crossed. In the run-up to such an event, everything appears fine, with the system capably absorbing even severe but anticipated disruptions as it was intended to do. The very fact that the system continues to perform in this way conveys a sense of safety. The Internet, for example, continues to function in the face of inevitable equipment failures; our bodies metabolize yet another fast-food meal without going into insulin shock; businesses deal with intermittent booms and busts; the global economy handles shocks of various kinds. And then the critical threshold is breached, often by a stimulus that is itself rather modest, and all hell breaks loose.
When such failures arrive, many people are shocked to discover that these vastly consequential systems have no fallback mechanisms, say, for resolving the bankruptcy of a major financial institution or for capping a deep-sea spill.
And in the wake of these catastrophes, we end up resorting to simplified, moralistic narratives, featuring cartoon-like villains, to explain why they happened. In reality, such failures are more often the result of the almost imperceptible accretion of a thousand small, highly distributed decisions—each so narrow in scope as to seem innocuous—that slowly erode a system’s buffer zones and adaptive capacity. A safety auditor cuts one of his targets a break and looks the other way; a politician pressures a regulator on behalf of a constituent for the reduction of a fine; a manager looks to bolster her output by pushing her team to work a couple of extra shifts; a corporate leader decides to put off necessary investments for the future to make the quarterly numbers.
None of these actors is aware of the aggregate impact of his or her choices, as the margin of error imperceptibly narrows and the system they are working within inexorably becomes more brittle. Each, with an imperfect understanding of the whole, is acting rationally, responding to strong social incentives to serve a friend, a constituent, a shareholder in ways that have a significant individual benefit and a low systemic risk. But over time, their decisions slowly change the cultural norms of the system. The lack of consequences stemming from unsafe choices makes higher-risk choices and behaviors seem acceptable. What was the once-in-a-while exception becomes routine. Those who argue on behalf of the older way of doing things are perceived to be fools, paranoids, or party poopers, hopelessly out of touch with the new reality, or worse, enemies of growth who must be silenced. The system as a whole edges silently closer to possible catastrophe, displaying what systems scientists refer to as “self-organized criticality”—moving closer to a critical threshold.
This dynamic—and our first hints about what we might do to improve the resilience of such systems—can be seen in two very different robust-yet-fragile systems that, with new analytical tools, are beginning to powerfully illuminate each other: coral reef ecology and global finance.
In the 1950s, Jamaica’s coral reefs were a thriving picture-postcard example of a Caribbean reef, supporting abundant mixtures of brilliantly colored sponges and feather-shaped octocorals sprouting up from the hard coral base. The reefs were popular habitats for hundreds of varieties of fish, including large predatory species such as sharks, snappers, groupers, and jacks, which were harvested by local fishermen as a reliable and time-honored food source for the island’s population.
By the 1970s, things appeared relatively unchanged. In the intervening two decades, however, Jamaica’s population had swelled by a third. Local fishermen, struggling to feed the island’s growing population, had begun using motorized canoes to harvest not only the predator species, but the smaller, plant-eating fish, such as surgeonfish and parrotfish, as well. The reefs still appeared healthy, however, and most of their inhabitants seemed to be thriving. Sea urchins, in particular, were flourishing since they no longer had to compete with the plant-eating fish for algae, the mainstay of their diet.
Then, on August 6, 1980, after almost four decades without a major storm, one of the most powerful Caribbean tempests in history, Hurricane Allen, descended on the island and its surrounding reefs. Winds exceeding 175 miles per hour whipped up a forty-foot-high storm surge, which pounded the reefs. Shallow-water corals were devastated; however, their deeper-water cousins, well below the surface, emerged relatively unscathed. In fact, a few months after the hurricane’s strike, substantial coral recruitment—the measure of young corals entering the adult population—was found in the deeper waters. For the next three years, coral cover slowly increased.
The general consensus among marine biologists at the time was that Jamaica’s reefs had survived remarkably well after the battering of Hurricane Allen. Data seemed to suggest that the reef system was surviving and, in the deeper waters around the island at least, maybe even thriving.
Then, in 1983, something terrifying happened below the surface of Jamaica’s waters. An unidentified pathogen decimated the entire population of Jamaica’s long-spined sea urchins. The illness was unprecedented in its lethality and its speed: Within a few days of the onset of symptoms, one observer reported, “on reefs that used to be black with urchins, one could swim for an hour without seeing a single living individual.” By February 1984, the sea urchin had been virtually eliminated from its normal habitat—the most extensive and severe mass mortality ever reported for a marine organism.
Coming after the long and slow decline of the other native plant-eating fish species from overfishing, the loss of the urchins proved catastrophic to Jamaica’s reefs. With no urchins—or other species—to keep it in check, algae quickly came to dominate every corner of the reef system, eventually covering 92 percent of its surface area and killing the corals underneath. With the loss of the corals went the remaining fish—reefs that supported hundreds of species for thousands of years were transformed into vacant algal wastelands seemingly overnight.
On a healthy reef, a new pathogen decimating a single species (like the urchin) might not have had catastrophic consequences, because an essential reef function—like keeping algae in check—could be performed by more than one species. On the highly compromised Jamaican reef, however, the continued flourishing of the ecosystem as a whole became entirely dependent on a single species continuing to do that job. The loss of the urchins, an otherwise modest trigger, caused the reef to collapse virtually overnight.
Yet, had you asked a marine scientist in 1982 to describe the reefs, you would have received a promising assessment: The reef had proved robust in the face of significant disturbances, ranging from hurricanes to extensive human fishing. There was little evidence of the hidden fragility that was being exacerbated by the slow loss of biodiversity.
Such clarity is available to us in hindsight, but consider the challenges posed by trying to manage such a system at the time. The interactions between the various agents affecting the health of the reef (fish, urchins, algae, corals, and human beings) were imperfectly understood and nonlinear—small changes could lead to big outcomes, and vice versa. Some of the dependencies between these agents were masked: Prior to the loss of the urchins, it was hard to tell if the loss of herbivorous fish was having a significant impact or no impact whatsoever. What’s more, recent experience suggested that the system could rebound from a disturbance on the scale of a hurricane. And even a healthy reef can behave in highly variable ways. Fish stocks rise and fall—how would you separate normal variability from the onset of collapse?
Similar questions confront us whenever we try to manage a complex system with highly interdependent parts. Whether we’re dealing with fish stocks or financial stocks, to improve the resilience of the system as a whole, we first need measurement tools that take the health of whole systems into account, not just their pieces. At least if we want to keep eating seafood.
• • •
As it happens, in the 1950s, the same decade that our story in Jamaica began, the California sardine industry was also devastated by a collapse. The booming fishing industry had provided the dramatic backdrop for John Steinbeck’s novel Cannery Row: In the mid-1930s, 790,000 tons of sardines were commercially fished from California’s waters. Yet by 1953, the catch had plummeted to less than 15,000 tons—a drop of 98 percent.
Two competing hypotheses emerged to explain the collapse: one based on traditional simple overfishing and another on the La Niña currents that cooled California’s ocean waters. However, in 2006, after poring over fifty years of data on sardine larvae, George Sugihara, a mathematician and theoretical ecologist at the Scripps Institution of Oceanography, proved that both theories were wrong. The sardine stock had collapsed not because the fishing industry was taking out too many small fish, but because it was taking out too many big ones: Sugihara found that California’s industrialized fishing operations had become so efficient at catching adult sardines that they had significantly changed the age structure of the entire stock. Bereft of adults in 1949 and 1950, the substantially more juvenile sardine population had failed to spawn, and when it encountered additional stressors from the natural world, it flipped into collapse.
Given sufficient time, it’s possible that the sardine population could have recovered from this megabust all by itself—such instabilities do occur naturally in ecosystems from time to time. Unfortunately, fisheries management practices of the time rarely took these kinds of instabilities into account. For much of the twentieth century (and in many places around the world today) the status quo of most fisheries management was based on maximum sustainable yield (MSY), or the largest catch that can be fished from the stock of a species over an indefinite period of time.
MSY is predicated on a linear system with stable equilibrium points. By addressing only one species at a time, and assuming that all other variables were fixed and unchanging, the MSY regime didn’t call for moderation from California’s commercial fishing fleets when sardine stocks started to decline. Rather, in the face of the intensified economic pressure of a reduced catch, many local fishing operations accelerated their harvesting efforts, pushing the ecosystem past the point of recovery. The fishing system had no brakes.
And MSY is still in widespread use today. It has been implicated in declining fish stocks all over the world: Today 63 percent of all fished species are overfished and in danger, and 29 percent have collapsed outright, meaning they are now, like the sardine, at least 90 percent below their historic maximum catch levels. In 2006, an international research team led by Boris Worm of Dalhousie University in Canada calculated that, if these trends continue, by 2048 all commercial fishing on Earth will cease—there will simply be no more fish in the sea.
To prevent such a catastrophe, Sugihara and others are trying to promote a more holistic alternative to MSY called ecosystem-based fishery management (EBFM). This whole-systems model is rooted in the observation that ecosystems are difficult to model and predict, with constantly changing threshold points that, when exceeded—as in the case of the California sardine population—can cause the system to collapse or restructure. EBFM counteracts this by promoting the maintenance of biodiversity as a central management goal whenever and wherever possible, at every level of organization, from small niches to entire regions of the sea.
Doing so starts with a very different system for measuring what’s going on in an ecosystem that’s being fished. Under MSY, fisheries managers gathered and analyzed information primarily about the species in the catch and little else; EBFM, on the other hand, requires them to measure the species of fish that are not caught as well as the ones that are. Many other factors, referred to as ecosystem indicators, need to be gathered and correlated as well, such as coastal upwelling, the pattern of wind and water movement that brings cool, nutrient-rich waters to the ocean’s surface near the coast, where they provide the diet for plankton, which in turn form the base of the food chain. Critically, EBFM also calls for measurement and correlation of societal trends on land as well as what’s happening in the water—merging the social and ecological pictures.
Such a holistic management regime, based on nonlinear, complex systems, has gained traction politically, but implementing EBFM remains challenging: Very few of the fisheries management institutions have the expertise, history, resources, and know-how to turn away from the more straightforward equilibrium models that underpin the maximum sustainable yield regime.
“We like to see the world as consisting of separate parts that can be studied in an isolated, linear way, one piece at a time,” Sugihara explains. “These pieces can then be summed independently to make the whole. Researchers have developed a very large tool kit of analytical methods and statistics to do just this, and it has proven invaluable for studying simple engineered devices. But we persist with these linear tools and models even when systems that interest us are complex and nonlinear. It is a case of looking under the lamppost because the light is better even though we know the keys are lost in the shadows.”
To get fisheries to switch, ecologists are now working to make EBFM principles easier to implement, and to do so they’re borrowing a central concept from finance: the portfolio model. Interacting species in the ecosystem are linked together as a set of correlated assets, and, just as in a financial portfolio, fisheries managers make decisions that balance risks and returns among them. The portfolio method suggests that a particular species should be assessed on the basis of how it contributes to the overall performance of the ecosystem, rather than in isolation—in much the same way that a particular financial stock contributes to the value of a diversified fund. In both cases, there is no intrinsic value to the asset or the species—context is everything.
Managing entire classes of interacting species in this way simplifies what are otherwise very complex and nonlinear dependencies. By focusing on the relationships among categories of species, and not their absolute numbers, a portfolio approach can more easily accommodate complex factors like environmental fluctuations and advances in fishing technologies, even as it makes potential risks to the system much more explicit. And a multispecies portfolio can also be recalibrated according to the changing conditions of the local ecosystem, much like a financial adviser might arrange a portfolio one way for a young, risk-tolerant active investor and very differently when that investor becomes a risk-averse retiree.
The portfolio approach has other benefits as well. In a financial market, investing in groups of assets allows investors to lower the variability of their returns, in exchange for giving up some of the upside. A properly balanced portfolio incorporates various strategies and hedges to smooth out the troughs and the peaks: During boom times, having commodities mixed in with aggressive growth stocks might keep you from generating sufficient returns to buy a Ferrari, but during a bust it will also keep you out of the poorhouse. Similarly, maintaining a balanced “portfolio” of species allows a fisheries manager to reduce the variability of the annual catch—a major benefit to both fish and fishermen—in exchange for forgoing some of the harvest in peak years. In one study of the Chesapeake Bay, ecological economists Martin Smith and Douglas Lipton found that, using just such a portfolio-driven method, over the forty-year period from 1962 to 2003, the bay’s fisheries managers could have generated better financial returns from fishing and reduced the season-to-season variability of the catch—a true win-win.
In matters of finance and fisheries, portfolio-driven approaches provide us with choices: If we seek a specific level of return, there are portfolios that can be designed to deliver that return while minimizing the risks; conversely, if we are comfortable with a particular level of risk, there are portfolios that can be designed at that level of risk to maximize our return.
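The underlying arithmetic is simple enough to show in a few lines. The Python sketch below uses invented figures for two assets, which could just as easily be two harvested species; the point is that mixing them damps the swings whenever they are less than perfectly correlated.

import math

def portfolio_stats(weights, means, stdevs, correlation):
    # Expected return and volatility (standard deviation) of a two-asset mix.
    w1, w2 = weights
    expected = w1 * means[0] + w2 * means[1]
    variance = (w1 * stdevs[0]) ** 2 + (w2 * stdevs[1]) ** 2 \
               + 2 * w1 * w2 * stdevs[0] * stdevs[1] * correlation
    return expected, math.sqrt(variance)

means, stdevs = (0.06, 0.06), (0.20, 0.20)  # illustrative: 6% average return, 20% volatility each
for corr in (1.0, 0.3, -0.5):
    ret, vol = portfolio_stats((0.5, 0.5), means, stdevs, corr)
    print(f"correlation {corr:+.1f}: expected return {ret:.1%}, volatility {vol:.1%}")

Only when the two assets move in perfect lockstep does the mix fail to reduce the volatility, a detail worth keeping in mind when the synchrony of precrash banks appears later in this chapter.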
ECOLOGY FOR BANKERS
Even as concepts from financial management begin to inform new strategies for improving the resilience of ecological systems like fisheries, the reverse is also occurring. A new generation of economists and finance experts are beginning to unearth important lessons in ecology for improving the resilience of the global financial economy. These insights are fueling the birth of an entirely new field, called ecofinance.
Consider, for a moment, some parallels between the Jamaican coral collapse mentioned earlier and the recent global financial crisis. Just as with the die-off of the sea urchins, the trigger of the financial collapse—Lehman Brothers’ $600 billion filing for Chapter 11 bankruptcy in mid-September 2008—was, in the context of a $70 trillion annual global economy, a relatively modest event. Much like the Jamaican reefs’ prior survival of Hurricane Allen, the financial markets had robustly weathered a prior decade filled with significant disturbances, including a dot-com bubble, oil shocks, and wars in the Middle East. Yet the decision to let Lehman fail, which we explore in greater detail later in this book, brought about an epidemic of fear and uncertainty that ground the world’s entire capital markets to a halt.
Just like the decades-long erosion of biodiversity on the Jamaican reef, the underlying reasons for the financial collapse also involved a number of creeping structural changes to the market that set the stage for catastrophic collapse. And, as we’ve seen in other RYF systems, like those in the human body or the Internet, when that fragility finally did appear, its effects were amplified by the hijacking of normally beneficial aspects of the global financial system, including its structure, its approach to risk management, its feedback mechanisms, its levels of transparency, and its product innovations.
• • •
In 2008, George Sugihara, the same ecologist who had unearthed the reasons for the California sardine collapse, and two other ecologists, Simon Levin and Robert May, published an essay in Nature titled “Ecology for Bankers.” They wanted to offer guidance for applying the tenets of holistic, ecosystem-based management to the financial markets. Like a densely planted tree farm, both systems had been managed for increased efficiency at the cost of increasing complexity and fragility. And both had implemented similar risk-management techniques: In the same way that maximum sustainable yield practices were applied to lone species of fish, it was common in the banking sector to apply single-firm risk analysis, looking only at individual banks without ever examining interconnection in the system as a whole. Worse, financial firms tended (and still tend) to calculate risk additively, meaning they take individual measures of risk for each transaction they make and add them up to arrive at a picture of their total potential exposure. This type of modeling makes the financial system seem much more secure than it really is; in fact, just as in marine ecosystems, the nonlinear nature of the financial network is multiplicative: Certain failures multiply the risks of other failures.
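A purely illustrative calculation shows how badly the additive habit can understate the danger. The probabilities below are invented; the structure of the comparison, not the particular numbers, is the point.

p_a = 0.02  # stand-alone chance counterparty A fails this year (hypothetical)
p_b = 0.02  # stand-alone chance counterparty B fails this year (hypothetical)

# Firm-by-firm, "additive" view: treat the two failures as unrelated, so the
# catastrophic outcome of both failing looks vanishingly rare.
both_fail_if_independent = p_a * p_b  # 0.0004, roughly one year in 2,500

# Network view: B has lent heavily to A, so if A goes down, B's own chance of
# failing jumps (here, hypothetically, to 20 percent).
p_b_given_a_fails = 0.20
both_fail_with_contagion = p_a * p_b_given_a_fails  # 0.004, roughly one year in 250

print(f"chance both fail, treated independently: {both_fail_if_independent:.2%}")
print(f"chance both fail, allowing contagion:    {both_fail_with_contagion:.2%}")

The everyday expected loss barely moves, but the tail event, a system-wide wipeout, has become ten times more likely than the additive picture admits.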
“Economics is not typically thought of as a global systems problem,” Sugihara explains. “Investment banks are famous for a brand of tunnel vision that focuses risk management at the individual firm level and ignores the difficult and costlier systemic perspective. Monitoring the ecosystem-like network of firms with interlocking balance sheets is not in a corporate risk manager’s job description. But ignoring these counterparty obligations and mutual interdependencies is exactly what prevented financial institutions from seeing and appropriately pricing risk, which in turn dramatically amplified the recent financial crisis.”
The essay in Nature came on the heels of an earlier, high-level conference in 2006 sponsored by the Federal Reserve Bank of New York, the U.S. National Academies, and the National Research Council to stimulate fresh thinking on systemic risk. Sugihara and Levin introduced the bankers in attendance to the concept of a trophic web—the way in which energy and nutrient flows in an ecosystem connect different species.
The basic concept will be familiar from grade-school science class: Aquatic plants convert the energy of the sun into food; the plants are eaten by small fish; the small fish by bigger fish. As fish and plants die, their bodies provide nutrients to smaller organisms and are recycled through the system. Trophic webs map these energy-transfer relationships in detail.
A similar framework can be used to analyze the transfer of value across a financial network. Yet, while creating detailed trophic webs is a common activity in ecology, mapping value flows in similar detail is less common for large-scale financial networks. To remedy this, in 2006, the Federal Reserve Bank of New York commissioned a study of the topology of the interbank payment flows inside what’s called the Fedwire system—the mechanism that U.S. banks use for transferring funds between them. Fedwire acts as the backbone of the U.S. financial system, processing an astounding average daily volume of almost 500,000 interbank payments, with a daily value of approximately $2.4 trillion (in 2010, the last year for which data was available at the time of writing). These payments can be thought of as the equivalents of energy and nutrients in the financial ecosystem being transferred between different species of institutions.
The sample used for the study involved approximately 700,000 transfers, and more than five thousand banks, taken from a typical day. The picture that emerges is startling: While most banks had a small number of connections, a few hubs had thousands. At the core of the network, just sixty-six banks accounted for 75 percent of the daily value of transfers. Even more telling, the network topology revealed that twenty-five of the biggest banks were completely connected—so intertwined that a failure of any one of them strongly suggested a failure for all, the very definition of “too big to fail.”
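In miniature, and with invented payment records, the kind of tally the Fedwire study performed looks something like the Python sketch below: count how much of the day’s value touches each institution and how many counterparties each one has, and the hubs announce themselves immediately.

from collections import defaultdict

payments = [  # (sender, receiver, $ millions), all figures and names invented
    ("HubBank1", "HubBank2", 900), ("HubBank2", "HubBank1", 850),
    ("HubBank1", "SmallBankA", 40), ("SmallBankA", "HubBank1", 35),
    ("HubBank2", "SmallBankB", 30), ("SmallBankB", "HubBank2", 25),
    ("SmallBankC", "HubBank1", 20), ("SmallBankD", "HubBank2", 15),
    ("SmallBankA", "SmallBankC", 10),
]

counterparties = defaultdict(set)   # who deals directly with whom
value_touched = defaultdict(float)  # value flowing into or out of each bank
for sender, receiver, amount in payments:
    counterparties[sender].add(receiver)
    counterparties[receiver].add(sender)
    value_touched[sender] += amount
    value_touched[receiver] += amount

total = sum(amount for _, _, amount in payments)
hubs = sorted(value_touched, key=value_touched.get, reverse=True)[:2]
value_via_hubs = sum(amount for s, r, amount in payments if s in hubs or r in hubs)

print("busiest institutions:", hubs)
print(f"share of the day's value touching them: {value_via_hubs / total:.0%}")
print({bank: len(peers) for bank, peers in counterparties.items()})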
What’s true within the core of the U.S. financial sector is increasingly true at an international level as well. An analysis of the connections between eighteen national financial markets reveals that, over the last two decades, central finance hubs around the world, like London and Hong Kong, have swelled to roughly fourteen times their earlier size; at the same time, the links between them have increased sixfold.
The increased size, connectivity, and volume of capital flowing through the financial system are not intrinsically bad—in fact, in healthier circumstances, they ensured that capital could flow where (and when) it was most needed, generating improved returns and keeping the system liquid. This connectivity was celebrated in the precrash era as a tool for dispersing, not concentrating risks.
The market’s densely connected configuration, much like the Internet’s, ensured that randomly shutting down one of the innumerable banks in the global system would not be likely to cause systemic problems, because, statistically, the vast majority of banks in the network are at the end of spokes connected to a very limited number of hubs. But flip one of those central hubs (a rare and dangerous occurrence), and you might not only take down the thousands of banks directly connected to it, but the other hubs as well, along with thousands of institutions connected to them. Like a game of Jenga, pull a random plank and almost certainly nothing will happen. Pull the wrong one, and the entire robust-yet-fragile edifice comes crashing down.
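The Jenga point can be made in a few lines as well. The hub-and-spoke network below is invented for illustration: remove a random spoke and the system barely notices; remove the hub and it shatters.

def largest_connected_group(links, removed):
    # Size of the biggest cluster of banks still reachable from one another.
    alive = {bank for bank in links if bank not in removed}
    best, seen = 0, set()
    for start in alive:
        if start in seen:
            continue
        stack, group = [start], set()
        while stack:
            node = stack.pop()
            if node in group:
                continue
            group.add(node)
            stack.extend(peer for peer in links[node] if peer in alive)
        seen |= group
        best = max(best, len(group))
    return best

hub = "HubBank"
links = {hub: [f"Bank{i}" for i in range(1, 9)]}
links.update({f"Bank{i}": [hub] for i in range(1, 9)})

print(largest_connected_group(links, removed=set()))      # 9: everyone can reach everyone
print(largest_connected_group(links, removed={"Bank3"}))  # 8: a random spoke fails, little changes
print(largest_connected_group(links, removed={hub}))      # 1: the hub fails, the network shatters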
And that’s just what happened: For several decades, a string of moderate and even serious failures was capably handled by the financial system, from the dot-com collapse to oil shocks—each one increasing confidence in the ability of the system as a whole to manage disruption. Then a hub failed. Panic spread throughout the network, and things quickly ground to a halt. As Levin says, “It’s not that the hubs were too big to fail, but too interconnected to be allowed to fail [italics added].” When calamity struck, there was no way to separate or decouple them from one another.
These interconnections in the financial system didn’t just take the normal form of funds flowing between banks to cover daily activities. They were anchored by new, sophisticated debt and insurance derivatives: collateralized debt obligations (CDOs) were the securities that allowed banks to cut up, repackage, and then sell one another the debt from risky U.S. home mortgages; credit default swaps (CDSs) were the insurance contracts that tied these banks to one another in massive webs of financial interdependence. Together, they were the financial equivalent of nitric acid and glycerin—in small amounts they keep your heart pumping; in large amounts: boom.
FINANCIAL CLUSTER BOMBS
A CDO is a financial instrument that can best be understood by imagining a set of wineglasses stacked in a pyramid. When champagne is poured over the pyramid, the glass at the top is filled first, the glasses in the middle are filled next, and those at the bottom are filled last. A CDO was an equivalent financial instrument, but instead of disbursing wine into wineglasses, it poured the monies from mortgage payments into a set of specialized bonds.
To create a CDO, a bank would bundle together a group of mortgages, held by regular U.S. homeowners. Each month, as these homeowners wrote their monthly mortgage checks, the banks would pool these payments together and make payouts to a series of bonds called tranches that were stacked up just like the wineglasses. The tranche at the top of the chain, like the glass at the top of the pyramid, got paid first, then the one following it, and so on, until either the tranche at the bottom was paid or the pool of funds was exhausted.
By definition, the top tranche, at the front of the line to be repaid, was the least risky, so it earned both the best rating (AAA) from ratings agencies like Moody’s and Standard & Poor’s and also the lowest rate of return, perhaps 2 percent. The bottom tranche, on the other hand, was the most risky: If mortgage holders in the CDO pool stopped paying their mortgages, the loss would be felt in that tranche first—so it earned both the lowest rating (say BB) and the highest rate of return, perhaps 10 percent.
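The waterfall itself is easy to simulate. In the Python sketch below, the tranche sizes and the pool of incoming mortgage payments are invented round numbers; the mechanics, not the figures, are what matter.

def waterfall(collected, tranches):
    # Pour the collected mortgage payments into the tranches in order of
    # seniority; each tranche is (name, amount_owed).
    payouts = []
    for name, owed in tranches:
        paid = min(owed, collected)
        payouts.append((name, paid))
        collected -= paid
    return payouts

tranches = [("senior / AAA", 60), ("mezzanine / A", 30), ("junior / BB", 10)]

print(waterfall(100, tranches))  # every homeowner pays: all three glasses fill
print(waterfall(70, tranches))   # 30% of payments missing: the junior tranche gets
                                 # nothing and the mezzanine takes a loss, while the
                                 # AAA glass at the top of the pyramid stays full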
So far, so good. But from there, CDO engineering quickly entered the twilight zone. A bank might, for example, take the lowest-rated tranche (BB) of a particular CDO (let’s call it Lucifer) and turn it into its own CDO (let’s call this bottom tranche Damien). Even though it was composed of junk, Damien, through the magic of financial engineering, was divided into its own set of tranches, the top one of which was awarded its own triple-A rating. This absurdity masked the fact that the underlying asset upon which it was based—Lucifer’s bottom-most BB tranche—was toxic, high-risk junk. It was like saying “here is the safest house that you can build with this toxic waste” and conveniently forgetting to say the toxic-waste part. Or, if you prefer, like taking a dozen hobbled horses from the glue factory, lining them up according to their speed, and calling the fastest one a Thoroughbred.
When some homeowners originally bundled together under Lucifer stopped being able to pay their mortgages, not only did its BB tranche bondholders lose out, so did those holding all of Damien’s tranches—even the ones rated AAA. And there’s the true tragedy: Those holding Damien’s triple-A’s were well-managed, risk-averse pension funds, municipalities, and 401(k) plans.
If CDOs were the financial equivalent of cholesterol-laden junk food mislabeled as health food, the credit default swaps were the mechanism by which the banks ensured that a blockage in any artery would give a heart attack to all. CDSs are insurance contracts, much like the contract that you might sign to insure a car, a home, or your own life. In such arrangements, you make a monthly payment to the insurance company, and, in the event of disaster, it makes you whole. Similarly, these insurance contracts enabled a bank to insure against the loss in value of a stock or bond it might purchase. If Bank A buys $10 million in corporate bonds, and they lose $2 million in value, then the insuring Bank B would make up the difference. In the meantime, Bank A would pay a premium on this insurance to Bank B, much as you pay your health and car insurance premiums.
CDSs had several crucial differences from traditional insurance, however. First, a CDS contract could be traded from investor to investor with no oversight or even regulations ensuring that the insurer had the ability to cover the losses when and if it needed to. By calling the contract a swap, and not insurance, the investors in CDSs were able to avoid the capital reserve requirements and regulatory oversight of the traditional insurance industry. (This was presumably the “innovation” at CDSs’ core.)
Second, CDSs allowed firms not only to insure against the possible default of their own investments, but to insure against the possible default of another company’s assets—akin to taking out an insurance policy on your neighbor’s Ferrari. Firms could use swaps as a tool for speculation—to bet a company would fail. This practice was made illegal in traditional insurance markets as far back as the 1700s, before which it was legal for individuals to buy insurance on British ships that they didn’t own, creating—quelle surprise—a rash of perfectly seaworthy ships mysteriously sinking to the bottom of the Thames. In its place, Parliament codified the notion of “insurable interest,” the requirement that you have an actual economic interest in the asset being insured. It was a concept that reigned unchallenged for two and a half centuries—until the rise of the CDS.
Finally, CDS contracts were sold and traded privately, or “over the counter.” While they added enormously to the risk profile of the institution doing the insuring, they didn’t show up on the traditional balance sheet. When the crisis came, nobody knew who owed what to whom and what it meant for anyone’s bottom line.
In theory, CDOs and CDSs were originally designed to allow the market to do two things that are quite beneficial: first, to distribute risks to those who were most capable and willing to take them, and second, to allow banks to diversify their portfolios by mixing and matching some of their own activities with one another’s. A typical big bank might find itself originating its own mortgages, buying them from others, selling mortgage-backed securities that blended the two, and insuring those of another bank against default. All of this looked, superficially at least, like diversification—a wise strategy for balancing efficiency and robustness.
But, in allowing debt, credit, and risk to be sliced into tranches, packaged, bought, repackaged, sold, and resold, these instruments also made the dependencies between institutions mind-bogglingly complicated. The chain of custody for the underlying assets lengthened to the point of incomprehensibility.
Thus, when the crisis hit, none of the banks could be quite sure if the other institutions with which they had contracts might also be enmeshed in other contracts that left those institutions on the hook in some potentially catastrophic way. This is known as the problem of counterparty risk—not the risk that you’ll become a deadbeat, or the risk that your partners will become deadbeats, but the risk that some of your partners’ partners will become deadbeats.
In the low-risk, precrash world, from an individual bank’s perspective, having a contract with another bank—which itself had many other business relationships—was perceived as a good thing: It was a measure of diversification and suggested lower risk. After all, what’s the likelihood of a significant number of contracts in a large portfolio defaulting all at the same time? In the high-risk, postcrash world, however, having a contract with another bank with lots of counterparties was nightmarish: After all, it might be holding a contract with an unknown third party that might, at any moment, blow up in its face. How could you possibly determine how creditworthy the other bank was? How could you (or it) know what its obligations were? Or those of its counterparties’ counterparties? How could you trust anybody?
When the crash came, a sense of transparency disappeared overnight, and with it went the most important variable in the system, trust—a theme we will revisit later in this book.
It didn’t help that the derivative contracts themselves were mind-bogglingly complicated. The simplest of these ran some two hundred pages—the most advanced varieties required reading in excess of one billion pages. (Reading one page a minute, it would take you slightly more than 1,900 years to read a single contract for one of these products.) Firms had forgone due diligence and simply swallowed the contracts whole. After the crash, figuring out who owed what to whom wasn’t just hard, it was impossible. It’s unsurprising that the institutions that vaporized amid the destruction—Lehman, Bear Stearns, and AIG Financial Products—had among the largest counterparty exposures. They were attached to an anchor of unknowable size.
The rise of these complex derivatives had another, subtler impact as well. Their promise—of higher returns and dramatically lower risk—proved so intoxicating to financial organizations of all stripes that these firms effectively homogenized their methods of generating revenue and their risk management strategies to take advantage of them. Many different kinds of actors in the market—from commercial banks to hedge funds—started getting into one another’s businesses, holding the same classes of assets and liabilities, in the same proportions, seeking out the same goal, in the same way. As each market player began to internalize more and more of the ever-increasing complexity of the market as a whole, market risks became internal risks. This was diversification without a difference. Right under everyone’s noses, the complex ecosystem of the global financial markets was turning into a monoculture—of very complicated lemmings.
THE TELLTALE SIGNS OF A SYSTEM FLIP
According to Sugihara, in a complex system, there are telltale warning signs of a critical transition, or system flip, and they were visible in the run-up to the financial crisis. One is a phenomenon called critical slowing—the tendency of a system to become unstable near its threshold point. “When a system is under stress, it can be thrown out of equilibrium more easily, and it is slower to recover. Without sufficient recovery time, small perturbations can be amplified until the system is oscillating wildly out of control—even squealing from one stable state to another—like a car being oversteered on an icy patch.”
Just before such destabilization occurs, a system paradoxically may experience synchrony, as agents within it briefly behave in lockstep just before being thrown into chaos. Synchrony can be seen in the brain cells of epileptics, for example, minutes before the onset of a seizure, and it was evident in the financial markets prior to the crash. By the height of the credit boom, from 2004 to 2007, performance across sectors of the financial system was correlated by more than 90 percent, a reflection of the self-similarity of the various market participants. “This was clearly an early indicator of impending danger,” says Sugihara.
“Synchrony is evident when incentives or pressures lead individual actors to fall into step and make similar choices,” adds Levin. “In unsynchronized populations, some individuals thrive while others are in decline; in synchronized populations, a collapse in one place translates into a collapse in all places.”
In a healthy financial system, interconnectivity, risk management, diversification, and product innovation typically act as shock absorbers, dispersing risks and mitigating the impact of inevitable failures. But in the lead-up to the crash, the diversity of the market was slowly eroded, the actions of the market players became synchronized, and the dependencies between them became unknowable. Then, as with the collapse of Jamaica’s sea urchins, the collapse of Lehman introduced a virus of unprecedented lethality and speed that decimated the financial system’s most critical resource: trust. When that happened, the system’s shock absorbers were hijacked and turned into shock amplifiers—spreading the contagion of uncertainty rather than the perception of safety.
In response, the bankers did what seemed rational in such an uncertain situation: They hoarded cash and desperately tried to sell the depressed assets they had on their books. Yet most of the banks had followed similar business strategies before the crash, and so their most rational responses to it were also almost identical. In a densely connected network filled with clones, both responses, taken en masse, made the situation worse for everyone. Hoarding cash caused a liquidity crisis that made it harder for all banks—including the hoarders—to meet day-to-day obligations. The mass sell-off of depressed assets further accelerated the decline in value of the remaining assets on everyone’s books, harming the balance sheets of healthier banks and pulling more institutions into the vortex. In this light, it’s easy to see why, even with hundreds of billions in bailouts from the U.S. Treasury, banks were so reluctant to start lending again.
The global financial markets may share various dynamics with robust-yet-fragile ecosystems, but what does that tell us, if anything, about how to avert or lessen the next crisis? After all, doesn’t the RYF nature of the market mean that the risk of future calamity is built into the equation?
Increasingly that question is being asked not only by ecofinance theorists like Sugihara and Levin, but by leaders with more formal influence over global financial policy, including an unexpected voice in the world of risk management, working in one of the most traditional financial institutions in the world: Andrew Haldane, the executive director of financial stability for the Bank of England.
Like Levin and Sugihara, Haldane attributes the problems with the precrash financial order to the complexity of the system—the hornet’s nest of interconnections between institutions—and the homogeneity of those institutions’ business strategies. And his recipe for improving the resilience of the financial network bears striking resemblance to ecologists’ prescriptions for ecosystems. “We need more complete, holistic measures of the health of the financial system and the dependencies between various institutions within it; we need to improve communications about it with the public at large in times of impending crisis or system flip, and we need to take steps to improve the financial system’s biodiversity.”
SEEING THE FINANCIAL WHOLE
Creating more holistic, EBFM-like measures of health for the global financial network begins with collecting a great deal more data. “Today, risk measurement in financial systems is atomistic—each institution, each node, is judged by itself,” says Haldane. “After a systemic failure, this leaves policymakers—and indeed everyone—navigating in a dense fog.”
To clear the air, regulators and governments need to be able to see the full number, size, and type of connections, flows, and dependencies between various institutions and markets at a glance. They need financial observatories to continuously replicate the Fedwire study, but on a much larger, more comprehensive, more timely, and more international scale.
Doing so will require building distributed sensor networks for the global financial system and bringing many more transactions into the light of day. This is already starting to happen. In the United States, for example, the package of financial reforms signed into law after the 2008 crash requires that derivatives like credit default swaps be cleared through mechanisms designed to mitigate the problem of counterparty risk. These mechanisms are called centralized counterparty clearinghouses, or CCPs.
In the precrash era, each bank dealt directly and independently with every other, so when the crash came one bank could never be sure if its counterparties had signed more contracts than they could afford to pay. The clearinghouse addresses this murkiness by standing in the middle of every trade, becoming the buyer to each seller and then turning around and becoming the seller to each buyer. If one of the parties goes bankrupt, the CCP covers its side of the contract.
In theory, this achieves several things at once. First, it makes mapping the complicated web of relationships between financial parties much easier, as the clearinghouse sees every relationship between every buyer and every seller in its domain. “Clearinghouses compress the highly dimensional web of financial obligations to a sequence of bilateral relationships with the central counterparty—a simple hub and spoke network. The lengthy chain of relationships is condensed to a single link,” says Haldane. Thus, in a crisis, a bank would not have to worry whom its business partners might also have dealings with—it would have dealings only with the clearinghouse. “Provided that link is secure—the hub’s resilience is beyond question—counterparty uncertainty is effectively eliminated,” he adds.
CCPs also simplify and cut down the thicket of counterbalancing claims between parties; if the First National Bank owes the Second National Bank $50 million, spread across a dozen derivatives contracts, and Second National owes First National a similar amount over a different set of contracts, the CCP can offset the claims through a process called netting. (In the wake of the financial crash, efforts were undertaken to do this manually in the CDS market by “tearing up” redundant, offsetting claims, reducing the volume of contracts by more than 75 percent. A centralized clearing facility would make this process automatic.)
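Netting is simple arithmetic, as a tiny sketch with hypothetical obligations shows: two gross exposures collapse to a single, much smaller net exposure once the clearinghouse stands in the middle.

obligations = {  # gross amounts owed, in $ millions; both figures invented
    ("First National", "Second National"): 50,
    ("Second National", "First National"): 48,
}

gross = sum(obligations.values())
net = abs(obligations[("First National", "Second National")]
          - obligations[("Second National", "First National")])
print(f"gross obligations outstanding: ${gross} million; net after clearing: ${net} million")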
Centralized counterparty clearinghouses also provide a mechanism to ensure that each party can actually pay out its contracts, because the clearinghouse requires each party to set aside sufficient capital to fulfill the contract as if it had to be settled each day—commonly known as mark-to-market accounting. Additionally, clearinghouses make it far less likely that any particular firm will become dangerously overexposed—as AIG and Lehman did—without anyone being able to tell, since the clearinghouse is in a position to see such buildups as they occur.
In addition to using central clearinghouses, other steps toward greater transparency have also recently been mandated in the United States, including the use of an open exchange for such derivatives, akin to a stock exchange. The primary benefit of exchange-based trading is in setting a market price. In the precrash era of direct deals between banks, nobody knew what anyone else was paying for these contracts. By moving to a market-based system, everyone can see what the going rate is, and companies are far less likely to get gouged.
Not everyone is happy with such an arrangement. Making derivatives trades more transparent in turn makes the underlying investment strategies of some investment firms more visible, and many of those strategies have previously depended for their success on being kept private. And Wall Street traders dislike exchange-based trading for another, simpler reason: When nobody knew what anyone else was paying for derivative contracts, they could charge whatever they wanted. The pricing transparency that comes with selling such contracts via exchanges naturally erodes their profits.
Nor is interposing a central counterparty clearinghouse in derivatives trading a panacea—it does not entirely eliminate the fragility in the financial network, but rather relocates it to the risk-management strategies of the central clearinghouses themselves. If managed well, a CCP can effectively mitigate the risk for a whole market; if not, its very centralized position could give it the starring role in the next crisis.
But a centralized counterparty clearinghouse does enable the collection of vastly more information and greater transparency about the activities of the market. This data can be combined with many other sources to inform sophisticated, system-wide measurement tools that, much like EBFM, reveal the connectivity of institutions, not just their size and behavior.
In public health, practitioners often use informal social network mapping to identify and then inoculate super spreaders, those most at risk for initiating or propagating a contagion. So too in finance. “In early 2007, it’s doubtful whether many of the world’s largest financial institutions were more than two or three degrees of separation from AIG,” says Haldane. “And in 1998, it’s unlikely that many of the world’s largest banks were more than one or two degrees of separation from Long-Term Capital Management, originators of the last major crisis. Mapping the links in the financial network might have identified these financial black holes before they swallowed too many planets.”
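Once such a map exists, the degrees of separation Haldane describes are easy to compute; the sketch below walks outward from a hub institution across an entirely invented toy network (the names and links are hypothetical, not data):

    from collections import deque

    # A toy, wholly hypothetical exposure network: each institution is linked
    # to the counterparties it holds contracts with.
    network = {
        "AIG":    ["Bank A", "Bank B", "Bank C"],
        "Bank A": ["AIG", "Bank D"],
        "Bank B": ["AIG", "Bank D", "Bank E"],
        "Bank C": ["AIG"],
        "Bank D": ["Bank A", "Bank B", "Fund F"],
        "Bank E": ["Bank B"],
        "Fund F": ["Bank D"],
    }

    def degrees_of_separation(graph, source):
        """Breadth-first search: how many links from `source` to everyone else."""
        distance = {source: 0}
        queue = deque([source])
        while queue:
            node = queue.popleft()
            for neighbor in graph.get(node, []):
                if neighbor not in distance:
                    distance[neighbor] = distance[node] + 1
                    queue.append(neighbor)
        return distance

    print(degrees_of_separation(network, "AIG"))
    # In this toy map, nothing is more than three links from the hub,
    # which is exactly what makes such a node a candidate super spreader.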
Yet troublingly, prior to the crash, there was virtually no correlation between the size of the most important financial institutions and how much they held in reserve to cover potential problems. In spite of—or more likely because of—their supposed importance, these critical mega institutions seemed to have held less in reserve as a percentage of assets than their smaller peers. “One explanation is that, because big banks thought that they were diversifying, they thought they could afford greater risk,” says Haldane. “Another explanation is markets allowed these banks to operate less conservatively because of the implicit promise of government support if things turned for the worse. Either way, there was no targeted vaccination of the super spreaders of financial contagion.”
The best way to determine the appropriate levels of inoculation for such super spreaders is to model crises before they occur. This was undertaken in limited form in 2009 when the U.S. Federal Reserve began stress-testing banks that received bailout assistance, to make sure they had sufficient capital to weather difficult conditions such as sustained high levels of unemployment or ongoing home mortgage defaults. Stress testing gave a snapshot of each bank’s health and risk exposure—and in the first round of testing, regulators discovered that more than half of the banks tested required additional capital. But even this limited stress testing is only a once-a-year affair, like an audit, and doesn’t analyze how the system as a whole might be impacted if any of its subjects were to collapse. Such tests and simulations are regular occurrences for other network systems like electrical utilities, the military, and the air transportation system—and will need to become a regular feature of the financial system. And not just in a few market hubs, but everywhere.
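A system-wide version of such a test would, at a minimum, ask what happens to everyone else when any single institution fails. A deliberately crude sketch of that kind of cascade, with invented capital buffers and interbank exposures (not a regulatory model):

    # Invented capital buffers and exposures, in millions; a crude illustration
    # of a system-wide contagion test, not a regulatory model.
    capital = {"A": 10, "B": 8, "C": 4, "D": 3}
    exposure = {          # exposure[x][y]: what x loses if y defaults
        "A": {"B": 6, "C": 2},
        "B": {"C": 5, "D": 1},
        "C": {"D": 4},
        "D": {"A": 2},
    }

    def cascade(first_failure):
        failed = {first_failure}
        spreading = True
        while spreading:
            spreading = False
            for bank, counterparties in exposure.items():
                if bank in failed:
                    continue
                loss = sum(amount for cp, amount in counterparties.items() if cp in failed)
                if loss >= capital[bank]:   # losses wipe out this bank's buffer too
                    failed.add(bank)
                    spreading = True
        return failed

    for bank in capital:
        print(f"If {bank} fails first, the failure spreads to: {sorted(cascade(bank))}")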
Still, these are compensatory measures. They contain and minimize the damage of a systemic fragility once it has surfaced, but they can’t flip the system back into its preferred state. Is there anything that can help the financial markets better self-regulate? The coral reefs may—once again—hold an intriguing clue.
THE BATFISH AND THE WIR
David Bellwood is obsessed with fish. His friends constantly poke fun at his ability to bring any conversation back around to them. When he was younger, growing up in northern England, he started an aquarium to observe reef fish. The interest propelled him all the way through an undergraduate degree at the University of Bath, where he did an honors project on the aquarium industry and then, later, to a PhD and professorship at James Cook University in Australia, focusing for more than twenty years on what is now his formal area of expertise and informal object of obsession: the parrotfish.
It’s easy to see why Bellwood would fixate on this beautifully colored fish. The parrotfish is one of the mainstays of the coral reef system, and it can do some genuinely interesting things: Not only can it change gender—female parrotfish can transform themselves into males when their dominant male leader dies—but certain species of parrotfish have developed the ability to envelop themselves in a transparent cocoon, made from a viscous substance that comes out of an organ in their heads. This homemade sleeping bag disguises the scent of the parrotfish at night, leaving it safely hidden from nocturnal predators.
Of far greater interest to Bellwood, however, is the parrotfish’s function on the reef: to consume large quantities of algae extracted from gnawed chunks of coral. Parrotfish are voracious algae eaters—Bellwood refers to them as “lawn mowers”—and on a healthy reef they play a critical role in regulating the competitive relationship between algae and corals. By cleaning the system of excess algae and enabling new corals to compete for space, parrotfish keep a coral-dominated reef in a continually self-regenerating state and prevent it from flipping into an algae-dominated state. Particularly after the difficult lessons of Jamaica, protecting the parrotfish and other herbivores has become a centerpiece of many reef resilience strategies.
Yet in recent research conducted on the Great Barrier Reef, Bellwood has discovered that the mechanisms that regulate the reef are more complex than marine ecologists had previously thought. His discoveries have significant implications for future reef management, and for other fields as well.
In controlled experiments, Bellwood and other researchers set up large open cages on top of the reefs—each one the size of a small office. In each of the cages they planted dense macroalgae assays, which simulated a reef system fully dominated by algae—what might be described as the parrotfish equivalent of an all-you-can-eat buffet.
Bellwood and his team set up multiple underwater cameras to film what came next. “We thought there was going to be a spectacular circus because we had forty-five species of herbivorous fish in the vicinity, and huge densities of parrotfish. This particular location should have brought a feeding frenzy to the algae.”
The team sat waiting in anticipation for the first few hours. That anticipation soon turned to bafflement. The only thing they could see on the video footage was a thick cluster of algae waving in the current. Not only were there no parrotfish, there were no fish to be seen at all.
“Lights! Camera! Action! And then . . . nothing. Were the fish reproducing or gone somewhere else? I mean, this was three-meter-high, beautiful algae. These are herbivores. These fish should have been having a feast.”
What the team did notice, over the next few weeks, was that the algae were slowly getting thinner. “You know how people get older and you can slowly start to see their scalp? Well, the algae were like that. You still had the bits that were reaching to the top but it was getting really thin in the middle. Eventually, there was very, very little on the footage.”
Bellwood was expecting the algae to be gobbled down in a frenzy, primarily by parrotfish, in twenty-four hours. Instead, over a period of three weeks, the algae were slowly nibbled and eroded until they finally just collapsed—but what was eating them? The results were so curious that Bellwood and the other scientists didn’t even have a framework to understand them. All they could do was go back and review the footage again and again.
Then, during one of their repeated viewings, they spotted something altogether unexpected. From out of the murk, a completely unrelated species, a small black fish with a golden fringe (called a pinnate batfish), appeared on the footage. And then, to everyone’s shock, it slowly started feeding on the algae.
“We were stunned for two reasons. To begin with, batfish aren’t supposed to eat algae. They are an invertebrate feeder, not an herbivore. Secondly, we could never catch them doing it. As soon as we got in the water, they would swim away. It was like the Far Side cartoon where the cows are having a conversation and then the car comes and they all suddenly go back to eating the grass.”
Bellwood’s research suggests that while the parrotfish act as lawn mowers for the reef, they can do so only when the reef is in the healthy, coral-dominated state. When the system has flipped and algae have taken over, they’re no longer able to provide this function. And that’s when the batfish—which normally doesn’t eat algae—is “deployed” on the reef to correct the imbalance. One prevents a flip; the other reverses it.
Bellwood likens the reef to a golf course. “Under ordinary circumstances, the equipment you need is a lawn mower because it keeps the nice grass down. The coral reef lawn mowers are the parrotfish, surgeonfish, and other species that graze the turf and keep the greens tidy. But if, for some reason, your mowers are broken, the weeds on the golf course start to get big, and soon there is a back garden overgrowth. Now, by the time the lawn mowers show up, they won’t work. You need a chainsaw and a bush saw. Surprisingly enough, we discovered that this is how the batfish functions.”
The batfish are part of what Bellwood calls a “sleeping functional group,” a species or group of species capable of performing a particular functional role—but which do so only under exceptional circumstances.
Bellwood’s team was both euphoric and confused by the discovery of these sleeper groups. The good news is that fish exist with the capability to help reverse a system flip from an algae-dominated state back to a coral-dominated state. The bad news, however, is that in many cases, scientists have no idea which fish they are. Because the resilience function of these species emerges only in exceptional circumstances, no one before Bellwood had ever connected the batfish with coral-algal interactions in more than fifty years of research.
What might it look like for the financial system to have such sleeper functional groups of its own—countercyclical strategies lying dormant within the financial network, stirring only when the system flips, as it did in 2008? Finance may just have found its batfish in tiny Switzerland, in the form of a unique alternative currency called the WIR.
• • •
If you have traveled around Switzerland at any point in the last few decades, you may have noticed stickers in the windows of local shops and businesses: “We Accept WIR.” Perhaps you caught sight of one of WIR’s glossy catalogs displaying a full list of all of its participating businesses. If you stayed in a hotel, the clerk may have even asked you if you wanted to pay through WIR. No doubt she or he soon rescinded the offer upon realizing that you were not a native of Switzerland.
What is this mysterious alternative currency and how does it work?
Bernard Lietaer, a Belgian economist with more than twenty-five years of expertise in currency systems, has been a keen observer of this Swiss alternative currency for many years now. He is not alone. A number of macroeconomic analysts are starting to look more carefully at the way it functions.
WIR—Wirtschaftsring, or “economic circle” in German—began in the depths of the Great Depression. In the wake of the stock market crash of 1929, total world trade plummeted—by 20 percent in 1930, another 29 percent in 1931, and another 32 percent in 1932. Unemployment reached 30 million. Wealth that was once all but assured disappeared into thin air, and banks that seemed certain to stand suddenly collapsed, as stocks lost nearly 90 percent of their value.
Switzerland was pulled into the crisis more slowly than some of the other European countries, and it was slower to recover. By 1934, when the United States and Germany were showing faint signs of recovery, Switzerland remained in a malaise. The number of bankruptcies hit record heights and trade and tourism took deep hits. The Swiss railroad, SBB, carried a deficit double that of the federal budget. Unemployment was the norm, not the exception: In 1934, there was one job opening for every seventy-three official job seekers.
In the midst of all this, sixteen businessmen decided to create a solution for themselves. All of them, along with their clients, had been informed by their banks that their credit lines were no longer open to them; without credit, their businesses stood on the brink of bankruptcy. Rather than fail, they decided to set up a complementary form of currency.
Their result, the WIR, is a mutual credit system: a debt in WIR is settled either by selling goods or services to someone else in the network or by paying it off in full in the national currency. Over time, this network expanded to include one-quarter of all the businesses in the country. Today it is a thriving barter network and a well-recognized complementary currency in Switzerland.
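A minimal sketch of how such a mutual credit ledger works, with invented members and transactions: every sale inside the network creates a matching credit and debit, so the balances always sum to zero, and a member works off its debt by selling to other members (or settles it in the national currency).

    # A toy mutual credit ledger; member names and transactions are invented.
    balances = {"baker": 0, "carpenter": 0, "hotel": 0}

    def trade(buyer, seller, amount):
        """Record a sale inside the network: the buyer takes on (or deepens) a
        debt in WIR; the seller earns credit it can spend with any other member."""
        balances[buyer] -= amount
        balances[seller] += amount

    trade("hotel", "carpenter", 500)   # the hotel buys repairs, paying in WIR credit
    trade("carpenter", "baker", 200)   # the carpenter spends part of that credit
    trade("baker", "hotel", 150)       # the baker books rooms, shrinking the hotel's debt

    print(balances)                    # {'baker': 50, 'carpenter': 300, 'hotel': -350}
    print(sum(balances.values()))      # always 0 by construction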
An analysis of more than sixty years’ worth of data on the WIR by macroeconomist James Stodder has shown that whenever there has been a recession, the volume of business in this unofficial currency has expanded significantly, cushioning the negative impact of lost sales and increased unemployment. Whenever there has been a boom, business in national currency has boomed, and activity in the unofficial currency has dropped proportionally again. In the past, people have attributed the success of the Swiss economy to a national character of pragmatism and thrift. Stodder’s study offered unexpected proof that the secret behind the country’s legendary stability and economic resilience is the spontaneous countercyclical behavior of this small alternative currency system.
WIR functions precisely like Bellwood’s batfish—a latent, just-in-case contingency system that is activated when the economy is at or near a phase shift. Lietaer is an advocate for more complementary currencies that function like the WIR—business to business—in the European Union and the United States. “The substance that circulates in our global economic network—money—is maintained as a monopoly of a single type of currency (bank-debt money, created with interest). Imagine a planetary ecosystem where only one single type of plant or animal is tolerated and officially maintained and where any manifestation of diversity is eradicated as an inappropriate ‘competitor’ because it is believed it would reduce the efficiency of the whole.”
• • •
The batfish and the WIR illustrate, in very different ways, an essential strategy of many resilient systems: They are embedded countercyclical structures that can respond proportionally, and in the same time signature, to disruptions as they emerge. These structures are part of an essential inventory of diverse tools often found in resilient systems: The batfish is an example of biodiversity, the WIR, of economic diversity. Like all inventory, this diversity imposes a carrying cost. And because such structures are latent in a system—they are only called upon when a crisis emerges—it can be difficult to place a value on them when things are in a more humdrum state of affairs. In a relentless quest for greater systemic efficiency, this diversity can be lost and unmourned—until it’s too late.
As we’ve seen, the fragility and resilience of most systems begins with their structure. The complexity, concentration, and homogeneity of a system can amplify its fragility; the right kinds of simplicity, localism, and diversity can amplify its resilience. The lens of resilience suggests, for example, that what’s needed is a smaller, simpler, more accountable, and more decouplable financial system, with genuinely diverse participants, that is more closely aligned with its original purpose—providing liquidity to organizations and individuals—than with its more recent one, which seemed to be engineering wealth from thin air.
But resilience takes more than just the right structure—it takes the right kinds of processes and practices: measuring the whole health of the system, like EBFM; modeling and stress-testing the system, like the Fedwire study; scanning for emerging disruptions and mobilizing the right, inclusive responses when they strike, like the proposed financial observatories; building in feedback and compensatory systems, like the batfish and the WIR, that help keep the system in check. Most of all, it requires keeping the system dynamic and reconfigurable. As we’ll see next, it is in such dynamism that many systems’ resilience rests.