The digital revolution produces more data, more speed, more connectivity, and more complexity. Besides creating new opportunities, how will this change our economy and our societies? Will it make our increasingly interdependent systems easier to control? Or are we heading towards a systemic collapse? In order to figure out what needs to be done to fix the world’s ills, we must explore why things, as they currently stand, go wrong. The question is, why haven’t we learned how to deal with them yet?
These days, we seem to be surrounded by economic and socio-political crises, by terrorism, conflict and crime. More and more often, the conventional “medicines” applied to these global problems turn out to be ineffective or even counter-productive. It is increasingly evident that we approach these problems with an outdated understanding of our world. While the world may still look much as it did for a long time, it has changed inconspicuously but fundamentally.
We are used to the idea that societies must be protected from external threats such as earthquakes, volcanic eruptions, hurricanes and military attacks by enemies. Increasingly, however, we are threatened by different kinds of problems that come from within the system, such as financial instabilities, economic crises, social and political unrest, organized crime and cybercrime, environmental change, and the spread of diseases. These problems have become some of the greatest threats to humanity. According to the “risk interconnection map” published by the World Economic Forum, the greatest risks faced by our societies today are of a socio-economic and political nature.1 These risks, including factors such as economic inequality and governance failure, are twenty-first century problems which cannot be solved with twentieth century wisdom. They are larger in scale than ever before and result from the complex interdependencies in today’s anthropogenic systems. As a result, it is of paramount importance that we develop a better understanding of the characteristics of complex dynamical systems. To this end, I will discuss the main reasons why things go wrong, such as unstable dynamics, cascading failures in networks, and systemic interdependencies. And I will illustrate these problems through a large variety of examples such as traffic jams, electrical blackouts, financial crises, crime, wars and revolutions.
2.1 Phantom Traffic Jams
Complex systems can be found all around us and include phenomena such as turbulent flows in our global weather system, decision-making processes, opinion formation in groups, financial and economic markets, and the evolution and spread of languages. However, we must carefully distinguish complex systems from complicated ones. While a car, which consists of thousands of parts, is complicated, it is easy to control nevertheless (when it works properly). Traffic flow, on the other hand, depends on the dynamical interactions of many cars, and forms a complex dynamical system. These interactions produce counter-intuitive phenomena such as “phantom traffic jams” which appear to have no cause. Such “emergent” phenomena cannot be understood from the properties of the single parts of the system in isolation, here the driver-vehicle units. While many traffic jams occur for specific, identifiable reasons, such as accidents or road works, almost everyone has also encountered situations where a queue of vehicles seems to form “out of nothing” and where there is no visible cause.2
To explore the true reasons for these “phantom traffic jams”, Yuki Sugiyama and his colleagues at Nagoya University in Japan carried out an experiment in which they asked many people to drive around a circular track.3 The task sounds simple, and the vehicles did in fact flow smoothly for some time. Then, however, one of the cars caused a minor variation in the traffic flow, which triggered stop-and-go traffic—a traffic jam that moved backwards around the track.
While we often blame the poor driving skills of others for such “phantom traffic jams”, studies in complexity science have shown that they are actually an emergent collective phenomenon, which is the inevitable result of the interactions between vehicles. A detailed analysis demonstrates that, if the traffic density exceeds a certain “critical” threshold—that is, if the average separation of the vehicles is smaller than a certain value—then even the slightest variation in the speed of cars can eventually disrupt the traffic flow through an amplification effect. As the next driver in line needs some time to adjust to a change in the speed of the vehicle ahead, he or she will have to brake a bit harder to compensate for the delay. The driver after that will have to brake even harder, and so on. The resulting chain reaction amplifies the initially small variation in a vehicle’s speed and eventually produces a traffic jam which, of course, every single driver tried to avoid.
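To make this amplification mechanism concrete, the following toy simulation implements a simple “optimal velocity” car-following model on a circular road. It is only a sketch in dimensionless units with illustrative parameters, not the setup of the Nagoya experiment, but it shows how one car slowing down slightly can trigger a persistent stop-and-go wave once the density exceeds the critical threshold.

```python
import numpy as np

# Sketch of an "optimal velocity" car-following model on a circular road
# (dimensionless units, illustrative parameters).
N, L, dt = 30, 60.0, 0.05      # cars, track length, time step
a = 1.0                        # sensitivity: how fast drivers adjust their speed

def V(gap):
    """Desired ("optimal") velocity as a function of the gap to the car ahead."""
    return np.tanh(gap - 2.0) + np.tanh(2.0)

x = np.linspace(0.0, L, N, endpoint=False)   # equally spaced cars
v = np.full(N, V(L / N))                     # everyone starts at the equilibrium speed
v[0] *= 0.9                                  # one car briefly slows down a little

for _ in range(40000):
    gap = (np.roll(x, -1) - x) % L           # distance to the car in front
    v += a * (V(gap) - v) * dt               # delayed adjustment towards the desired speed
    v = np.maximum(v, 0.0)
    x = (x + v * dt) % L

# At this density the uniform flow is unstable: the small perturbation grows
# into a stop-and-go wave that travels backwards around the track.
print("spread of speeds after the run:", v.max() - v.min())
```

With fewer cars on the same track (a lower density), the very same perturbation simply dies out again, which is exactly the “critical threshold” behavior described above.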
2.2 Recessions—Traffic Jams in the World Economy?
Economic supply chains may exhibit a similar kind of behavior, as illustrated by John Sterman’s “beer distribution game”.4 The game simulates some of the challenges of supply chain management. When playing it, even experienced managers will end up ordering too much stock, or will run out of it.5 This situation is as difficult to avoid as stop-and-go traffic. In fact, our scientific research suggests that economic recessions can be regarded as a kind of traffic jam in the global flow of goods, i.e. the world economy. This insight is actually somewhat heartening, since it implies that we may be able to engineer solutions to mitigate economic recessions in a similar way as one can reduce traffic jams by means of driver assistance systems. The underlying principle will be discussed later, in the chapter on Digitally Assisted Self-Organization. In order to do this, however, one would need to have real-time data detailing the global flow and supply of materials.
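The basic mechanism can be conveyed with a strongly simplified ordering rule. The following sketch uses my own numbers and assumptions (a fixed shipping delay, unlimited upstream supply), not Sterman’s original game: each stage of a four-stage supply chain orders what was just demanded of it plus a correction towards a target inventory. A single, permanent jump in consumer demand then produces ever larger order swings further upstream, the so-called “bullwhip effect”.

```python
# Toy supply-chain sketch (inspired by the beer distribution game, with
# made-up numbers): each stage orders the demand it sees plus a correction
# towards a target inventory; deliveries arrive with a fixed delay, and
# upstream supply is assumed unlimited for simplicity.
STAGES = 4            # retailer, wholesaler, distributor, factory
DELAY = 2             # shipping delay in weeks
TARGET = 12           # desired inventory at each stage
WEEKS = 40

inventory = [TARGET] * STAGES
pipeline = [[4] * DELAY for _ in range(STAGES)]   # goods already on the way
orders_placed = [[] for _ in range(STAGES)]

for week in range(WEEKS):
    demand = 4 if week < 5 else 8                 # one-off jump in end demand
    for s in range(STAGES):
        inventory[s] += pipeline[s].pop(0)        # delayed delivery arrives
        shipped = min(inventory[s], demand)
        inventory[s] -= shipped
        # Ordering rule: replace demand and close half of the inventory gap.
        order = max(0, demand + (TARGET - inventory[s]) // 2)
        orders_placed[s].append(order)
        pipeline[s].append(order)                 # will be delivered later
        demand = order                            # becomes the next stage's demand

# The further upstream, the larger the swings in orders ("bullwhip effect").
for s, name in enumerate(["retailer", "wholesaler", "distributor", "factory"]):
    print(name, "largest weekly order:", max(orders_placed[s]))
```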
2.3 Systemic Instability
Crowd disasters are another, tragic example of systemic instability. Even when every individual within a crowd is peacefully minded and tries to avoid harming others, many people may die nevertheless. Appendix 2.1 outlines why such extreme systemic outcomes can result from normal, non-aggressive behavior.
What do all these examples tell us? They illustrate that the natural (re)actions of individuals will often be counterproductive.6 Our experience and intuition often fail to account for the complexity of highly interactive systems, which tend to behave in unexpected ways. Such complex dynamical systems typically consist of many interacting components which respond to each other’s behaviors. As a consequence of these interactions, complex dynamical systems tend to self-organize. That is, some collective dynamics may develop (such as stop-and-go traffic), which is different from the natural behavior of the system components when they are separated from each other (such as drivers who don’t like to stop). In other words, the overall system may show new characteristics that are distinct from those of its components. This can result, for example, in “chaotic” or “turbulent” dynamics.
Group dynamics and mass psychology may be viewed as typical examples of spontaneously emerging collective dynamics occurring in crowds. What is it that makes a crowd turn “mad”, violent, or cruel? For example, after the London riots of 2011, people asked how it was possible that teachers and the daughters of millionaires—people you would not expect to be criminals—were participating in the looting. Did they suddenly develop criminal minds when the demonstrations against police violence turned into riots? Possibly, but not necessarily so.
Understanding the emergence of new properties requires an interaction-oriented perspective.
When many components of a complex dynamical system interact, they frequently create new kinds of structures, properties or functions in a self-organized way. To describe newly resulting characteristics of the system, the term “emergent phenomena” is often used. For example, water appears to be wet, extinguishes fire, and freezes at a particular temperature, but we would not expect this based on the examination of single water molecules.
Therefore, complex dynamical systems may show surprising behaviors. They cannot be steered like a car. In the above traffic flow experiment, although the aim of participants was to drive continuously at a reasonably high speed, a phantom traffic jam occurred due to the interactions between cars. While it is straightforward to control a car, it may be impossible for individual drivers to control the collective dynamics of traffic flow, which is the result of the interactions of many cars.
2.4 Beware of Strongly Coupled Systems!
Instability is just one possible problem of complex dynamical systems. It occurs when the characteristic parameters of a system cross certain critical thresholds. If a system becomes unstable, small deviations from the normal behavior are amplified. Such amplification is often based on feedback loops, which cause a mutual reinforcement. If one amplification effect triggers others, a chain reaction may occur and a minor, random variation may be enough to trigger an unstoppable domino effect. In case of systemic instability, as I have demonstrated for the example of phantom traffic jams, the system will inevitably get out of control sooner or later, no matter how hard we try to prevent this. Consequently, we should identify and avoid conditions under which systems behave in an unstable way.
In many cases, strongly coupled interactions are a recipe for disaster or other undesirable outcomes.7 While our intuition usually works well for problems that are related to weakly coupled systems (in which the overall system can be understood as being the sum of its parts and their properties), the behaviors of complex dynamical systems can change dramatically, if the interactions among their components are strong. In other words, these systems often behave in counter-intuitive ways, so that conventional wisdom tends to be ineffective for managing them. Unintended consequences or side effects are common.
What further differences do strong interactions make? First, they may cause larger variability and faster changes, particularly if there are “positive feedbacks” that lead to reinforcement and acceleration. Second, the behavior of the complex system can be hard to predict, making it difficult to plan for the future. Third, strongly connected systems tend to show strong correlations between the behaviors of (some of) their components. Fourth, the possibilities of controlling the system from the outside or through the behavior of single system components are limited, as the interactions within the system may have a stronger influence. Fifth, extreme events occur more often than expected and they may affect the entire system.8
In spite of all this, most people still have a component-oriented worldview, centered on individuals rather than groups and interdependent events. This often leads to “obvious”9 but wrong conclusions. For example, we praise heroes when things go well and search for scapegoats when something goes wrong. Yet, the discussion above has clearly shown how difficult it is for individuals to control the outcome of a complex dynamical system, if the interactions between its components are strong. This fact can also be illustrated by politics.
Why do politicians (along with managers) have, on average, the worst reputation of all professions? This is probably because we think they are hypocritical: we elect them on the basis of the ideas and policies they publicly voice, but then they often do something else. This apparent hypocrisy is a consequence of the fact that politicians are subject to many strong interactions with lobbyists and interest groups, who hold diverse points of view. All of these groups push politicians in various directions. In many cases, this forces them to make decisions that are not compatible with their own point of view—a fact which is hard for voters to accept. However, if we believe that democracy is not just about elections every few years, but also about a continuous reconciliation of interests between citizens and their elected representatives, it could be argued that it would be undemocratic for politicians to unreservedly place their own personal convictions above the concerns of the citizens and businesses they are supposed to represent. Managers of companies find themselves in similar situations: they are exposed to many different factors they must consider. This kind of interaction-based decision-making extends far beyond boardrooms and parliaments. Think of the decision dynamics in families: if they were easy to control, there would probably be fewer divorces…
Crime is another example of social systems getting out of control.10 This can happen both on an individual and a collective level. Many crimes, including murders, are committed by average people rather than career criminals (even in countries with the death penalty).11 A closer inspection shows that many crimes correlate with the circumstances individuals find themselves in. For example, group dynamics often plays an important role. Many scientific studies also show that socio-economic conditions are a strong determining factor of crime. Therefore, in order to counter crime, it might be more effective to change these socio-economic conditions than to send more people to jail. I say this with one eye on the price we have to pay to maintain a large prison population—a single prisoner costs more than the salary of a postdoctoral researcher with a Ph.D. degree! It is even more worrying that prisons and detention camps have apparently become places where criminal organizations are formed and terrorist plots are planned.12 In fact, our own studies suggest that many crimes spread by imitation.13
2.5 Cascading Effects in Complex Networks
To make matters worse, besides the dynamic instability caused by amplification effects, complex dynamical systems may produce even bigger problems. Worldwide trade, air traffic, the Internet, mobile phones, and social media have made everything much more convenient—and connected. This has created many new opportunities, but everything now depends on a lot more things. What are the implications of this increased level of interdependency? Today, a single tweet can make stock markets spin. A controversial YouTube video can trigger a riot that kills dozens of people.14 Increasingly, our decisions can have consequences on the other side of the globe, many of which may be unintended. For example, the rapid spread of emerging epidemics is largely a result of the scale of global air traffic today, and this has serious repercussions for global health, social welfare and economic systems.
It is said that “the road to hell is paved with good intentions”. By networking our world, have we inadvertently created conditions in which disasters are more likely to emerge and spread? In 2011 alone, at least three cascading failures with global impact occurred, changing the face of the world and the global balance of power: the world economic crisis, the Arab Spring, and the combination of an earthquake, a tsunami and a nuclear disaster in Japan. In 2014, the world was threatened by the spread of Ebola, the crisis in Ukraine, and the conflict with the Islamic State (IS). In the following subsections, I will therefore discuss some examples of cascading effects in more detail.
2.6 Large-Scale Power Blackouts
On November 4, 2006, a power line crossing the river Ems in Germany was temporarily switched off to let a Norwegian ship pass. Within minutes, this caused a blackout in many regions all over Europe, from Germany to Portugal! Nobody expected this to happen. Before the line was switched off, a computer simulation had predicted that the power grid would still operate well without it. However, the scenario analysis did not account for the possibility that another line would spontaneously fail. In the end, a local overload in Northwest Germany triggered emergency switch-offs all over Europe, creating a cascading effect with pretty astonishing results. Blackouts occurred in regions thousands of kilometers away.15
Can we ever hope to understand such strange behavior? In fact, a computer-based simulation study of the European power grid recently managed to reproduce quite similar effects.16 It demonstrated that the failure of a few network nodes in Spain could create an unexpected blackout several thousand kilometers away in Eastern Europe, while the electricity network in Spain would still work.17 Furthermore, increasing the capacity of certain parts of the power grid could worsen the situation and cause an even greater blackout. Therefore, weak elements in the system may serve an important function: they can be “circuit breakers” interrupting the failure cascade. This is an important fact to remember.
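An abstract toy model can show how such counter-intuitive, long-range effects arise from simple load redistribution. The sketch below uses invented numbers and a random network rather than the real European grid: every node carries a load close to its capacity, and when a node fails, its load is shifted to its neighbours, which may in turn be pushed over their own limits far away from the original disturbance.

```python
import random

# Toy model of a failure cascade through load redistribution
# (made-up numbers, not a simulation of the actual European grid).
random.seed(1)
N = 200
neighbours = {i: random.sample([j for j in range(N) if j != i], 4)
              for i in range(N)}                 # random 4-neighbour network
load = {i: 1.0 for i in range(N)}
capacity = {i: 1.0 + random.uniform(0.1, 0.4) for i in range(N)}  # safety margins

failed = set()
queue = [0]                                      # a single initial local failure
while queue:
    node = queue.pop()
    if node in failed:
        continue
    failed.add(node)
    alive = [n for n in neighbours[node] if n not in failed]
    for n in alive:
        load[n] += load[node] / len(alive)       # redistribute the lost load
        if load[n] > capacity[n]:                # overload -> secondary failure
            queue.append(n)

print(f"{len(failed)} of {N} nodes failed after a single initial outage")
```

The randomly drawn “safety margin” of each node plays the role of the spare capacity of a transmission line; the smaller these margins are, the further the cascade spreads.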
2.7 From Bankruptcy Cascades to Financial Crisis
The sudden financial meltdown in 2008 is another example of a crisis that caught many companies and people by surprise. In his presidential address to the American Economic Association in 2003, Robert Lucas (*1937) said:
“[The] central problem of depression-prevention has been solved.”
Similarly, Ben Bernanke (*1953), the former chairman of the Federal Reserve Board, held a longstanding belief that the economy was both well understood and in sound financial shape. In September 2007, Frederic Mishkin (*1951), a professor at Columbia Business School and then a member of the Board of Governors of the US Federal Reserve System, made another interesting statement, reflecting widespread beliefs at the time:
“Fortunately, the overall financial system appears to be in good health, and the U.S. banking system is well positioned to withstand stressful market conditions.”
As we all know with the benefit of hindsight, things turned out very differently. A banking crisis occurred only shortly afterwards. It started locally, when a real estate bubble that had formed in the western USA burst. As it appeared to be a regional problem at first, most people thought it could be easily contained. But the mortgage crisis had spillover effects on stock markets. Certain financial derivatives could hardly be sold anymore and became “toxic assets”. Eventually, hundreds of banks all over the US went bankrupt. How could this happen? A video produced by Professor Frank Schweitzer and others presents an impressive visualization of the chronology of bankruptcies in the USA after Lehman Brothers collapsed.18 Apparently, the default of a single bank triggered a massive cascading failure in the financial sector. In the end, hundreds of billions of dollars were lost.
The video mentioned above looks surprisingly similar to another one, which I often use to illustrate cascading effects.19 It shows an experiment in which many table tennis balls are placed on top of mousetraps. The experiment impressively demonstrates that a single local disruption can mess up an entire system. The video illustrates chain reactions, which are the basis of atomic bombs or nuclear fission reactors. As we know, such cascading effects can be technologically controlled in principle, if a certain critical mass (or “critical interaction strength”) is not exceeded. Nevertheless, these processes can sometimes get out of control, mostly in unexpected ways. The nuclear disasters in Chernobyl and Fukushima are well-known examples of this. We must, therefore, be extremely careful with large-scale systems potentially featuring cascading effects.
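The notion of a “critical mass” can be illustrated with an abstract branching process. This is a generic sketch with arbitrary numbers, not a model of any specific reactor, power grid or market: if every failing element triggers, on average, R further failures, cascades stay small as long as R is below one, but can grow explosively once R exceeds one.

```python
import numpy as np

# Abstract branching process: each failing element triggers a random
# (Poisson-distributed) number of further failures with mean R.
rng = np.random.default_rng(7)

def cascade_size(R, max_events=100000):
    active, total = 1, 1
    while active and total < max_events:
        triggered = int(rng.poisson(R, size=active).sum())  # offspring of this generation
        active = triggered
        total += triggered
    return total

for R in (0.8, 1.0, 1.2):
    sizes = [cascade_size(R) for _ in range(500)]
    print(f"R = {R}: mean cascade size {np.mean(sizes):.0f}, largest {max(sizes)}")
```

Keeping the effective “reproduction number” R below one, for instance by decoupling parts of the system, is what the circuit breakers mentioned in the previous section achieve.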
2.8 A World Economic Crisis Results
As we know, the cascading failure of banks mentioned above was just the beginning of an even bigger problem. It subsequently triggered a global economic and public spending crisis. Eventually, the financial crisis caused worldwide damage of more than $15 trillion20—an amount a hundred times as big as the initial real estate problem. The events even threatened the stability of the Euro currency and the EU. Several countries, including Greece, Ireland, Portugal, Spain, Italy and the US, found themselves on the verge of bankruptcy. As a consequence, many countries suffered from unprecedented unemployment rates. In some of them, more than 50% of young people did not have a job. In many regions, this caused social unrest, political extremism and increased rates of suicide, crime and violence.
Unfortunately, the failure cascade hasn’t been stopped yet. There is a long way to go until we fully recover from the financial crisis and from the public and private debts accumulated in the past years. If we can’t overcome this problem soon, it even has the potential to endanger peace, democratic principles and cultural values, as I pointed out in a letter in 2010.21 Looking at the situation in Ukraine, we are perhaps seeing this scenario unfold already.
While all of this seems plausible with the benefit of hindsight, the failure of conventional wisdom to provide an adequate understanding of events is reflected in the following quote by Jean-Claude Trichet (*1942), the former president of the European Central Bank, from November 2010:
“When the crisis came, the serious limitations of existing economic and financial models immediately became apparent. Arbitrage broke down in many market segments, as markets froze and market participants were gripped by panic. Macro models failed to predict the crisis and seemed incapable of explaining what was happening to the economy in a convincing manner. As a policy-maker during the crisis, I found the available models of limited help. In fact, I would go further: in the face of the crisis, we felt abandoned by conventional tools.”
Similarly, Ben Bernanke summarized in May 2010:
“The brief market plunge was just an example of how complex and chaotic, in a formal sense, these systems have become… What happened in the stock market is just a little example of how things can cascade, or how technology can interact with market panic.”
Even leading scientists found it difficult to make sense of the crisis. In a letter to the Queen of England dated July 22, 2009, the British Academy came to the following conclusion22:
“When Your Majesty visited the London School of Economics last November, you quite rightly asked: why had nobody noticed that the credit crunch was on its way? … So where was the problem? Everyone seemed to be doing their own job properly on its own merit. And according to standard measures of success, they were often doing it well. The failure was to see how collectively this added up to a series of interconnected imbalances over which no single authority had jurisdiction. … Individual risks may rightly have been viewed as small, but the risk to the system as a whole was vast. … So in summary … the failure to foresee the timing, extent and severity of the crisis … was principally the failure of the collective imagination of many bright people to understand the risks to the systems as a whole.”
Thus, was nobody responsible for the financial crisis in the end? Or do we all have to accept some responsibility, given that these problems are collective outcomes of a huge number of individual (inter)actions? And how can we differentiate the degree of responsibility of different individuals or firms? This is certainly an important question worth thinking about.
It is also interesting to ask whether complexity science could have forecast the financial crisis. In fact, I followed the stock markets closely before the crash and noticed strong price fluctuations, which I interpreted as advance warning signals of an impending financial crash. For this reason, I sold my stocks in late 2007, while I was sitting in an airport lounge, waiting for my connecting flight. In spring 2008, about half a year before the collapse of Lehman Brothers, James Breiding, Markus Christen and I wrote an article taking a complexity science view of the financial system. We came to the conclusion that the financial system was in the process of destabilization. We believed that the increased level of complexity in the financial system was a major problem and that it made the financial system more vulnerable to cascading effects, as was later also stressed by Andrew Haldane (*1967), the Chief Economist and Executive Director at the Bank of England.
In spring 2008, we were so worried about these trends that we felt we had to alert the public. At that time, however, none of the newspapers we contacted was willing to publish our essay. “It’s too complicated for our readers” was the response. We replied that “nothing can prevent a financial crisis, if you cannot make this understandable to your readers”. With depressing inevitability, the financial crisis came. Although it gave us no pleasure to be proven right, a manager from McKinsey’s UK office commented six months later that our analysis was the best he had seen.
Of course, some far more prominent public figures also saw the financial crisis coming. The legendary investor Warren Buffett (*1930), for example, warned of the catastrophic risks created by large-scale investments in financial derivatives. Back in 2002 he wrote:
“Many people argue that derivatives reduce systemic problems, in that participants who can’t bear certain risks are able to transfer them to stronger hands. These people believe that derivatives act to stabilize the economy, facilitate trade, and eliminate bumps for individual participants. On a micro level, what they say is often true. I believe, however, that the macro picture is dangerous and getting more so. … The derivatives genie is now well out of the bottle, and these instruments will almost certainly multiply in variety and number until some event makes their toxicity clear. Central banks and governments have so far found no effective way to control, or even monitor, the risks posed by these contracts. In my view, derivatives are financial weapons of mass destruction, carrying dangers that, while now latent, are potentially lethal.”
As we know, it still took five years until the “investment time bomb” exploded, but then it caused trillions of dollars of losses to our economy.
2.9 Fundamental (“Radical”) Uncertainty
In liquid financial markets and many other systems that are difficult to predict, such as our weather, we can still determine the probabilities of possible events, at least approximately. Thus, we can make probabilistic forecasts such as “there is a 5% chance of losing more than half of my money when selling my stocks in 6 months, but a 60% chance that I will make a profit…”. It is then possible to determine the expected loss (or gain) implied by possible actions and events. For this, the damage or gain of each possible event is multiplied by its probability, and the numbers are added together to predict the expected damage or gain. In principle, we could do this for all the actions we might take, in order to determine the one that minimizes damage or maximizes gain. The problem is that it is often impractical to determine these probabilities. With the increasing availability of data, this problem may recede, but it will often remain difficult or impossible to obtain the probabilities of “extreme events”. By their very nature, there are only a few data points for rare events, which means that the empirical basis is too small to determine their probabilities.
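For illustration, here is the expected-value calculation described above, carried out with purely hypothetical outcomes and probabilities:

```python
# Expected-value reasoning with made-up numbers: multiply each outcome's
# gain or loss by its probability and add everything up.
outcomes = [
    ("lose more than half", -0.60, 0.05),   # (label, return, probability)
    ("small loss",          -0.10, 0.15),
    ("roughly break even",   0.00, 0.20),
    ("make a profit",       +0.20, 0.60),
]
expected = sum(ret * p for _, ret, p in outcomes)
print(f"expected return: {expected:+.3f}")  # prints +0.075, i.e. a 7.5% expected gain
```

The calculation is only as good as the probabilities fed into it, which is precisely the difficulty discussed in the remainder of this section.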
In addition, it may be completely impossible to calculate the expected damage incurred by a problem in a large (e.g. global) system. Such “fundamental” or “radical” uncertainty can result from cascading effects, where one problem is likely to trigger other problems, leading to a progressive increase in the overall damage. In principle, the overall losses may not be quantifiable in such situations at all. This means in practice that the actual damage might be either insignificant (in the best case) or practically unbounded (in the worst case), or anything in between. In an extreme case, this might lead to a failure of the entire system, as we know it from the collapse of historical empires and civilizations.23
2.10 Explosive Epidemics
When studying the spread of diseases, the outcome is highly dependent on the degree of physical interactions between people who may infect each other. A few additional airline routes might make the difference between a case in which a disease is contained, and a case which develops into a devastating global pandemic.24 The threat of epidemic cascading effects might be even worse if earlier damage reduces the ability of the system to withstand problems later on. For example, assume a health system in which the financial or medical resources are limited by the number of healthy individuals who produce them. In such a case, it can happen that the resources needed to heal the disease are increasingly used up, such that the epidemic finally spreads explosively. A computer-based study, which Lucas Böttcher, Olivia Woolley-Meza, Nuno Araujo, Hans Herrmann and I performed, shows that the dynamics in such a system can change dramatically and unexpectedly.25 Thus, have we perhaps built global networks which we can neither predict nor control?
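The following deliberately crude sketch captures the kind of feedback described above; the parameters and the simple budget rule are my own inventions, not the published model. Healthy individuals produce the resources needed to treat the ill; if an outbreak is large enough to exhaust this budget, the recovery rate drops and the epidemic explodes, whereas a slightly smaller outbreak is contained.

```python
# Caricature of an epidemic whose containment depends on a budget produced
# by the healthy population (illustrative numbers only).
beta, mu_high, mu_low = 0.28, 0.35, 0.05   # infection rate, recovery with/without funds
cost, dt, T = 10.0, 0.1, 600               # treatment cost per infected, time step, horizon

for i0 in (0.05, 0.20):                    # small vs. larger initial outbreak
    i, budget = i0, 1.0                    # infected fraction, available resources
    for _ in range(int(T / dt)):
        mu = mu_high if budget > 0 else mu_low      # care quality depends on the budget
        di = beta * i * (1.0 - i) - mu * i          # SIS-type epidemic dynamics
        db = (1.0 - i) - cost * i                   # healthy produce, ill consume resources
        i = min(1.0, max(0.0, i + di * dt))
        budget = min(1.0, max(0.0, budget + db * dt))
    print(f"initial {i0:.0%} infected -> final {i:.0%} infected")
```

In this toy system, the outcome does not change gradually with the initial outbreak size: below a threshold the disease disappears, while above it the depleted budget lets it spread through most of the population.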
2.11 Systemic Interdependence
Recently, Shlomo Havlin (*1942) and others made a further important discovery. They revealed that networks of networks can be particularly vulnerable to disruptions.26 A typical example of this is the interdependence between electrical and communication networks. Another example, which illustrates the global interdependence between natural, energy, climate, financial and political systems is provided by the Tohoku earthquake in Japan in 2011. The earthquake caused a tsunami which triggered a chain reaction and a nuclear disaster in several reactors at Fukushima. Soon after this, Germany and Switzerland decided to exit nuclear power production over the next decade(s). However, alternative energy sources are also problematic, as European gas supply depends on geopolitical regions which might not be fully reliable.27
Likewise, Europe’s DESERTEC project—a planned €1000 billion investment in solar energy infrastructure—has been practically given up due to another unexpected event, the Arab Spring. This uprising was triggered by high food prices, which in turn were partially caused by biofuel production. While biofuels were intended to improve the global CO2 balance, their production reduced the supply of food, making it more expensive. The increased food prices were further amplified by financial speculation. Hence, the energy system, the political system, the social system, the food system and the financial system have all become closely interdependent, making our world increasingly vulnerable to disruptions.
2.12 Have Humans Created a “Complexity Time Bomb”?
For a long time, problems such as crowd disasters and financial crashes have puzzled humanity. Sometimes, they have even been regarded as “acts of God” or “black swans”28 that we had to endure. But problems like these should not be simply put down to “bad luck”. They are often the consequence of a flawed understanding of the counter-intuitive way in which complex systems behave. Fatal errors and the repetition of previous mistakes are frequently the result of an outdated way of thinking. However, complexity science allows us to understand how and when complex dynamical systems get out of control.
If a system is unstable, we will see amplification effects, such that a local problem can lead to a cascading failure, which creates many further problems down the line. For this reason, the degree of interaction between the system components is crucial. Overall, complex dynamical systems become unstable, if the interactions between their components get stronger than frictional effects, or if the damage resulting from the degradation of system components occurs faster than they can recover. As a result, the timing of processes can play a key role in determining whether the overall system will remain stable. This means that delays in adaptation processes can often lead to systemic instabilities and loss of control (see Appendix 2.2).
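The destabilizing role of delays can be seen in a tiny numerical example (all numbers are illustrative): a quantity is repeatedly corrected towards its target, but the correction is based on outdated information. Below a critical delay the deviation dies out; beyond it, the very same well-intentioned correction rule makes the deviation grow.

```python
# A quantity x is steered back towards zero, but the controller only "sees"
# the value from some time ago (illustrative numbers).
def simulate(delay_steps, steps=4000, dt=0.01, gain=2.0):
    history = [1.0] * (delay_steps + 1)          # start with a small deviation
    x = 1.0
    for _ in range(steps):
        observed = history[0]                    # outdated observation
        x += -gain * observed * dt               # correction based on old data
        history.append(x)
        history.pop(0)
    return abs(x)

for delay in (0.0, 0.5, 1.0):                    # reaction delay in time units
    print(f"delay {delay}: final deviation {simulate(int(delay / 0.01)):.2e}")
```

With no delay the deviation vanishes, with a moderate delay it still decays (though with oscillations), and beyond a critical delay it grows without bound, even though the correction rule itself never changed.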
We have further seen that an unstable complex system will sooner or later get out of control, even if everyone is well-informed and well-trained, uses advanced technology and has the best intentions. And finally, we have learned that complex dynamical systems with strong internal interactions or a high level of connectivity tend to be unstable. As our increasingly interdependent world is characterized by a myriad of global links, it is necessary to discuss the potential consequences. We must raise a fundamental question which has mammoth implications for the viability of our current economic and political systems: have humans inadvertently produced a “complexity time bomb”, i.e. a global system which will inevitably get out of control? [6]
In fact, for certain kinds of networks, the potential chain reaction of cascading failures bears a disturbing resemblance to that of nuclear fission. Such processes are difficult to control, and catastrophic damage is a realistic scenario. Given the similarity with explosive processes, is it possible that our global anthropogenic systems will similarly get out of control at some point? When considering this possibility, we need to bear in mind that a destructive cascading effect may unfold slowly, so that the process does not resemble an explosion at all. Nevertheless, it may be hard to stop, and it may ultimately lead to systemic failure.29
What kinds of global catastrophes might today’s complex societies face? A collapse of the global information and communication system or of the world economy? Global pandemics? Unsustainable growth, demographic or environmental change? A global food or energy crisis? A clash of cultures? Another world war? A societal shift, triggered by technological innovation? In the most likely scenario, we will witness a combination of several of these contagious phenomena. The World Economic Forum calls this the “perfect storm” (see Footnote 1), and the OECD has expressed similar concerns.
2.13 Unintended Wars and Revolutions
It is important to realize that large-scale conflicts, revolutions and wars can also be unintended outcomes of systemic interdependencies and instabilities. Remember that phantom traffic jams were unintended consequences of interactions. Similarly, wars and revolutions can happen even if nobody wants them. While there is a tendency to characterize these events as the deeds of particular historical figures, this trivializes and personalizes such phenomena in a way which distracts from their true, systemic nature.
It is essential to recognize that complex dynamical systems usually resist change if they are close to a stable equilibrium. This effect is known as Goodhart’s law (1975), Le Chatelier’s principle (1850–1936) or “illusion of control”. Individual factors and randomness only affect complex dynamical systems if they are driven to a “tipping point”,30 where they become unstable.
In other words, the much-heralded individuals to whom so much attention is afforded in our history books were only able to influence history because much bigger systems beyond their control had already become critically unstable.31 For example, historians now increasingly recognize that World War I was a largely unintended consequence of a chain of events. Moreover, World War II was preceded by a financial crisis and recession, which destabilized the German economic, social and political system. Ultimately, this made it possible for an individual to become influential enough to drive the world to the brink of extinction. Unfortunately, civilization is still vulnerable today and a large-scale war may happen again. This is even likely, if we don’t quickly change the way we manage our world.
Typically, the unintended path towards war is as follows: Initially, resources become scarce due to a disruption such as a serious economic crisis. Then, the resulting competition for limited resources leads to an increase in conflict, violence, crime and corruption. Human solidarity and mutual tolerance are eroded, creating a polarized society. This causes further dissatisfaction and social turmoil. People get frustrated with the system, calling for leadership and order. Political extremists emerge, who scapegoat minorities for social and economic problems. This decreases socio-economic diversity, which reduces innovation and further hinders the economy. Eventually, the well-balanced “socio-economic ecosystem” collapses, such that an orderly system of resource allocation becomes impossible. As resources become scarcer, this creates an increasing “need” for nationalism or even for an external enemy to unify the fractured society. In the end, as a result of further escalation, war seems to be the only viable “solution” to overcome the crisis, but instead, it mostly leads to large-scale destruction.
2.14 Revolutionary Systemic Shifts
A revolution, too, can be the result of systemic instability. Hence, a revolution is not necessarily set in motion by a “revolutionary leader”, who challenges the political establishment. The breakdown of the former German Democratic Republic (GDR) and some Arab Spring revolutions have shown that uprisings may even start in the absence of a clearly identifiable political opponent. On the one hand, this is the reason why such revolutions cannot be stopped by killing or imprisoning a few individuals. On the other hand, the Arab Spring took secret services throughout the world by surprise precisely because there were no revolutionary leaders. This created complications for countries which wished to assist these uprisings, as they did not know whom to interact with for international support.
It is more instructive to imagine such revolutions as a result of situations in which the interests of government representatives and those of the people (or particular societal groups) have drifted away from each other. Similarly to the tensions created by a drift of the Earth’s tectonic plates, such an unstable situation is sooner or later followed by an “earthquake-like” release of tension (the “revolution”), resulting in a re-balancing of forces. To reiterate, contrary to the conventional wisdom which assumes that revolutionary leaders are responsible for political instability, it is due to an existing systemic instability that these individuals can become influential. To put it succinctly, in most cases, revolutionaries don’t create revolutions; systemic instabilities do. These instabilities are typically created by the politics of the old regime. Therefore, we must ask ourselves how well our society balances the interests of different groups today, and how well it manages to adapt to a world that is rapidly changing due to demographic, environmental and technological change.
2.15 Conclusion
It is obvious that there are many problems ahead of us. Most of them result from the complexity of the systems humans have created. But how can we master all these problems? Is it a lost battle against complexity, or do we have to pursue a new, entirely different strategy? Do we perhaps even need to change our way of thinking? And how can we innovate, before it is too late? The next chapters will try to answer these questions…
2.16 Appendix 1: How Harmless Behavior Can Become Critical
In the case of traffic flow, we have seen that a system can get out of control when the interaction strength (e.g. the density) is too high. Why can a change in the density make normal and “harmless” behavior become uncontrollable? To understand this better, Roman Mani, Lucas Böttcher, Hans J. Herrmann, and I studied collisions in a system of equally sized particles moving in one dimension,32 which is similar to Newton’s Cradle.33 We assumed that the particles tended to oscillate elastically around equally spaced equilibrium points, while being exposed to random forces generated by the environment.
The following summarizes the main observations: If the distance between the equilibrium points of neighboring particles is large enough, each particle oscillates around its own equilibrium point with normally distributed velocities, and all particles show the same small variance in speed. However, when the separation between the equilibrium points reaches the diameter of the particles, we find a cascade-like transmission of momentum between particles.34 Surprisingly, the variance of particle speeds then rapidly increases towards the boundaries of the system—it could even go to infinity with increasing system size. Due to the cascading interactions of particles, this makes their speeds unpredictable and uncontrollable. While every particle in isolation exhibits normal dynamics, which are not extreme at all, their interactions can cause extreme behavior of the overall system.
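The cascade-like transmission of momentum can be reproduced in a stripped-down toy version of such a particle chain; the parameters and the simplified collision rule below are my own choices, not those of the published study. A single kick given to the first particle stays local when the rest positions are far apart, but travels through the entire chain once their spacing is close to the particle diameter.

```python
import numpy as np

# Chain of equal particles pulled towards equally spaced rest positions;
# touching neighbours exchange velocities like balls in Newton's cradle
# (illustrative parameters; overlapping pairs handled independently).
N, diameter, k, dt = 50, 1.0, 1.0, 0.001

def particles_set_in_motion(spacing, steps=5000):
    rest = np.arange(N) * spacing            # equally spaced equilibrium points
    x, v = rest.copy(), np.zeros(N)
    v[0] = 2.0                               # a single kick to the first particle
    moved = np.zeros(N, dtype=bool)
    for _ in range(steps):
        v += -k * (x - rest) * dt            # restoring force towards the rest point
        x += v * dt
        # elastic collision of equal masses = exchange of velocities
        hit = np.where((x[1:] - x[:-1] < diameter) & (v[:-1] > v[1:]))[0]
        v[hit], v[hit + 1] = v[hit + 1].copy(), v[hit].copy()
        moved |= np.abs(x - rest) > 0.03
    return int(moved.sum())

for spacing in (4.0, 1.05):                  # loose chain vs. nearly touching chain
    print(f"spacing {spacing}: a single kick set {particles_set_in_motion(spacing)} "
          f"of {N} particles in motion")
```

In the loose chain only the kicked particle moves; in the nearly touching chain the momentum is handed on from particle to particle, so a local disturbance affects the whole system.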
2.17 Appendix 2: Loss of Synchronization in Hierarchical Systems
When many socio-economic processes happen simultaneously and feed back on each other, a puzzling kind of systemic instability can occur. This is highly relevant for our complex societies, since such processes now unfold at an ever-increasing pace.
For the sake of illustration, let us first discuss hierarchically organized systems in physics. There, elementary particles form atoms, atoms form chemical compounds, these form solid bodies, and together they may form a planet, which is part of a planetary system, and a galaxy. Similarly, we know from biology and the social sciences that cells make up organs, which collectively form a human body. Humans, in turn, tend to organize themselves in groups, cities, organizations and nations.
Importantly, the stability of such hierarchies is based on two important principles. First, the forces are strongest at the bottom, and second, the changes are slowest at the top. In other words, adjustment processes in these systems are faster at lower hierarchical levels (such as the atoms) as compared to higher ones (such as planetary systems). This means that lower level variables can adjust quickly to the constraints set by the higher level variables. As a result, the higher levels basically control the lower levels and the system remains stable. Similarly, social groups tend to take decisions more slowly than the individuals who form them. Likewise, organizations and states tend to change more slowly than the individuals who form them (at least this is how it used to be in the past).
Such “time-scale separation” implies that the dynamics of a system is determined only by relatively few variables, which are typically located at the higher levels of the hierarchy. Monarchies and oligarchies are good examples of this. In current-day socio-political and economic systems, however, the higher hierarchical levels sometimes change so fast that the lower levels have difficulty keeping pace. Laws are now often enacted more quickly than companies and people can adapt. In the long run, this is likely to cause systemic instability, as time-scale separation is destroyed, so that many more variables begin to influence the dynamics of the system. Such attempts to make mutual adjustments on different hierarchical levels could potentially lead to turbulence, “chaos”, breakdown of synchronization, or fragmentation of the system. In fact, while it is known that delays in adaptation can destabilize a system, we keep postponing many of our problems (e.g. public debts, the implications of demographic change, nuclear waste, or climate change). This creates a concrete danger that our society will eventually lose control and become unstable.
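A minimal numerical illustration of time-scale separation (with purely illustrative numbers): a “lower-level” variable adapts towards a target set by a “higher-level” variable. As long as the top level changes much more slowly than the lower level can adapt, the constraint is tracked closely and the higher level effectively controls the lower one; once both change on similar time scales, the lower level permanently lags behind and this control is lost.

```python
import math

# A lower-level variable x relaxes towards a target set by a slowly or
# quickly varying higher-level variable y (illustrative numbers).
def worst_tracking_error(target_rate, adapt_rate=1.0, dt=0.01, T=200.0):
    x, t, worst = 0.0, 0.0, 0.0
    while t < T:
        y = math.sin(target_rate * t)        # the higher-level "constraint"
        x += adapt_rate * (y - x) * dt       # the lower level adapts towards it
        worst = max(worst, abs(y - x))
        t += dt
    return worst

for rate in (0.01, 0.1, 1.0):                # how fast the top level changes
    print(f"top-level change rate {rate}: worst tracking error "
          f"{worst_tracking_error(rate):.2f}")
```

The slower the top level changes relative to the adaptation rate, the smaller the tracking error, which is exactly the time-scale separation that keeps stable hierarchies controllable.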