We have seen in chapter 3 that complex systems, and in particular ecosystems and the climate system, can suddenly tip over into another state, in the manner of a switch to which a constant and increasing pressure is applied. The unpredictability of these shifts is enough to baffle any decision maker or strategic expert because, in our societies, choices are usually based on our ability to predict events. Without a high degree of predictability, however, it is difficult to invest financial, human or technical resources in the right places and at the right time.
The crucial challenge is therefore to detect the warning signs of these catastrophic changes so as to anticipate them and react in time. More precisely, we need to learn to recognize the extreme fragility of a system that is approaching a tipping point, the very fragility that paves the way for the ‘little spark’. For example, in arid Mediterranean pastures, when vegetation forms irregular, patchy shapes (visible in aerial views), this is because the ecosystem is not far from tipping over into a state of desertification that will be difficult to reverse.1 The study of such early warning signals is a rapidly growing discipline.
One of the most frequently observed characteristics of a system ‘on the edge of the abyss’ is that it takes longer to recover from a small disruption. Its recovery time after a shock lengthens – in other words, its resilience decreases. Researchers call this ‘critical slowing down’; it can be identified by statistical indicators computed from time series (autocorrelation, skewness, variance, etc.) which reveal a system’s state of fragility and therefore the possibility that it is about to reach a tipping point.
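To give a concrete (and purely illustrative) idea of what such indicators involve, here is a minimal sketch in Python, not taken from the studies cited: it simulates a noisy system whose recovery rate is slowly weakened, then computes lag-1 autocorrelation and variance in a sliding window. The window size, noise level and rate of weakening are arbitrary assumptions, chosen only so that the signature of critical slowing down becomes visible.

```python
import numpy as np

def ews_indicators(series, window=100):
    """Rolling early-warning indicators for a one-dimensional time series:
    lag-1 autocorrelation and variance in a sliding window. A sustained
    rise in both is the classic signature of critical slowing down."""
    series = np.asarray(series, dtype=float)
    ac1, var = [], []
    for start in range(len(series) - window):
        w = series[start:start + window]
        w = w - w.mean()                                  # centre the window
        var.append(w.var())
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])      # lag-1 autocorrelation
    return np.array(ac1), np.array(var)

# Toy system: its recovery rate decays towards zero, mimicking the
# approach to a tipping point.
rng = np.random.default_rng(0)
n = 2000
x = np.zeros(n)
for t in range(1, n):
    recovery = 0.5 * (1 - t / n)                          # ever-weaker pull back to equilibrium
    x[t] = x[t - 1] - recovery * x[t - 1] + rng.normal(scale=0.1)

ac1, var = ews_indicators(x)
print("autocorrelation, start vs end:", ac1[:5].mean(), ac1[-5:].mean())
print("variance,        start vs end:", var[:5].mean(), var[-5:].mean())
```

In this toy simulation, both quantities drift upwards as the system loses its ability to bounce back: the numerical counterpart of the lengthening recovery time described above.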
In the field, after the collapse of an ecosystem, researchers collect masses of data (environmental variables) that bear witness to past events, and analyse them. Some have even gone so far as to trigger collapses of populations experimentally – in the laboratory – so as to test these indicators. For instance, in 2010, two researchers from the Universities of Georgia and South Carolina exposed populations of daphnia (zooplankton) to increasingly degraded conditions (a reduction in food availability) and clearly observed warning signals of population collapse: a critical slowing down in the population dynamics appeared up to eight generations before the collapse.2 Since then, similar findings have been observed for populations of yeast, cyanobacteria and aquatic ecosystems, but only under artificial and controlled conditions.3 In 2014, a team of British climatologists was even able to identify the warning signals that preceded the collapse of the Atlantic Ocean current over the course of the last million years, an event that, if it took place today, would drastically change our climate.4 But researchers still can’t say precisely whether such signals are being produced now.
New indicators are regularly added to the list of existing ones, improving our ability to predict catastrophic changes. For the climate, for example, it has been observed that, at the end of a period of glaciation, temperature variations start going haywire and ‘flickering’ before abruptly tipping over into a warm period.5 This subtle indicator also works for lake ecosystems6 but, although very reliable (it really does herald catastrophic changes), it only appears when it is too late to avoid them.
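As an illustration of how one might look for this ‘flickering’ (again a sketch under arbitrary assumptions, not the climatologists’ actual method), the following Python function simply counts, in a sliding window, how often a series jumps back and forth across a mid-line separating two states; the window size and the use of the series mean as that mid-line are conventions chosen purely for simplicity.

```python
import numpy as np

def flicker_score(series, window=200, threshold=None):
    """Count, in each sliding window, how often the series switches sides
    of a mid-line between two states; frequent back-and-forth excursions
    are read as 'flickering' between alternative basins."""
    series = np.asarray(series, dtype=float)
    if threshold is None:
        threshold = series.mean()                 # crude separator between the two states
    above = series > threshold
    scores = []
    for start in range(len(series) - window):
        w = above[start:start + window]
        scores.append(np.count_nonzero(w[1:] != w[:-1]))  # number of switches in the window
    return np.array(scores)

# Toy usage: a series that spends more and more time jumping into the alternative state.
rng = np.random.default_rng(2)
toy = np.where(rng.random(2000) < np.linspace(0.05, 0.5, 2000), 1.0, 0.0)
scores = flicker_score(toy)
print("flicker score, start vs end:", scores[:5].mean(), scores[-5:].mean())
```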
We cannot artificially disrupt a major ecosystem or a socio-ecological system for experimental purposes. So, researchers have so far contented themselves with observing natural or historical catastrophic changes without testing the predictions of these indicators in real life.
This method can nevertheless be used to classify systems according to the distance that separates them from a breakdown, i.e., according to their degree of resilience,7 and this could prove to be very useful in making decisions, especially in biodiversity conservation policies.
In 2012, the discipline of warning signals benefited from major advances made by specialists in interaction networks, who began to define clearly how very heterogeneous complex networks behave when subjected to disruption.8 For example, in a flowering meadow, imagine the immense web of relations between all the species of pollinators (bees, flies, butterflies, etc.) and all the pollinated plant species, where some species are specialists (confined to one flower) and others are generalists (they pollinate several species). This complex network of mutual interactions has a structure that makes it very resilient to disruptions (for example, the disappearance of some pollinators because of pesticides). On the other hand, observations, experiments and models show that these networks have hidden thresholds beyond which you must not venture, on pain of seeing the network suddenly collapse.
Figure 7.1 Typical responses of complex networks to disruptions
Source: after M. Scheffer et al., ‘Anticipating Critical Transitions,’ Science 338(6105), 2012: 344–8.
More generally, it has been shown that complex networks are very sensitive to two factors: the heterogeneity of, and the connectivity between, their constituent elements9 (see Figure 7.1). A heterogeneous and modular network (weakly connected, with relatively independent parts) will withstand shocks by adapting: it will suffer only local losses and will gradually become more and more damaged. A homogeneous and highly connected network, on the other hand, initially resists change because local losses are absorbed through the connectivity between elements. But if the disruptions continue, it becomes subject to domino effects and therefore to catastrophic changes. In reality, the apparent resilience of these homogeneous and connected systems is misleading, as it hides a growing fragility. Like the oak, such systems are very resistant but break when the pressure becomes too great. Conversely, heterogeneous and modular systems are resilient; they bend but do not break. Like the reed, they adapt.
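To make the oak-and-reed contrast concrete, here is a toy simulation in Python (not drawn from the studies cited): a simple threshold-contagion rule is applied to two hypothetical random networks, one homogeneous and densely connected, the other split into weakly coupled modules. All parameters (sizes, edge probabilities, failure threshold, shock sizes) are arbitrary assumptions, and real ecological or financial networks are of course far richer.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_adjacency(n, p):
    """Symmetric 0/1 adjacency matrix with independent edge probability p."""
    a = np.triu(rng.random((n, n)) < p, 1)
    return (a | a.T).astype(int)

def modular_adjacency(n_blocks, block_size, p_in, p_out):
    """Dense blocks, sparse links between them (heterogeneous, modular)."""
    n = n_blocks * block_size
    a = random_adjacency(n, p_out)
    for b in range(n_blocks):
        i = b * block_size
        a[i:i + block_size, i:i + block_size] = random_adjacency(block_size, p_in)
    return a

def cascade(adj, shock, frac_threshold=0.3):
    """Threshold contagion: a node fails once the failed share of its
    neighbours reaches frac_threshold. Returns the final failed fraction."""
    n = adj.shape[0]
    failed = rng.random(n) < shock                    # initial random shock
    degree = adj.sum(axis=1).clip(min=1)
    while True:
        pressure = adj @ failed / degree              # failed share among neighbours
        new_failed = failed | (pressure >= frac_threshold)
        if np.array_equal(new_failed, failed):
            return failed.mean()
        failed = new_failed

dense = random_adjacency(200, 0.15)                   # homogeneous, highly connected
modular = modular_adjacency(10, 20, 0.3, 0.005)       # weakly coupled modules

for shock in (0.02, 0.05, 0.10, 0.15, 0.25):
    print(f"shock {shock:.2f}: dense -> {cascade(dense, shock):.2f}, "
          f"modular -> {cascade(modular, shock):.2f}")
```

With these arbitrary parameters, the modular network tends to lose modules one by one as the shock grows, while the dense network shrugs off small shocks and then, past a critical shock size, fails almost entirely in one go; what matters here is the shape of the two responses, not the particular numbers.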
There are indeed parallels to be drawn between these natural systems and human systems, as we saw in chapter 5.10 These discoveries are fundamental when it comes to designing more resilient social systems, especially in finance and the economy. But even though network theory adds greatly to our understanding of economic and social networks, there are still many obstacles to overcome before we can find reliable warning signals in them. Current indicators are not enough to predict the tipping points of social systems, given their complexity. Attempts to develop warning signals have thus for the moment failed, or no consensus on them has been reached.11 Of course, we still have relevant indicators based on economic fundamentals when the situation is ‘normal’ but, as thresholds approach, it becomes impossible to evaluate anything. Some researchers have looked for signs of critical slowing down in financial systems but have not found any; instead, they have found other indicators that are not yet generalizable.12 In short, for financial crises, the study of warning signals gives us a better grasp of how they work but does not make them any more predictable.
Science may make fantastic progress, but science will always come up against epistemological limits.13 In this race against the clock we will always be running late,14 as detecting a warning sign does not guarantee that the system has not already tipped over into another state.
To complicate matters further, warning signals may appear without being followed by a collapse and, conversely, collapses can arise without giving any warning signal. Sometimes, too, systems collapse ‘gently’, in a non-catastrophic way.15 So we’re dealing with a true ‘biodiversity’ of system collapse. This means that the best warning signals are generalizable but not universal: their presence is not synonymous with certainty but rather with a high probability of collapse.
Finally, and this is especially true for social and financial systems, it is very difficult and expensive to harvest good quality data in real time, and it is impossible to identify all the factors that contribute to the vulnerability of hyper-complex systems. It seems then that we are doomed, for the moment, to being able to take action only after catastrophes.16
For a complex system like the Earth system (see the study published in 2012 in the journal Nature and cited at the end of chapter 3), it is actually impossible, given our current knowledge, to state that the presence of global warning signs heralds a collapse of ‘Gaia’ – and even less to give it a date. But, thanks to these studies, we have gained the ability to ‘possibilize’ this catastrophe by referring to past geological events and by assuming there is a probability that it will happen.
But beware: the existence of uncertainty does not mean that the threat is any the less or that we have nothing to worry about. On the contrary, it is the main argument in favour of the enlightened catastrophist policy proposed by Jean-Pierre Dupuy: to act as if these abrupt changes were certain and so do everything to make sure they do not actually occur.
In fact, tools for predicting tipping points are very useful to show us that we have crossed boundaries (see chapter 3) and that we are entering a danger zone. Unfortunately, this very often means that it’s already too late to hope for a return to an earlier, stable and known state. These tools allow us less to anticipate a specific date than to know what kind of future awaits us.
In collapsology, then, we need to accept the fact that we are not able to predict everything. This is a double-edged principle. On the one hand, we will never be able to say with any certainty that a general collapse is imminent (before having experienced it). In other words, sceptics will always be able to object on this basis. On the other hand, scientists will not be able to guarantee that we have not already seriously crossed certain boundaries, i.e., we cannot objectively assure humankind that the space in which it is living today is stable and safe. So there will always be grist to the pessimists’ mill.
So what are we to do? Remember the 2009 earthquake in L’Aquila in Italy, when scientists were convicted by the courts for not having provided a clear estimate of the probabilities of a potential earthquake. The catastrophe happened in spite of the measuring instruments. Remember, too, the period leading up to the banking crisis of 2008, when some very insightful commentators sounded the alarm but were obviously not listened to. They were able intuitively to pick up many signs of an imminent crisis, such as speculative bubbles in the US real-estate market and the sudden increase in the price of gold, which traditionally acts as a safe haven. But it was impossible for them to prove objectively and rationally what they were suggesting. The catastrophe happened without measuring instruments and in spite of the intuition of whistle-blowers. So how can we know? And who and what are we to believe?
Above all, not economic calculations or cost-benefit analyses – they’re useless! Because ‘as long as we are far from the thresholds, we can afford to mess with ecosystems with impunity’. There’s no cost, it’s all benefit! And as Dupuy points out, ‘if we approach critical thresholds, the cost-benefit calculation becomes derisory. The only thing that matters then is not to cross them. […] And we need to add that we don’t even know where the thresholds are.’17 Our ignorance, then, is not a matter of insufficient scientific knowledge that more research could remedy; it is consubstantial with the very nature of complex systems. In other words, in a time of uncertainty, it’s intuition that counts.