A Wittgensteinian case study in overcoming scientism
This chapter is an application of some of Wittgenstein’s thoughts on scientism to our knowledge of cases where we are threatened by potential devastation, and thus where clarity of thinking and action is most urgently needed – in particular, dangerous anthropogenic climate change. This project has not been attempted hitherto.
I argue that a properly ‘precautionary’ approach to ruinous threats escapes the criticisms normally made of it. By ‘ruinous’ I mean: where there is no ‘lid’ on the threat – no known (or even knowable) upper bound to it – that is, to threats whose downside could be indefinitely bad.
Such an approach, in the case of such threats, is frequently (1) necessary, rather than being a second-best to the attainment of ‘full scientific confidence’. I lean on Wittgenstein’s On Certainty (henceforth OC) to help show this.
Moreover, even in cases where one might in principle be disappointed not to have attained scientific confidence, a properly precautious approach – perhaps surprisingly, and certainly crucially – makes strong action to pre-empt such threats (2) more urgent and important than in cases where such certainty has been attained. This is because, where full scientific confidence has been attained, risk can be calculated; but incalculable risks are always potentially at least as harmful, if not more harmful: for, ex hypothesi, one then doesn’t know just how bad the harm will get.1
I defend my argument – again drawing on Wittgenstein’s On Certainty, and also on Nassim Taleb’s important work on risk and uncertainty – from the possible objection that the Precautionary Principle (PP) would immobilise one because it would result in having to consider any logically possible threat, however absurd. I argue that a properly precautionary approach (3) does not have to worry about absurd threats (such as the invasion of Earth by the Giant Pumpkin).2 Only a crude formalistic approach, which expects, hyperbolically, a complete scientific answer to the question of what threats one has to take seriously, rather than allowing us to use our judgement and our common sense, would suppose otherwise.
My argument is thus that precaution is an alternative to scientism: an alternative to the expectation that science be applied even where scientific reasoning alone is not called for. This chapter as a whole is in this sense an eminently Wittgensteinian enterprise. For Wittgenstein is the philosopher who, above all others, has shown us the necessary limits of the application of scientific methodology, and who has thus shown us the hubris and inappositeness of a scientistic approach: that is, an approach that assumes that the so-called ‘scientific method’ ought ideally to be applied to every problem.
Scientism is perhaps the dominant ideology of our time.3 It is so dominant, at least in most intellectual circles, that it is hard to see it at all. It is rather the sea we swim in.4 It is so hegemonic that it often fails even to register as a belief, as an ideology, as having live-option alternatives.
The dominance of this ideology registers in the extent to which scientism’s greatest foe, Wittgenstein, is marginalised in our culture in general and in philosophy in particular. It registers also in the extent to which it appears self-evident to most people in our culture that being ‘evidence-based’ is an unalloyed good. Of course, being ‘evidence-based’ is indeed good, if the alternative is being prejudiced, engaging in wishful thinking, or organising medicine or government by means of anecdote or superstition. But it is not necessarily better than being grounded in ethics. Or in precaution. But: can we really intelligibly and intelligently imagine precaution without evidence for precaution? In some cases, yes: we simply practise a via negativa. We avoid or prevent engendering new forms of potential pollution, for instance, without requiring any evidence at all that such pollution is actually harmful.
Moreover, much of what is called ‘evidence’ simply isn’t. For such ‘evidence’ is frequently not statistically significant, nor relevant to the long timescales on which ‘black swan’ events may occur; and when such events do occur they are not untypically ‘dominant’: they matter more than everything else combined. Thus it is too easy for being ‘evidence-based’ to amount to a very narrow and short-termist view, ignoring the rare, ignoring the dominant catastrophic occurrence – ignoring, in short, our ignorance.
This is the possibility I explore in the current essay: that a precautionary ethic is a possible and indeed a necessary alternative to the dogmas of scientism; that it steps outside the harmful confines of the mantra of being ‘evidence-based’, a mantra that is merely the latest cloak for a dangerous form of scientism; and that it can find a kind of philosophical support in the work of Wittgenstein.
Of course, there is absolutely nothing wrong with science as such. I am a (Wittgensteinian, Kuhnian) philosopher of science, and like virtually everyone in our culture I praise and admire its special achievements. The problem is when science is taken to be the only game in town; or when it is supposed to be unchallengeable; or when its ‘products’ or ‘offspring’ are supposed to inherit its epistemic strengths.
To pick up the last point there: tacitly, or sometimes even explicitly, the legion advocates of scientism today suppose that there ought to be a default assumption not just in favour of (genuine) science but also in favour of the adoption of any new technology (or in favour of following up ‘curiosity’ about anything that has been cultivated in any context; or of seeking a technologically ‘sweet’ solution of any kind to any problem). This is a presumption that I find hubristic and dangerous. This default assumption was exposed by Heidegger,5 as well as Wittgenstein.6 Our society operates on the basis of a problematic default assumption in favour of (technological) ‘progress’.7
Overcoming this myth of progress involves overcoming the extreme ‘Prometheanism’ and the lack of precaution8 endemic to our current technocracy. We are held captive9 by a myth of progress, so long as we do not step outside the assumption that there ought to be a default assumption in favour of the adoption of new forms of engineering: for instance, social engineering, genetic engineering, geo-engineering.
In what follows, I expound the precautionary alternative to such reckless scientism. Aping science, in the practice of technological intervention and ‘management’, is reckless when what is actually needed is a search for wisdom and a humbler attitude to the limits of our knowledge and power.
Here is perhaps the most widely accepted version of the PP relevant to public policy, from the 1990 ‘Bergen Declaration’, made by Ministers at the UN Economic Commission for Europe:
In order to achieve sustainable development, policies must be based on the precautionary principle. Environmental measures must anticipate, prevent and attack the causes of environmental degradation. Where there are threats of serious … irreversible damage, lack of full scientific certainty should not be used as a reason for postponing measures to prevent environmental degradation.10
The first point to make here, a point implicit and explicit in Wittgenstein (most notably, throughout On Certainty), is that it is folly in general to expect science to produce certainty. The popular image that science gives us or ought to give us certainty is awry. It is itself a symptom of scientism. Science yields knowledge, but knowledge in the normal case is exactly what is not certain. At the research frontier, by definition, science does not yield certainty. Certainty only eventuates from science where the field in question ceases to be science altogether, and becomes engineering/technology. Kuhn helpfully gives an example: ‘no paradigm that provides a basis for scientific research ever completely resolves all its problems. The very few that have ever seemed to do so (e.g., geometric optics) have shortly ceased to yield research problems at all and have instead become tools for engineering’.11 In other words: when geometric optics ceased to yield scientific research problems altogether, it morphed into optical engineering. It was then permanently certain, a tool of such engineering about which there was no argument, no research frontier. Just because of that, it was no longer science.
One can also say that what the paradigm in Kuhn’s sense takes for granted is certain. Paradigmatic science usually seems to – and normally does – have the same status as the framework of engineering, for its practitioners. This is correct, is supported by the broad thrust of Wittgenstein’s On Certainty, and is important. It should not be forgotten in the grip of, say, a Popperian (over-) enthusiasm for scientific change. However, we also know that it is conceivable that the paradigm may indeed change.
Thus the PP as defined in the Bergen Declaration is problematic, because it is itself a dupe of scientism. It relies on a notion of ‘full scientific certainty’ – a notion that is misplaced, as I have just sketched. At the research frontier, which is where science is alive, there is no such thing as scientific certainty. And even science’s paradigmatic assumptions are never quite as certain as those things that can be truly, permanently taken for granted, because there might be a scientific revolution just around the corner. Whereas the kinds of things that Moore sought to talk about, in his concept of ‘common-sense’, and the kinds of things that engineers rely on, that are not subject to the vicissitudes of paradigm shift, can be taken for granted without having to wonder about what might be around the corner.
However, in what follows I am going to largely ignore this point. I think one can charitably re-read the talk of ‘full scientific certainty’ in this influential version of this famous principle, as something like ‘maximal scientific confidence’. Henceforth, when addressing the idea of certainty contained in this seminal definition of the Principle, I will typically scare-quote the word ‘certainty’.
The PP applies pre-eminently to situations in which cost-benefit analysis (CBA) is inadequate because, there being no scientific ‘certainty’, there can be no strictly probabilistic calculation of risks. (In fact, even where such ‘certainty’ exists, there will still always be deep tensions between it and CBA, as Larry Lohmann’s work teaches us: the PP can in fact be applied even in such cases; but I won’t press that point further here.)12 The relevant situations are, furthermore, situations where the potential ‘downside’ is severe. This is one of the key claims made by Taleb and myself in our reformulation of the Principle:13 that it is decisive only in cases of catastrophic potential downside.
The PP has been exposed to sustained attack by philosophers and others,14 as well as by political opponents of precautious action in diverse fields (most notably, by so-called ‘climate-sceptics’, and also, especially recently, by advocates of genetically modified (GM) technology). I mean in this chapter to defend a version of it against all such attacks. I start by means of a discussion of the subtleties of the invocation of ‘lack of full scientific certainty’, in the above-quoted formulation of the Principle.
The lack of ‘full scientific certainty’ in such situations as are alluded to in the quotation above sounds as if it is something to be regretted. As already indicated, there is already something awry in this would-be regret, in that it neglects the extent to which science, if it is live science (with a research frontier), by definition lacks certainty in some respects.15 But, even bracketing that point, there is something deeper awry with the wish always to rely on science wherever possible.
What I have just said might sound strange. Philosophers today (indeed, this is a feature of Western culture more generally), often think of science as the always desirable first port of call, as the obviously-preferred mode of knowledge or inquiry,16 compared to which other modes are always at best a regrettable default. But, as already implied in the Introduction, this is wrong, for multiple reasons, two of which I explicate here:
(i) There are forms of inquiry and generating knowledge that are not even in principle amenable to ‘scientisation’.
By ‘scientisation’ I mean being cast in the form of a scientific enterprise genuinely rather than merely rhetorically. A merely rhetorical guise might be adopted, for example, in appeals to research councils for funding. Whether a more than merely rhetorical scientisation can always be in play is what my term aims to bring into prominence, and into question.
For instance, I have argued that virtually all of economics and ‘social science’ are not even in principle amenable to scientisation.17 To treat the objects of such inquiries as objects is to fail to appreciate their capacity to ‘answer back’: it is to fail to appreciate, for instance, that making an economic forecast (let alone acting on it) can end up being self-fulfilling or self-defeating; and one cannot know for sure, in advance, which of these two possibilities will eventuate, if either. (One also cannot know that, e.g., a monetary policy that relies on the best extant theory of money cannot be deliberately bypassed by some societal members or forces seizing on something else to use as money.) It is to fail to appreciate that a human ‘observer’ cannot in principle predict the human future, as any such prediction would alter the very future it aims to predict. (The scare-quotes around ‘observer’ are essential: there is no such thing as neutrally, purely spectatorially observing a human situation (consider someone ‘observing’ a child drowning), whereas there is such a thing as so observing a natural one.) It is to fail to reckon with human creativity (it is conceptually impossible to predict the results of future creativity, because to do so would be already to have achieved that creativity). It is to fail to understand what Peter Winch (following Wittgenstein) has taught us:18 that there is no understanding of human affairs without an understanding of the human beings who make those affairs happen as subjects who understand themselves as acting in particular ways. And it is to fail to understand the huge influence of unpredictable ‘Black Swan’ events in largely determining human affairs: such events tend to be ‘dominant’, because massive or irreversible events matter much more, in the long term, than events from which we can recover and resume our previous path.
The types of ‘uncertainty’ being described and alluded to here are radically different from the uncertainty found in stochastic physical systems, or, famously, in quantum physics. They are sui generis, and utterly resistant to a programmatic scientisation – as economists, sociologists, futurologists, etc. have found out again and again, to their cost.
Thus point (i) places an insuperable limit on any programme of social or human science. It suggests instead the need for humility, for humanistic (including sometimes historical) understanding, for cultural immersion, etc., if we want to understand how people and societies may put themselves collectively at catastrophic risk, or counter such potential risks.
(ii) There are forms of knowledge that are too basic to be scientific.19
I shall argue for this point using some of Wittgenstein’s On Certainty, written during his very final years. The ideas in this work have not previously been applied to thinking about the PP. But it is easy to do so. Let us take this remark, penned by Wittgenstein just days before death took him:
If someone believes that he has flown from America to England in the last few days, then, I believe, he cannot be making a mistake. // And just the same if someone says that he is at this moment sitting at a table and writing.20
If such a person turned out, apparently, to be speaking in contravention to the facts of the matter, and if they sincerely believed what they themselves were saying, then we might speak instead, of (say) a ‘mental disturbance’.21 It would be, as it were, too ‘big’ to be a mistake. If someone believes that there have been a few flights from America to England in the last few days, he might well be mistaken; there have (probably) been hundreds. But if he believes that he himself was on one of those flights, then his being wrong about that would imply some serious delusion on his part, or something along those lines.
In On Certainty, Wittgenstein investigates the fundamental structure of our knowledge, the relationship we have to propositions such as ‘I have two hands’, or ‘The Earth has existed for more than fifty years’. He argues that it is a complete misunderstanding of this kind of basic knowledge (if it is to be called ‘knowledge’ at all), to think of it as being itself provable, or establishable, in any way. In particular, the idea that we could scientifically validate such knowledge is quite simply absurd. For any effort that we made to verify or indeed to disprove such knowledge would itself take exactly such knowledge for granted, in the process.22
Such ‘basic knowledge’ cannot be meaningfully contravened.23 But neither can it be meaningfully undergirded or confirmed.
The ‘climate system’, in terms of our understanding of it as an actual system, is partly constituted by the knowledge referred to in (i), above; i.e. there are limits to our knowledge about the prospects for our planetary climate because that climate is partly determined by human action which is subject to the ‘constraint’ on knowledge described under (i), above.24
Meanwhile, as I shall show in section 3, below, the knowledge referred to in (ii), as explicated by Wittgenstein, can be invoked to undermine the argument (frequently used against the PP) that the PP founders on an alleged need to consider all threats, however outlandish, and approach them all equally precautionarily. On the contrary: there are many things that necessarily have to be (and thus are) taken for granted in order for there to be any science, and (more broadly) any knowledge or inquiry (including for instance precautionary inquiry), at all.
What I have tried to establish so far is that there is a swathe of (two classes of) crucial cases where taking a precautionary approach is not just second-best to some hoped-for scientific ‘certainty’, but is rather a constitutively necessary alternative to a ‘scientistic’ approach.25 Everywhere, in fact, outside the extremely rare cases of human practice where calculability is possible (e.g. the odds in a casino), and outside the kinds of cases where physical scientific knowledge is feasibly more or less able to be completed (on which, see section 2, below), we are in a realm where, strictly speaking, there are no strict (objective) probabilities, and so where a precautionary approach may be called for. Where we are exposed to ruinous threats in such a realm, it is called for. And so we can add that there is or would often be a very strong ethical case for the PP even were there to be strict probabilities where CBA is technically feasible, in such cases – because it’s not acceptable to gamble with some kinds of harm. It’s just not acceptable to put others or all of us at risk of very serious harm, no matter what the alleged ‘benefit’ that might accrue if the risk be taken.
In the public sphere, a lack of knowledge about the climate system is almost invariably taken to enjoin inaction. ‘More research is needed’, we are told, until we know enough to be ‘entitled’ to act strongly. And we are told this in part because it is supposed that we will benefit from continuing to act as we are acting (e.g. that we will benefit from ‘economic growth’), and that such benefits will be put at risk if we act differently. Thus it is supposed that we need to know more before taking serious precautious or mitigatory action.
But this is wrong. Even if the climate system were a system of which in principle we could hope eventually to attain a precise deterministic physical knowledge26 – i.e. certainty – and we were thus regretful of not already having such knowledge of it, the lack of such knowledge would not be a good reason for inaction. First, for the simple and powerful reason, familiar to us from the PP, that we often cannot afford in such cases to wait for such knowledge to be attained, because the threat is too pressing. But also, for the following even deeper reason: the presence of ‘fat tails’, of which we lack any precise knowledge, should make us more inclined to strong precautious action. Lacking deterministic knowledge of climate tipping points, of ‘1-in-1,000-year floods’, let alone of ‘1-in-1,000,000-year floods’ (and of how what used to be a 1-in-100-year flood may now be more like a 1-in-10-year flood), and so on, makes a precautious approach essential.
The less deterministic knowledge we have, the more scope there is for the worst-case scenario to be devastating. Thus my argument here is that (for example) even if ‘climate-sceptics’ are right that our climate models are unreliable, they ought (more strongly!) to cleave to the PP, to prevent climate disaster. Model unreliability cuts both ways: it means that things may turn out much less bad – or much worse – than we suppose.27
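The point about fat tails can be given a toy numerical illustration. The following is my own sketch, not part of the chapter’s argument; the distributions, means and units are hypothetical placeholders and model nothing about the real climate. It compares two imaginary ‘damage’ models with the same average harm, one thin-tailed and one fat-tailed (i.e. with no effective ‘lid’ on any single outcome):

```python
# Toy sketch: same average harm, radically different worst cases.
# All numbers are hypothetical placeholders.
import random

random.seed(0)
N = 100_000

# Thin-tailed model: harms cluster tightly around a mean of 3 'units'.
thin = [random.gauss(3.0, 1.0) for _ in range(N)]

# Fat-tailed model: a Pareto distribution whose mean is also 3
# (alpha = 1.5), but with no effective upper bound on any single draw.
fat = [random.paretovariate(1.5) for _ in range(N)]

# The worst thin-tailed outcome stays within a few units of the mean;
# the worst fat-tailed outcome dwarfs it, and dominates the whole
# series of draws.
print(max(thin))  # a handful of units above the mean
print(max(fat))   # orders of magnitude larger
```

On the fat-tailed model, the single worst draw can matter more than all the others combined; which is one way of seeing why ‘evidence’ gathered from ordinary years tells us so little about the tail.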
I strongly applaud the climate-modellers, including my world-renowned colleagues at the UEA Tyndall Centre, who are seeking to show us all what may happen to our planet’s atmosphere and ecosystem if greenhouse gas (GHG) emissions are unabated. It is especially valuable, in order to concentrate the mind, to have vivid scenarios (and here novelists, film-makers etc. may well be as vital as scientists)28 sketching what a climate-chaotic future, likely to be consequent on a business-as-usual GHG emissions scenario, might look like.29 But one of my fundamental points in this chapter is this: in a situation with a relatively small upside and an open-endedly large downside, detailed modelling is not needed. It is desirable, but not non-negotiably essential.
Pumping unprecedented amounts of gases such as CO2 and other pollutants into the atmosphere just isn’t very smart, irrespective of what the models tell us. The PP is a specification of the more general idea of a ‘via negativa’: live so as to simplify, not so as to make more complex and disturb. One should prefer to reduce disturbances to a system rather than to increase disturbances or even to add a new disturbance that will allegedly deal with that increase.
The decision of what to do (or otherwise) in terms of potentially disturbing our climatic system by pumping more and more GHGs into it comes out right simply as a ‘decision-theoretic’ one without benefit of the ‘knowledge’ that comes from modelling scenarios. This is really the deep lesson of the PP: the way it ‘translates’ into real-world action in a way that undermines crude assumptions about the alleged need for scientific knowledge.
You don’t need to know about the future at a level of detail, in terms of what models tell us ‘will’ happen, to know what to do (and, just as importantly, what not to do), if and when you are potentially exposed to ruin.30 Models (in the sense of scientific theories about how the climate system works) can obviously help show us that a potential downside is large. But if the science indicates that there is a real possibility – not merely a logical one – of a very large downside, and if the science itself cannot specify it or tell us how large, then the correct thing to do is to invoke the PP and act, rather than to spend millions on building ever more complicated computer models that will in any case never be able to predict the future, even if we spent every penny we have on them.31
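The ‘decision-theoretic’ shape of this reasoning can be sketched in a few lines. Again, this is my own schematic illustration: the cost figure, the credences and the function name are hypothetical placeholders, not outputs of any climate model:

```python
# Schematic sketch: precaution has a bounded cost; business-as-usual
# has an unbounded potential downside. Under those conditions, no
# precise probability estimate is needed to choose precaution.

PRECAUTION_COST = 1.0  # bounded, roughly-known cost of acting now


def expected_loss_of_waiting(credence, potential_damage):
    """Expected loss of carrying on, given some credence in catastrophe
    and some (possibly enormous) damage magnitude."""
    return credence * potential_damage


# However small the credence, if the potential damage is unbounded
# there is always a damage level past which waiting is the worse bet.
for credence in (0.1, 0.01, 0.001):
    break_even = PRECAUTION_COST / credence
    assert expected_loss_of_waiting(credence, 10 * break_even) > PRECAUTION_COST

print("precaution dominates for every non-zero credence tested")
```

The design point is that the conclusion does not depend on pinning down the credence: with an open-ended downside, every non-zero credence yields the same verdict, which is just what the chapter means by saying the decision ‘comes out right’ without detailed modelling.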
The reason why climate modelling has taken up such a prominent place in the struggles to save our common future from climatic devastation is simple to sum up, in one word: scientism. In the present case and others like it, scientism is the (biased) belief that without evidence, and without modelling, one has nothing. My argument is that precaution alone is enough to justify strong action to prevent the loading up of our atmosphere with unnatural and unprecedented levels of (greenhouse) gases. As already stated above, I have of course nothing against climate modelling; it is valuable. Modelling and precaution complement each other; but even without reliance on modelling, precautionary reasoning alone would do the trick. And it has the advantage of not being fragile to ‘climate scepticism’ and climate change denial: the more uncertain the future, the stronger the precautionary argument becomes.32
To sum up this section: that a threat cannot be stochastically computed makes action to pre-empt it more urgent and important than if it could be. For, in cases where scientific ‘certainty’ has been attained, risk can be calculated; but cases of non-calculable risk are always potentially more harmful. Thus the widespread notion of regret at (e.g.) our lack of ‘complete’ knowledge of the climate, and most crucially the inference that this lack gives us an excuse for inaction in relation to mitigating anthropogenic climate change, is the inverse of the rational and morally responsible attitude that should be taken.
The PP is often criticised as being itself a recipe for inaction, when it is ‘thought through’ to its ‘logical conclusion’.33 (Some, notably Cass Sunstein, even call it ‘the Paralyzing Principle’.34) For example, it is said that any action at all might carry dire risk; thus perhaps it is precautious to stay at home forever and not take the risk of getting knocked down crossing the road outside one’s house. It can easily be seen that this is wrong, because of course such action would itself be unprecautious: it would expose one to new dire risks (from lack of exercise, from the build-up of multiple fragilities in one’s system, etc.). We have to think of precaution in the round: we have to think of what is on balance precautious, of what does the opposite of ‘fragilising’ us further.
Does this mention of ‘on balance’ re-introduce a strictly probabilistic balance of risks? Does it re-introduce a space for CBA with regard to big decisions, rather than for Precaution? Yes and no, and no. ‘Yes’, inasmuch as, very roughly, the risks in such cases as mentioned above are partially calculable. Actuarial tables may tell you something about the risk you set yourself up for each time you cross the road (and the risk that you set yourself up for by not exercising!).35 But ‘no’ too, and (I think) in a deeper sense. It is absurd to think that one can make all or even most of one’s decisions in life through calculation (and here we again return to the territory of section 1, above): these decisions have to be made, largely, on the basis of what Wittgenstein called ‘imponderable evidence’,36 on the basis of heuristics, on the basis of values or ethics, and on the basis of a rationality that is not comprehensible in the way that Rational Choice Theorists37 or first-generation Cognitive Scientists, such as Kahneman and Tversky,38 seem to like to think of rationality. (Crucially: when thinking of rationality, they typically fail to think seriously of meta-probabilities.)
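The neglected role of meta-probabilities can itself be given a small numerical illustration. This sketch is mine, not the chapter’s, and the ‘volatility’ figures are wholly hypothetical; it shows only that being honestly uncertain about a risk parameter itself fattens the tail:

```python
# Sketch: two assessors agree the average spread (sigma) of possible
# outcomes is 2. One treats that as a point estimate; the other admits
# that sigma might really be 1 or might really be 3.
import math


def tail_prob(threshold, sigma):
    """P(X > threshold) for a zero-mean normal with spread sigma."""
    return 0.5 * math.erfc(threshold / (sigma * math.sqrt(2)))


point_estimate = tail_prob(6.0, sigma=2.0)
honest_mixture = 0.5 * tail_prob(6.0, sigma=1.0) + 0.5 * tail_prob(6.0, sigma=3.0)

# Same average sigma; but admitting uncertainty about sigma makes the
# extreme outcome several times more probable than the point estimate
# suggests.
print(honest_mixture > 5 * point_estimate)
```

A calculus of rationality that works only with point-estimate probabilities, and never with probabilities about those probabilities, will thus systematically understate extreme outcomes.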
And ‘no’ further, also, in that – crucially – while the argument that one should not expose oneself to unnecessary danger by leaving one’s house is desperately bad, the argument that we should not collectively expose ourselves to unnecessary danger by tampering with fundamental biology by means of genetic engineering technology, or by tampering with fundamental atmospheric physics by means of geo-engineering technology, for instance, is in my view relatively strong.39 Precaution itself militates against the former argument but, on balance, in favour of the latter argument.
What about the argument that one should stay at home permanently because one will then be less liable to be constantly surveilled by police et al. watching one constantly on CCTV cameras – or, for that matter, the CIA or invisible little men from Mars watching one constantly? In other words: what about the oft-quoted40 worry that the PP requires one to guard against a series of more or less paranoid or (more generally) absurd threats?
The PP is for situations that we might actually encounter. (And: that there is a grey area at the divide between what we might actually encounter and what we will certainly not, does nothing to undermine the force of this point.) It is required for real but non-calculable possibilities of serious harm (and, as mentioned earlier, it is relevant for plenty of cases even if and where harms can be calculated but with which we should not gamble). The PP is not required or relevant for situations so outlandish that we literally needn’t worry about them at all. Absurd threats – the threat for instance of the Giant Pumpkin wreaking a terrible revengeful havoc on the Earth, for some unknown slight – need not detain us. (For those as yet uncertain of this, we will turn to Wittgenstein momentarily, to undergird the thought.)
Nor need the alleged concern that there is no criterion to separate absurd threats from realistic threats detain us. It is true there is no algorithmic criterion; it is a matter rather of art or judgement. The distinction between absurd threats and credible threats is too fundamental for there to be any algorithmic criterion. It should itself be seen as an instance of the kind of ‘basic knowledge’ that, following Wittgenstein, we outlined in section 1, above. (Thus David Runciman’s popular rendition of the argument against the Principle, a rendition based on the notion that the PP cannot discriminate between absurd pseudo-threats and (non-absurd) threats, fails.41)
Again, Wittgenstein’s On Certainty can help us to understand – and to undergird – the main philosophical point I have made. A fundamental principle established in On Certainty is this: doubts not only come to an end somewhere,42 but they require grounds in the first place.43 And Wittgenstein goes further: compare the following important remarks from On Certainty:
If you tried to doubt everything you would not get as far as doubting anything. The game of doubting itself presupposes certainty. (§115)
A doubt that doubted everything would not [even] be a doubt. (§450)
If we seek to imagine the PP as applying to every conceivable contingency, however outlandish, then we are imagining just such hyperbolic doubt. We would then be engaging in a quasi-Cartesian enterprise. Such an enterprise does not need engaging in – nor answering. It can simply be ruled out, as not just unnecessary, not even just self-defeating, but in fact as chronically ill defined. It lacks a determinate sense. The distinction between absurd doubts or threats and credible doubts or threats is, following Wittgenstein, too basic for there to be any algorithmic criterion for it.
Human beings are dangerously inclined to under-prepare for (i.e. we inadequately mitigate, ahead of time, against) dangerous contingencies that can be prepared for and mitigated against by a series of precautionary policies and decisions that simplify and (in Taleb’s phrase) ‘anti-fragilise’ the systems we construct and of which we are part. That is: we make those systems not only resilient against but able to profit from non-fatal shocks. Together, we can reduce our exposure to risk and uncertainty by reducing our level of fragility. But we believe this, of course, only for real contingencies. Philosophy, after Wittgenstein (and Popper, and Taleb),44 encourages us to prepare as best we can against the (catastrophic) ‘black swan’, not against the Giant Pumpkin.
To sum up this section: endorsing the PP does not force one to have to take action against absurd threats. Only an absurd scientistic notion of being able to compute all potential threats (see section 2, above), and an absurd scientistic notion of our not being able to judge the difference between sane and insane concerns, would lead one to think that it did.
With absurd threats sidelined, we can focus our attention where sanity demands it be focused. Turning once more to the implications for the crucial case of existential risk upon which we have been focusing here, the risks consequent upon messing with our climate: we ought to be deeply worried by the unquantifiable risk of the breakdown of ecological systems consequent upon unrestrained economic growth terminally wrecking our climate system. Weighed in the balance against the comparatively trivial harms allegedly caused by loss of further economic growth, such uncertainties about a possible end to civilisation are, literally, overwhelming.45
In this chapter, I have sought, drawing on Wittgenstein and on philosophers who have learnt from him, to show how, far from being some recherché piece of philosophy exposed to damning objections, the PP should actually be seen as entirely defensible, in part because it is a kind of codification of (as we might risk putting it) moral common-sense at its best. It says that in the case of serious potential threats that are real and have grounds, one shouldn’t run catastrophic risks. One should search for a way of avoiding such risks without generating other such grave risks.
I have, as one might risk putting it, sought to naturalise the PP: to show how it simply falls out of a more general rational and moral attitude, of what I (following Taleb) have called ‘anti-fragility’, which is enjoined independently of particular, perhaps controversial, scientific modellings.
Scientistic renditions of climate science narrow our view of what to do about the climatic predicament we find ourselves in, by limiting the kind of thinking we engage in when comprehending that predicament to scientific thinking and pseudo-scientific thinking. Such renditions, furthermore, fixate us narrowly upon evidence and modelling. They inflate estimations of our predictive capacity and our capacity for certainty. Dangerously, they thus risk weakening the willingness to act to reduce our exposure to severe, irreversible harm, once there is any challenge to those predictions. I have tried to show how the PP can overcome these hazards, and how Wittgenstein can help one to overcome the dubious resistance to the PP.
As is, I hope, obvious, one key reason for engaging in this whole exercise is to address the extraordinarily widespread mindset that seeks complete scientific unanimity prior to costly action. Speaking for myself, I have a good and increasing degree of confidence in the models that show how the danger of the build-up of GHGs escalates from two degrees of over-heating upwards, and in those that show what kind of concentrations of GHGs are likely to lead to such over-heating effects. But the reader need not share that confidence in order to be convinced of the conclusion that I have drawn in this chapter: that strong precautionary action, to build down the climatic sword of Damocles hanging over us all, is mandated. That broadening of the appeal of the case for climate action, via the PP, is a key conclusion of this chapter. We can and should all act on the climate threat, even if there isn’t general agreement on the science that models our likely climate future(s). For we ought to think of dangerous climate change like a volcano that might well erupt, and yet whose eruption we could make less likely by preventing the discharge of a certain pollutant. Even if we did not have a clear picture of how likely the eruption was and how bad it would be, it would be desperately irresponsible not to warn clearly of the possibility of eruption and of its possible extremely harmful effects, and, more crucially still, not to desist from destabilising the volcano further by continuing to pump out the pollutant that we knew to be altering the more or less balanced state currently in place in the geophysical system.
‘We need more science’ is thus, in the context of the climate threat and threats like it, a dangerous, prevaricating move. One might most usefully see the real problems surrounding issues of precautious action and our failure to undertake it as questions of will: deep questions of the political will and the ethical integrity to face our actual situation and do what is necessary, even if that involves painful changes and the giving-up of various things or trends that we have grown used to.46 These thoughts bring to mind Wittgenstein’s desperately important thought that philosophical problems are, in the end, more problems of the will than of the intellect.47
I have been clearing the ground for facing up to these things. I have, in that sense, been creating the conditions for a more honest and more moral conversation about climate, risk, uncertainty, precaution, etc. In particular: the PP, properly understood, starts to put into question the dogmatic, still-hegemonic ideology of ‘progress’ that Wittgenstein famously and explicitly questioned.
When the ground is cleared of scientism, then and only then will precaution be able to take its rightful place at the very foundation of sane thinking about our common future, about technology, and so forth. This shows the stakes of establishing the cultural meaning of Wittgenstein’s philosophy, and of allowing that philosophy to be received. If Wittgenstein’s philosophy were to be received, inherited and the tide of scientism to turn – if it became possible to see scientism, and for it no longer to be merely transparent to us – then at last there would be a real chance of precaution rather than recklessness becoming our common-sense.48
1 On which matter, cf. Henry Shue’s recent talk, representing his forthcoming work in this area: www.youtube.com/watch?v=JHctUFWrqlc.
2 And after all, realistically, in policy-making humans cannot work without there being a margin of uncertainty. This has not, however, paralysed decision-making in most areas of policy-making. So why must it seemingly impair decision-making in environmental policy? (Thanks to Vera van Gool for this point.)
3 Rival candidates for this ‘title’ include liberalism, individualism and managerialism. In fact, all these are natural bedfellows for scientism, as outlined in my work elsewhere, especially Wittgenstein among the Sciences (Farnham: Ashgate, 2012).
The PP itself might be viewed as a kind of translation of certain ‘commons’ norms into the quasi-scientistic ‘planning’ language of industrial capitalism – like all translations, this has certain consequences that have to be carefully explored. The instinctive attraction that the Principle has for a lot of us ecologically-minded people is, I suspect, at bottom an allegiance to a broadly commonsian understanding, and is also rooted in an appreciation of the dialectic between commons and capital. This is another way of saying that the deeper issues that the present chapter throws up might ultimately be better pursued not (just) through a defence or parsing and re-parsing of the Principle, but through movement-building based more explicitly and firmly in the commons politics that underlies our attraction to it, i.e. a more historical, anthropological and political as well as philosophical approach, informed by understanding the struggles of peasant and indigenous societies but also by struggles in industrial societies over the creation of new commons, etc. A place to start in this connection is to democratise the PP: to ensure that it is a matter of and a resource for debate and understanding among citizens, not merely for elites. That is why I think the thinking of Andy Stirling, Brian Wynne and others on how to find and place precaution in the agora important.
4 Consider here the following pertinent remark of Wittgenstein’s to his student, Drury: ‘My type of thinking is not wanted in this present age, I have to swim so strongly against the tide. Perhaps in a hundred years people will want what I am writing.’ Drury, M. O’C., ‘Some notes on conversations with Wittgenstein’, in Rush Rhees (ed.), Recollections of Wittgenstein (Oxford: Blackwell, 1984), p. 94. That tide was, above all, scientism.
5 E.g. in ‘The Question concerning Technology’ (‘Die Frage nach der Technik’), in The Question Concerning Technology and Other Essays (tr. William Lovitt) (New York and London: Garland, 1977).
6 See e.g. my ‘Wittgenstein and the illusion of progress’, www.youtube.com/watch?v=hEPcQ6sIOTY.
7 I challenge the ideology of progress in depth in my ‘Wittgenstein and the illusion of progress’, in Philosophy, Royal Institute of Philosophy Supplement 78: 265–284 (2016).
8 On which, see my joint NYU School of Engineering Working Paper article on ‘The precautionary principle’ with Nassim Taleb et al., here: www.fooledbyrandomness.com/pp2.pdf.
9 Cf. Philosophical Investigations §105.
10 Quoted on p. 115 of Cameron’s ‘The precautionary principle in international law’, in Tim O’Riordan, James Cameron and Andrew Jordan, Re-interpreting the Precautionary Principle (London: Cameron May, 2001). It might immediately be objected that this and other classic formulations of the PP don’t prevent use of CBA, so long as the potential loss can be in some way computed. But even if this is true, it is very unclear that the loss can really be computed in the case in question: Is it anything less than an exercise in obscenity and absurdity to seek to put a ‘price’ on the potential end of civilisation, for instance?
11 Structure of Scientific Revolutions (Second Ed.) (Chicago: University of Chicago Press, 1970 [1962]), p. 79.
12 See e.g. The Corner House, ‘The Cost-Benefit Analysis Dilemma: Strategies and Alternatives’, Economic and Political Weekly 36:21 (26 May–1 June 2001), pp. 1824–1837.
13 Some people (including e.g. Lord Stern in Stern Review, 2006) seem to view the PP and CBA as connected – either through thinking of CBA as amendable to be more ‘precautionary’ or (e.g. in the case of Dupuy and Grinbaum, see: ‘Living with uncertainty: from the precautionary principle to the methodology of ongoing normative assessment’ in Geoscience 337 (2005)) through thinking of the PP as usable only through using some version of CBA (I return to this latter point below). My view, which preserves the distinction between the PP and CBA, is close to that of Stirling 2001, ‘The Precautionary Principle in science and technology’, in Re-interpreting the Precautionary Principle, ed. Tim O’Riordan, James Cameron and Andrew Jordan (op. cit.).
It might be argued that a Bayesian stance, focusing on ‘subjective probabilities’, circumvents the problems with ‘standard’ CBA, in that such a stance is not committed to being able to compute numbers to objectively measure the risk. But a Bayesian approach of course moves further away from any direct concern with objective probabilities that can output numbers that can then actually represent the real level of a risk. It is the absence of such numbers that lies at the root of my stress on the ubiquity of uncertainty (and thus the relevance of the PP). Moreover, even in a Bayesian approach there is no progress without numbers, without computing results. But I am suggesting that the use of numbers to measure risk may itself be a founding delusion, hereabouts, except where there are objective probabilities (e.g. in a non-crooked casino). And that what is preferable – and available – is a precautionary approach that can work without the need to resort to probabilities, no matter of what kind. (The exception is only when, as in much of Taleb’s work, ‘probabilities’ are being represented only as the logic of a situation, not as something to which numbers can literally be ascribed.)
14 This can be clearly seen in for instance Stephen Gardiner’s influential piece, ‘A core Precautionary Principle’ in the Journal of Political Philosophy 14 (1): 33–60 (2006). Gardiner means to be defending the Principle against widespread criticisms that other philosophers have made, but, by my lights in the present piece, some of the concessions he makes amount to an attack on the Principle.
15 For discussion, see e.g. pp. 78–80 of the second edition of Kuhn’s Structure of Scientific Revolutions.
16 Even leaving aside here the valid point, implicit in my Wittgenstein among the Sciences, that even the idea of natural science as a unitary phenomenon is itself questionable. For the most compelling argument to that conclusion, see ‘The disunity of science’ by John Dupré (Mind, New Series 92(367) (July 1983), 321–346).
17 See especially my There Is No Such Thing as a Social Science (co-authored with Phil Hutchinson and Wes Sharrock) (Farnham: Ashgate, 2012) and Wittgenstein among the Sciences (Farnham: Ashgate, 2012). See also Nassim Nicholas Taleb’s Antifragile (London: Penguin; 2012), www.fooledbyrandomness.com/pinker.pdf, and The Black Swan (New York: Random House, Second Ed., 2010), works which have been a huge influence on this chapter.
18 See his The Idea of a Social Science (London: Routledge, 1990 (1958)).
19 I use the word ‘forms’ advisedly: in the sense, roughly, of (say) potter’s forms. These may, in other words, be modes of expression or of thinking that give knowledge a form, a shape, rather than themselves instantiating what is best called knowledge; it may well be that the word ‘knowledge’ is not ultimately the most helpful word, here. What Wittgenstein discusses at some length, in On Certainty, is the very status and nature of such basic ‘knowledge’.
20 Wittgenstein, On Certainty (Oxford: Blackwell, 1969) §675; emphasis in the original. Wittgenstein is of course explicitly contesting here the legacy of Descartes, the founder of modern Western philosophy, who sought to suggest that everything could potentially be doubted.
21 See e.g. OC §71.
22 A similar argument in the phenomenological tradition can be found in the early chapters of Michel Henry’s Barbarism (London: Continuum, 2012).
23 Which cannot, however, contra Moore and Descartes alike, be used as a foundation to build an epistemological edifice on.
24 It might seem that this is not so. It might seem that the question ‘What will the climate be like in X years’ time if we emit Y quantity of GHGs?’ has a definite answer, regardless of human action. But this isn’t the case. For the question can at best be answered on a ceteris paribus basis. But ceteris never is paribus, when human beings are to some extent or another aware of what is happening or reacting to it. For example, the climate, even at a given level of GHGs, will be radically different if we geoengineer; or if we cut down the Amazon rainforest; or if we restore it.
There is, moreover, a whole literature discussing the ‘climate determinism’ that is today in some senses furthered and elaborated by General Circulation Model methodology; see for instance Mike Hulme’s work.
25 I do not of course mean to imply that there is any conflict between science and the PP. Far from it. Indeed, Andy Stirling’s 2001 paper (op. cit.) claims that the PP is more scientifically appropriate than ‘narrow risk’ approaches (by which he means CBA), because it more genuinely reflects the complexities of uncertainty.
26 One reason why it isn’t, is given in section 1 above; another reason, which we shall not press here, is the broadly Mandelbrotian nature of the climate system even considered as a physical system. Furthermore, even deterministic systems are not necessarily predictable beyond a rudimentary, short-term level.
27 Cf. http://www.blackswanreport.com/blog/2015/05/our-statement-on-climate-models/. This only fails to follow if the model is total rubbish: e.g. if one has no good reason to believe in the greenhouse effect at all. But few ‘climate-sceptics’ are so bold as to be physics-deniers, who would be rendered unable to explain why (for instance) the Earth’s surface is warmer than the Moon’s.
28 A first-class example is Erik M. Conway and Naomi Oreskes, The Collapse of Western Civilization: A View from the Future (New York: Columbia University Press, 2014).
29 It is perhaps worth noting here the irony that there is a problem inherent in using precautionary action efficaciously: the problem of not being able to show that it has worked (even if – or rather, especially if – it in fact has done so). For, when it works, we can’t of course show people the counter-factual. That is to say that, sadly, we can only see just how badly we need to act precautiously by observing what happens in cases where we don’t or didn’t. We need to learn from such cases in the past, before risking our entire future.
30 At most, you need to know that the kinds of harms you want to prevent are worse than the kinds of harms you might risk from taking ‘action’ or failing to do so.
31 Thus this itself is a point against scientism. For, that it might be assumed that such computers could model the future, when in principle they could not, illustrates the conflating of a conceptual problem with an allegedly scientific hypothesis. This is directly connected with Wittgenstein’s own criticisms of scientism, as he thought just this problem was endemic in philosophy, particularly in metaphysics (cf. the Blue Book (Oxford: Blackwell, 1969 [1958]), especially pp. 18 and 35). Add to this the ultimately spurious nature of the techniques used to estimate probabilities of outcomes in such models: spurious, most centrally for the kinds of reasons given in (1), (i) above.
32 See http://www.blackswanreport.com/blog/2015/05/our-statement-on-climate-models/ and http://www.theguardian.com/environment/climate-consensus-97-percent/2014/apr/04/climate-change-uncertainty-stronger-tackling-case.
33 For detailed discussion, see Chapter 2, section 3.4 (ii) of Ruth Makoff’s unpublished 2011 UEA Ph.D thesis, ‘Ethical criteria to guide an international agreement on climate change’.
35 And there are of course many globally ‘non-scalable’ cases where a rough-and-ready ‘balance of probabilities’ approach can be fairly harmlessly taken: e.g. the decision, as a woman over 40, of whether or not to have regular mammograms. As explicated by Taleb in the opening pages of Chapter 22 of Antifragile, it is by no means clear that such a calculation suggests that the rational thing to do, given the evidence, is to have regular mammograms. (I also note however that there are elements of this case too that are in any case not amenable to rational calculation: such as the question of being nagged by doubts in relation to a decision which might have serious consequences for one’s body-image, physical attractiveness, etc.) To generalise: randomised controlled trials and the like give one a useful evidentiary basis for making decisions in situations which, while not strictly calculable, as odds in a casino are, can nevertheless generally be roughly calculated. Such are not however ‘fat-tail’ cases; they are not the kinds of cases most commonly encountered in the social/economic ‘sciences’ (which we explored in section 1, above), and often, similarly, in the environmental sciences.
36 See his beautiful discussion at p. 228 of section xi of ‘Part II’ of Philosophical Investigations (Oxford: Blackwell, 1997 [1953]).
37 Consider here for example the way in which the uber-Rational Choice Theorist Jon Elster gradually came to realise that for many decisions it is or would be irrational to make them by means of the canons of Rational Choice Theory.
38 Compare here Lakoff and Johnson’s critique (e.g. in Philosophy in the Flesh (New York: Basic, 1999)) of Kahneman and Tversky et al.: the latter are not (as they think they are) showing us that we are all almost constantly more or less irrational; they are only ‘showing’ us that our idea of rationality as calculative, explicit, conscious, etc. is a prejudice that we need to overcome. And that that narrow idea of rationality does not properly encompass the way in which common-sense reasoning often implicitly includes an intelligent concern with meta-probabilities – that is, with reasons for being concerned about uncertainties present or inherent in calculations of risk. (Kahneman and Tversky are also sometimes manifesting examples of inadvertently poor experimental design on their part and that of other psychologists.)
39 On this, see once more my joint article on the PP with Taleb et al.
40 See e.g. David Runciman’s London Review of Books piece: www.lrb.co.uk/v26/n07/david-runciman/the-precautionary-principle. See also David B. Resnik, ‘Is the precautionary principle unscientific?’, Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 34(2) (2003): 329–344.
41 See again www.lrb.co.uk/v26/n07/david-runciman/the-precautionary-principle. Runciman’s approach is I think clearly derived from that of Sunstein. Sunstein’s book Laws of Fear (Cambridge: Cambridge University Press, 2009), and his approach generally, are highly problematic, but the book does have one virtue: it allows some space for an ‘anti-catastrophe principle’, which is evidently not a million miles away from my interpretation of the PP. Runciman would have done well to have learnt from this moment in Sunstein’s book.
42 See OC §§110, 130 and 204.
43 See OC §§4, 516 and 122.
44 Here I am thinking especially of Taleb’s important broadly Popperian work of contemporary public philosophy, The Black Swan.
45 For amplification, see Larry Lohmann’s impressive arguments against CBA and for precaution, at www.thecornerhouse.org.uk; and compare Carl Cranor’s paper ‘Toward understanding aspects of the precautionary principle’ (Journal of Medicine and Philosophy 29:3 (2004), pp. 259–279), which (at pp. 267–272) rebuts the alleged advantage of CBA over precaution that the former is supposedly more normatively ‘neutral’.
46 These problems of course extend then to the problems of asymmetric effects (the ‘sacrifices’ needed to prevent the likelihood of runaway climate change come mainly now, the benefits come mainly much later), of the tragedy of open-access (the ‘sacrifices’ needed benefit the sacrificer themselves only very marginally, in material terms, thus leading to a potent ‘free-rider’ problem), of despair at what to do in light of dysfunctional political systems and systems of international governance, etc.
In relation to the last point: there is little basis for precautious action if one has no confidence that the threat facing us is one we can possibly succeed in fighting. If the Earth were to be faced, say, with the possible sudden coming of a devastating asteroid storm that, if it came, would be so vast as to be clearly beyond human powers to resist, then the PP would not apply. But of course to think that anthropogenic climate change is like that is surely itself to slip into the mode of thinking I opposed in (1), (i), above: it is to slip into a fatalism that is inappropriate, with regard to human action, where, as philosophers from Pascal to James have brought us to understand, the great danger of a belief that we cannot succeed in addressing a grave threat is not that the belief is true, but that the belief is self-fulfilling.
47 See p. 161 of Wittgenstein’s Philosophical Occasions (Indianapolis: Hackett, 1991; edited by James Klagge and Alfred Nordmann).
48 Many thanks to the editors and to Jenni Barclay, Vera van Gool, Nick Cameron, Alex Haxeltine, Vlad Vexler, Angus Ross, Larry Lohmann and Ruth Makoff for extremely helpful comments on earlier drafts of this chapter. And deep thanks to audiences of the Future of Humanity Institute (Oxford) and elsewhere, whose thoughts on this material have been invaluable. Thanks also to Tess Read. Thanks above all to Larry Lohmann and Nassim Taleb, whose work has directly led to and further stimulated key insights in this chapter.
Cameron, James, 2001. ‘The precautionary principle in international law’ in Tim O’Riordan, James Cameron and Andrew Jordan (eds), Re-interpreting the Precautionary Principle (London: Cameron May).
Conway, Erik and Oreskes, Naomi, 2014. The Collapse of Western Civilization: A View from the Future (New York: Columbia University Press).
Cranor, Carl, 2004. ‘Toward understanding aspects of the precautionary principle’. Journal of Medicine and Philosophy 29(3): 259–279.
Drury, Maurice O’C., 1984. ‘Some notes on conversations with Wittgenstein’ in Rush Rhees (ed.), Recollections of Wittgenstein (Oxford: Blackwell).
Dupré, John, 1983. ‘The disunity of science’, Mind New Series 92(367): 321–346.
Dupuy, Jean-Pierre and Grinbaum, Alexei, 2005. ‘Living with uncertainty: from the precautionary principle to the methodology of ongoing normative assessment’. Geoscience 337(4): 457–474.
Gardiner, Stephen, 2006. ‘A core precautionary principle’, Journal of Political Philosophy 14(1): 33–60.
Heidegger, Martin, 1977. ‘The question concerning Technology’ (‘Die Frage nach der Technik’), in The Question Concerning Technology and Other Essays (tr. William Lovitt) (New York and London: Garland).
Henry, Michel, 2012. Barbarism (London: Continuum).
Kuhn, Thomas, 1970 [1962]. Structure of Scientific Revolutions (Second Ed.) (Chicago: University of Chicago Press).
Lakoff, George and Johnson, Mark, 1999. Philosophy in the Flesh (New York: Basic).
Read, Rupert, 2012. Wittgenstein among the Sciences (Farnham: Ashgate).
Read, Rupert, 2015. ‘When the rise of the robots meets the limits to growth’. Green House think tank. Available at: http://www.greenhousethinktank.org/uploads/4/8/3/2/48324387/response_to_pearmain.pdf.
Read, Rupert, 2016. ‘Wittgenstein and the illusion of progress’, in Philosophy, Royal Institute of Philosophy Supplement 78: 265–284.
Read, Rupert, Hutchinson, Phil and Sharrock, Wes, 2012. There Is No Such Thing as a Social Science (Farnham: Ashgate).
Resnik, David B., 2003. ‘Is the precautionary principle unscientific?’, Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 34(2): 329–344.
Runciman, David, 2004. ‘The Precautionary Principle’, London Review of Books, 26(7): 12–14. Available at: www.lrb.co.uk/v26/n07/david-runciman/the-precautionary-principle.
Stirling, Andrew, 2001. ‘The Precautionary Principle in science and technology’, in Tim O’Riordan, James Cameron and Andrew Jordan (eds.), Re-interpreting the Precautionary Principle (London: Cameron May).
Sunstein, Cass, 2009. Laws of Fear (Cambridge: Cambridge University Press).
Taleb, Nassim Nicholas and Read, Rupert, et al., 2016. ‘The Precautionary Principle’ (NYU School of Engineering Working Paper), available at www.fooledbyrandomness.com/pp2.pdf.
Taleb, Nassim Nicholas, 2010. The Black Swan (Second Ed.) (New York: Random House).
Taleb, Nassim Nicholas, 2012. Antifragile (London: Penguin).
The Corner House, 2001. ‘The Cost-Benefit Analysis Dilemma: Strategies and Alternatives’, Economic and Political Weekly 36(21): 1824–1837. Available at: www.thecornerhouse.org.uk/resource/cost-benefit-analysis-dilemma.
Winch, Peter, 1990 [1958]. The Idea of a Social Science (London: Routledge).