world there are no causes, so every sub-population is homogeneous in all causal factors, and the maximal homogeneous population is the whole population. So if there is any C such that Prob(E/C) > Prob(E), it will be true that C causes E, and this world will not be a Hume world after all.
Apparently the laws of association underdetermine the causal laws. It is easy to construct examples in which there are two properties, P and Q, which could be used to partition a population. Under the partition into P and ¬P, C increases the conditional probability of E in both sub-populations; but under the partition into Q and ¬Q, Prob(E/C) = Prob(E). So relative to the assumption that P causes E, but Q does not, 'C causes E' is true. It is false relative to the assumption that Q causes E, and P does not. This suggests that, for a given set of laws of association, any set of causal laws will do. Once some causal laws have been settled, others will automatically follow, but any starting point is as good as any other. This suggestion is mistaken. Sometimes the causal laws are underdetermined by the laws of association, but not always. Some laws of association are compatible with only one set of causal laws. In general laws of association do not entail causal laws; but in particular cases they can. Here is an example.
Consider a world whose laws of association cover three properties, A, B, and C; and assume that the following are implied by the laws of association:

(1) Prob(C/A) ≠ Prob(C)
(2) Prob(C/B & A) > Prob(C/A); Prob(C/B & ¬A) > Prob(C/¬A)
(3) Prob(C/B) = Prob(C)
In this world, B →± C. The probabilities might for instance be those given in Chart 1. From just the probabilistic facts (1), (2), and (3), it is possible to infer that both A and B are causally relevant to C. Assume, for reductio, that B is not causally relevant to C: ¬(B →± C). Then by (1), A →± C, since the entire population is causally homogeneous (barring A) with respect to C and hence counts as a test population for A's effects on C. But if A →± C, then by (2), B →± C, contrary to the assumption. Therefore B →± C. But from (3) this is not possible unless A is also relevant, either positively or negatively, to C. In the particular example pictured in the chart, A and B are both positively relevant to C.
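The inference can be checked against concrete numbers. Below is a minimal sketch in Python with a hypothetical joint distribution over A, B, and C, invented for illustration (it is not the distribution of Chart 1), under which facts (1), (2), and (3) all hold at once:

```python
# A hypothetical joint distribution over (A, B, C), chosen so that the
# probabilistic facts (1)-(3) hold together; the numbers are illustrative
# and are not the ones in Cartwright's Chart 1.
joint = {
    (1, 1, 1): 0.100, (1, 1, 0): 0.025,
    (1, 0, 1): 0.225, (1, 0, 0): 0.150,
    (0, 1, 1): 0.150, (0, 1, 0): 0.225,
    (0, 0, 1): 0.025, (0, 0, 0): 0.100,
}

def prob(pred):
    """Probability of the event picked out by pred(a, b, c)."""
    return sum(p for (a, b, c), p in joint.items() if pred(a, b, c))

def cond(pred, given):
    """Conditional probability Prob(pred / given)."""
    both = prob(lambda a, b, c: pred(a, b, c) and given(a, b, c))
    return both / prob(given)

C = lambda a, b, c: c == 1
A = lambda a, b, c: a == 1
notA = lambda a, b, c: a == 0
B = lambda a, b, c: b == 1
B_and_A = lambda a, b, c: b == 1 and a == 1
B_and_notA = lambda a, b, c: b == 1 and a == 0

# (1) Prob(C/A) differs from Prob(C): 0.65 vs 0.5
print(round(cond(C, A), 3), round(prob(C), 3))
# (2) B raises the probability of C in both cells of the +/-A partition:
print(round(cond(C, B_and_A), 3), round(cond(C, A), 3))      # 0.8 > 0.65
print(round(cond(C, B_and_notA), 3), round(cond(C, notA), 3))  # 0.4 > 0.35
# (3) Yet B is probabilistically irrelevant to C in the whole population:
print(round(cond(C, B), 3), round(prob(C), 3))               # 0.5 vs 0.5
```

Here B raises the chance of C inside both cells of the ±A partition, yet is probabilistically irrelevant to C in the population at large; that combination is exactly what forces A's causal relevance in the argument above.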
This kind of example may provide solace to the Humean. Often Humeans reject causal laws because they have no independent access to them. They suppose themselves able to determine the laws of association, but they imagine that they never have the initial causal information to begin to apply condition C. If they are lucky, this initial knowledge may not be necessary. Perhaps they live in a world that is not a Hume world; it may nevertheless be a world where causal laws can be inferred just from laws of association.
4. Conclusion
The quantity Prob(E/C.K_j), which appears in both the causal condition of Part 1 and in the measure of effectiveness from Part 2, is called by statisticians the partial conditional probability of E on C, holding K_j fixed; and it is used in ways similar to the ways I have used it here. It forms the foundation for regression analyses of causation and it is applied by both Suppes and Salmon to treat the problem of joint effects. In decision theory the formula SC is structurally identical to one proposed by Brian Skyrms in his deft solution to Newcomb's paradox, and elaborated further in his book Causal Necessity.25 What is especially significant about the partial conditional probabilities which appear here is the fact that these hold fixed all and only causal factors.
The choice of partition, {K_j}, is the critical feature of the measure of effectiveness proposed in SC. This is both (a) what makes the formula work in cases where the simple conditional probability fails; and (b) what makes it necessary to admit causal laws if you wish to sort good strategies from bad. The way you partition is crucial. In general you get different results from SC if you partition in different ways. Consider two different partitions for the same space, K_1, . . . , K_n and I_1, . . . , I_s, which cross-grain each other: the K_i are mutually disjoint and exhaustive, and so are the I_j. Then it is easy to produce a measure over the field (±G, ±C, ±K_i, ±I_j) such that

Σ_i Prob(G/C & K_i)Prob(K_i) > Σ_i Prob(G/¬C & K_i)Prob(K_i),

yet

Σ_j Prob(G/C & I_j)Prob(I_j) < Σ_j Prob(G/¬C & I_j)Prob(I_j).
What partition is employed is thus essential to whether a strategy appears effective or not. The right partition—the one that judges strategies to be effective or ineffective in accord with what is objectively true—is determined by what the causal laws are. Partitions by other factors will give other results; and, if you do not admit causal laws, there is no general procedure for picking out the right factors. The objectivity of strategies requires the objectivity of causal laws.
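It may help to see this with numbers. The sketch below (Python; every probability is invented purely for illustration) builds one such measure over ±G, ±C, ±K, ±I and evaluates the SC-style sums under each of the two cross-graining partitions:

```python
# Hypothetical generative model, invented for illustration: C is the
# strategy, G the goal, K and I the two cross-graining binary partitions.
joint = {}
for c in (0, 1):
    for k in (0, 1):
        for i in (0, 1):
            p_c = 0.5
            p_k = 0.5                                 # K is independent of C
            p_i1 = 0.9 if c == 1 else 0.1             # I is correlated with C
            p_i = p_i1 if i == 1 else 1.0 - p_i1
            p_g1 = 0.2 + 0.3 * k + 0.3 * i - 0.1 * c  # chance of the goal G
            base = p_c * p_k * p_i
            joint[(1, c, k, i)] = base * p_g1
            joint[(0, c, k, i)] = base * (1.0 - p_g1)

def prob(pred):
    return sum(p for o, p in joint.items() if pred(*o))

def cond(pred, given):
    both = prob(lambda g, c, k, i: pred(g, c, k, i) and given(g, c, k, i))
    return both / prob(given)

def sc(axis, c_val):
    """Sum over the cells of one partition: Prob(G / C & cell) * Prob(cell).

    axis = 0 partitions by K; axis = 1 partitions by I.
    """
    total = 0.0
    for j in (0, 1):
        cell = lambda g, c, k, i, j=j: (k, i)[axis] == j
        in_cell_with_c = lambda g, c, k, i, j=j: (k, i)[axis] == j and c == c_val
        total += cond(lambda g, c, k, i: g == 1, in_cell_with_c) * prob(cell)
    return total

print(round(sc(0, 1), 3), round(sc(0, 0), 3))   # K-partition: 0.52 > 0.38
print(round(sc(1, 1), 3), round(sc(1, 0), 3))   # I-partition: 0.4 < 0.5
```

One and the same measure makes C come out effective for G under the K-partition and counterproductive under the cross-graining I-partition, which is why the choice of the causally right partition does all the work.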
Essay 2 The Truth Doesn't Explain Much
Abstract: The standard view of explanation in science—the covering law model—assumes that knowledge of laws lies at the basis of our ability to explain phenomena. But in fact most of the high-level claims in science are ceteris paribus generalizations, which are false unless certain precise conditions obtain. Given the explanatory force of ceteris paribus generalizations but the paucity of true laws, the covering law model of explanation must be false. There is, it is argued, a trade-off between truth and explanatory power.
Nancy Cartwright
0. Introduction
Scientific theories must tell us both what is true in nature, and how we are to explain it. I shall argue that these are entirely different functions and should be kept distinct. Usually the two are conflated. The second is commonly seen as a by-product of the first. Scientific theories are thought to explain by dint of the descriptions they give of reality. Once the job of describing is done, science can shut down. That is all there is to do. To describe nature—to tell its laws, the values of its fundamental constants, its mass distributions—is ipso facto to lay down how we are to explain it.
This is a mistake, I shall argue; a mistake that is fostered by the covering-law model of explanation. The covering-law model supposes that all we need to know are the laws of nature—and a little logic, perhaps a little probability theory—and then we know which factors can explain which others. For example, in the simplest deductive-nomological version,
1 the covering-law model says that one factor explains another just in case the occurrence of the second can be deduced from the occurrence of the first given the laws of nature.
But the D-N model is just an example. In the sense which is relevant to my claims here, most models of explanation offered recently in the philosophy of science are covering-law models. This includes not only Hempel's own inductive statistical model,
2 but also Patrick Suppes's probabilistic model of causation,
3 Wesley Salmon's statistical relevance model,
4
and even Bengt Hanson's contextualistic model.
5 All these accounts rely on the laws of nature, and just the laws of nature, to pick out which factors we can use in explanation.
A good deal of criticism has been aimed at Hempel's original covering-law models. Much of the criticism objects that these models let in too much. On Hempel's account it seems we can explain Henry's failure to get pregnant by his taking birth control pills, and we can explain the storm by the falling barometer. My objection is quite the opposite. Covering-law models let in too little. With a covering-law model we can explain hardly anything, even the things of which we are most proud—like the role of DNA in the inheritance of genetic characteristics, or the formation of rainbows when sunlight is refracted through raindrops. We cannot explain these phenomena with a covering-law model, I shall argue, because we do not have laws that cover them. Covering laws are scarce.
Many phenomena which have perfectly good scientific explanations are not covered by any laws. No true laws, that is. They are at best covered by ceteris paribus generalizations—generalizations that hold only under special conditions, usually ideal conditions. The literal translation is 'other things being equal'; but it would be more apt to read 'ceteris paribus' as 'other things being right.'
Sometimes we act as if this does not matter. We have in the back of our minds an 'understudy' picture of ceteris paribus laws: ceteris paribus laws are real laws; they can stand in when the laws we would like to see are not available and they can perform all the same functions, only not quite so well. But this will not do. Ceteris paribus generalizations, read literally without the 'ceteris paribus' modifier, are false. They are not only false, but held by us to be false; and there is no ground in the covering-law picture for false laws to explain anything. On the other hand, with the modifier the ceteris paribus generalizations may be true, but they cover only those few cases where the conditions are right. For most cases, either we have a law that purports to cover, but cannot explain
because it is acknowledged to be false, or we have a law that does not cover. Either way, it is bad for the covering-law picture.
1. Ceteris Paribus Laws
When I first started talking about the scarcity of covering laws, I tried to summarize my view by saying 'There are no exceptionless generalizations'. Then a friend asked, 'How about "All men are mortal"?' She was right. I had been focusing too much on the equations of physics. A more plausible claim would have been that there are no exceptionless quantitative laws in physics. Indeed not only are there no exceptionless laws, but in fact our best candidates are known to fail. This is something like the Popperian thesis that every theory is born refuted. Every theory we have proposed in physics, even at the time when it was most firmly entrenched, was known to be deficient in specific and detailed ways. I think this is also true for every precise quantitative law within a physics theory.
But this is not the point I had wanted to make. Some laws are treated, at least for the time being, as if they were exceptionless, whereas others are not, even though they remain 'on the books'. Snell's law (about the angle of incidence and the angle of refraction for a ray of light) is a good example of this latter kind. In the optics text I use for reference (Miles V. Klein, Optics),6 it first appears on page 21, and without qualification:

Snell's Law: At an interface between dielectric media, there is (also) a refracted ray in the second medium, lying in the plane of incidence, making an angle θ_t with the normal, and obeying Snell's law:

sin θ/sin θ_t = ν_1/ν_2 = n_2/n_1,

where ν_1 and ν_2 are the velocities of propagation in the two media, and n_1 = (c/ν_1), n_2 = (c/ν_2) are the indices of refraction.
It is only some 500 pages later, when the law is derived from the 'full electromagnetic theory of light', that we learn that Snell's law as stated on page 21 is true only for media whose optical properties are isotropic. (In anisotropic media, 'there will generally be two transmitted waves'.) So what is deemed true is not really Snell's law as stated on page 21, but rather a refinement of Snell's law:

Refined Snell's Law: For any two media which are optically isotropic, at an interface between dielectrics there is a refracted ray in the second medium, lying in the plane of incidence, making an angle θ_t with the normal, such that:

sin θ/sin θ_t = n_2/n_1.
The Snell's law of page 21 in Klein's book is an example of a ceteris paribus law, a law that holds only in special circumstances—in this case when the media are both isotropic. Klein's statement on page 21 is clearly not to be taken literally. Charitably, we are inclined to put the modifier 'ceteris paribus' in front to hedge it. But what does this ceteris paribus modifier do? With an eye to statistical versions of the covering law model (Hempel's I-S picture, or Salmon's statistical relevance model, or Suppes's probabilistic model of causation) we may suppose that the unrefined Snell's law is not intended to be a universal law, as literally stated, but rather some kind of statistical law. The obvious candidate is a crude statistical law: for the most part, at an interface between dielectric media there is a refracted ray . . . But this will not do. For most media are optically anisotropic, and in an anisotropic medium there are two rays. I think there are no more satisfactory alternatives. If ceteris paribus laws are to be true laws, there are no statistical laws with which they can generally be identified.
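On its ceteris paribus reading the refined law still supports routine calculation. A small sketch (the refractive indices are ordinary textbook values for air and glass, chosen here only for illustration):

```python
import math

def refraction_angle(theta_deg, n1, n2):
    """Angle of the refracted ray, from Snell's law n1*sin(theta) = n2*sin(theta_t).

    Valid only on the ceteris paribus reading: both media must be
    optically isotropic, or there is no single refracted ray to compute.
    """
    s = n1 * math.sin(math.radians(theta_deg)) / n2
    if abs(s) > 1.0:
        raise ValueError("no refracted ray: total internal reflection")
    return math.degrees(math.asin(s))

# Light passing from air (n ~ 1.00) into glass (n ~ 1.50) at 30 degrees:
print(round(refraction_angle(30.0, 1.00, 1.50), 2))   # 19.47
```

The computation is trivial precisely because the qualifying conditions have been folded away; in an anisotropic medium no such single-valued function exists to compute.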
2. When Laws Are Scarce
Why do we keep Snell's law on the books when we both know it to be false and have a more accurate refinement available? There are obvious pedagogic reasons. But are there serious scientific ones? I think there are, and these
reasons have to do with the task of explaining. Specifying which factors are explanatorily relevant to which others is a job done by science over and above the job of laying out the laws of nature. Once the laws of nature are known, we still have to decide what kinds of factors can be cited in explanation.
One thing that ceteris paribus laws do is to express our explanatory commitments. They tell what kinds of explanations are permitted. We know from the refined Snell's law that in any isotropic medium, the angle of refraction can be explained by the angle of incidence, according to the equation sin θ/sin θ_t = n_2/n_1. To leave the unrefined Snell's law on the books is to signal that the same kind of explanation can be given even for some anisotropic media. The pattern of explanation derived from the ideal situation is employed even where the conditions are less than ideal; and we assume that we can understand what happens in nearly isotropic media by rehearsing how light rays behave in pure isotropic cases.
This assumption is a delicate one. It fits far better with the simulacrum account of explanation that I will urge in Essay 8 than it does with any covering-law model. For the moment I intend only to point out that it is an assumption, and an assumption which (prior to the 'full electromagnetic theory') goes well beyond our knowledge of the facts of nature. We know that in isotropic media, the angle of refraction is due to the angle of incidence under the equation sin θ/sin θ_t = n_2/n_1. We decide to explain the angles for the two refracted rays in anisotropic media in the same manner. We may have good reasons for the decision; in this case if the media are nearly isotropic, the two rays will be very close together, and close to the angle predicted by Snell's law; or we believe in continuity of physical processes. But still this decision is not forced by our knowledge of the laws of nature.
Obviously this decision could not be taken if we also had on the books a second refinement of Snell's law, implying that in any anisotropic media the angles are quite different from those given by Snell's law. But laws are scarce, and often we have no law at all about what happens in conditions that are less than ideal.
Covering-law theorists will tell a different story about the use of ceteris paribus laws in explanation. From their point of view, ceteris paribus explanations are elliptical for genuine covering law explanations from true laws which we do not yet know. When we use a ceteris paribus 'law' which we know to be false, the covering-law theorist supposes us to be making a bet about what form the true law takes. For example, to retain Snell's unqualified law would be to bet that the (at the time unknown) law for anisotropic media will entail values 'close enough' to those derived from the original Snell law.
I have two difficulties with this story. The first arises from an extreme metaphysical possibility, in which I in fact believe. Covering-law theorists tend to think that nature is well-regulated; in the extreme, that there is a law to cover every case. I do not. I imagine that natural objects are much like people in societies. Their behaviour is constrained by some specific laws and by a handful of general principles, but it is not determined in detail, even statistically. What happens on most occasions is dictated by no law at all. This is not a metaphysical picture that I urge. My claim is that this picture is as plausible as the alternative. God may have written just a few laws and grown tired. We do not know whether we are in a tidy universe or an untidy one. Whichever universe we are in, the ordinary commonplace activity of giving explanations ought to make sense.
The second difficulty for the ellipsis version of the covering-law account is more pedestrian. Elliptical explanations are not explanations: they are at best assurances that explanations are to be had. The law that is supposed to appear in the complete, correct D-N explanation is not a law we have in our theory, not a law that we can state, let alone test. There may be covering-law explanations in these cases. But those explanations are not our explanations; and those unknown laws cannot be our grounds for saying of a nearly isotropic medium, 'sin θ_t ≈ k(n_1/n_2) because sin θ = k'.
What then are our grounds? I assert only what they are not: they are not the laws of nature. The laws of nature that we know at any time are not enough to tell us what kinds of explanations can be given at that time. That requires
a decision; and it is just this decision that covering-law theorists make when they wager about the existence of unknown laws. We may believe in these unknown laws, but we do so on no ordinary grounds: they have not been tested, nor are they derived from a higher level theory. Our grounds for believing in them are only as good as our reasons for adopting the corresponding explanatory strategy, and no better.
3. When Laws Conflict
I have been maintaining that there are not enough covering laws to go around. Why? The view depends on the picture of science that I mentioned earlier. Science is broken into various distinct domains: hydrodynamics, genetics, laser theory, . . . We have many detailed and sophisticated theories about what happens within the various domains. But we have little theory about what happens in the intersection of domains.
Diagrammatically, we have laws like

ceteris paribus, (x)(S(x) → D(x))

and

ceteris paribus, (x)(A(x) → ¬D(x))

(reading S(x) as 'x is cooked in salted water', A(x) as 'x is cooked at high altitude', and D(x) as 'x's cooking time decreases'). For example, (ceteris paribus) adding salt to water decreases the cooking time of potatoes; taking the water to higher altitudes increases it. Refining, if we speak more carefully we might say instead, 'Adding salt to water while keeping the altitude constant decreases the cooking time; whereas increasing the altitude while keeping the saline content fixed increases it'; or

(x)(S(x) & ¬A(x) → D(x))

and

(x)(A(x) & ¬S(x) → ¬D(x)).

But neither of these tells what happens when we both add salt to the water and move to higher altitudes.
Here we think that probably there is a precise answer