Bertrand Russell is reported to have said that there are two kinds of philosopher: one who sees the world as a bowl of jelly and another who sees it as a bucket of shot. Russell considered himself to have undergone a conversion from the former view to the latter in 1898, when he parted ways with his Hegelian friends and began to focus on quantificational logic.1 He came to believe that in Hegel’s jelly-like world, philosophical analysis did not stand a chance, because things and facts and language were so holistically interconnected and susceptible to dialectical change that no one could get a firm grip on whatever matters might be important. Instead, he came to think the world had to be composed of shot-like pellets, which could be picked up piece by piece, examined in all their atomistic detail, and represented in clear and truthful language. Besides possibly marking analytic philosophy’s moment of conception, Russell’s conversion can be considered the immediate cause of that tradition’s long-standing boycott of all Hegelian imports.
Over the last hundred years, this restrictive policy has not been insignificant to discussions of truth. For instance, owing to what Hegel might have likened to a professionally imposed determinate negation, for a time it was nearly impossible to follow his lead in molding both alethic and axial matters into the same jelly. What truth is and why truth matters had to be considered, not holistically, but as separate questions. This way of dividing up truth has served as a keystone in what has been called the fact/value dichotomy, which lumps the world into at least two distinct pellets, and it has encouraged scientists like me to accept as a given the separability of two important issues. The suggestion has been that we should maintain separate understandings of what we are doing in science and why we are doing it, or that we should not confuse what science is with what science is for.
In this chapter, I will suggest that, if these issues overlap and become mutually dependent (as Hegel supposedly would have allowed), then scientific practices can be more satisfactorily taken into account and scientists can gain a more unified understanding of their work. I will begin and end by introducing and then drawing upon certain themes in the pragmatist tradition in philosophy. Between these philosophical sections, I will give three non-technical illustrations of how quantum physicists use and exemplify pragmatist modes of reasoning. So I offer a kind of sandwich here: two starchy slices of philosophy with three meaty (or maybe cheesy) physics-flavoured layers in between.
To begin, there are some indications that Hegel’s chances are starting to look good. In a recent book titled Analytic Philosophy and the Return of Hegelian Thought, philosopher Paul Redding argues that the spirit of Hegel is indeed on the move.2 As Redding describes, the revival takes many of its cues from Wilfrid Sellars, who criticizes both empiricism and rationalism for perpetuating myths about what is given in perception and in empirical logic. Sellars affirms what he understands to be a Kantian view – that knowledge rests upon understanding and judgment – and argues that judgments themselves must be grounded in reason. That is, for a judgment to be justified, it has to enter into what Sellars calls “the space of reasons,” where it can be affirmed or challenged on the basis of rules and norms.3
Some reasoning is a matter of perceptual experience and inner episodes in the minds of individuals, but much of it involves inter-subjective coordination. In giving each other supporting reasons for their judgments, people appeal to logical, grammatical, and ethical norms, and in demanding reasons of each other, we impose these same norms. Through this sort of peer-review process, judgments (and actions based upon them) come to instantiate and reinforce versions of what Sellars calls a world story.4 According to pragmatist interpretations of Hegel, such as those of Robert Pippin and Robert Brandom,5 a similar dynamic of socially embedded and normative reason determines the Hegelian absolute spirit.
Whether we discuss it in terms of spirit or story, pragmatists point to the system of thought that is borne in the linguistic articulations, evaluative judgments, and decisive actions of people who live in society. In order to make a move in the world story, a person has to think, speak, or act in a way that makes sense in terms of the story’s implicit rules or norms. A person can remain true to the story only by making correct moves. Regarded in this sense, truth is not simply a matter of correspondence between propositions and worldly states of affairs, but is more generally a matter of propositions that fit into or have implications for the world story. Sellars thinks along these lines when he defines truth as semantic assertibility.6 He thinks that while an objective correspondence relation is a necessary condition of truth, it neither defines truth nor explains what truth is. In his view, truth is the appropriateness of a sentence’s appearance in the context of a world story. In order to be true, a sentence not only has to stand in correspondence relations with objects but also has to matter in the world story. When understood as semantic assertibility, truth is this mattering; it is the property of having implications under the rules of a world story. Later I will take this account of truth seriously; doing so will mean refusing to dissociate the question of what truth is from the question of why truth matters, so we can expect to find ourselves in the thick of the jelly that Russell found so distasteful.
My general interest in pragmatism is focused precisely on this blending of why questions with what questions, but I also have a specific interest in what a pragmatist philosophy of science has to say about my own field of quantum physics. To serve both levels of interest, I want to keep an eye on certain Kantian and Hegelian themes that Sellars distils into his work on language and empirical reason. Sellars uses the methods of quantificational logic to register an immanent critique against mainstream interpretations of that logic. In fact, his concerns about interpretation would not be discordant with what we hear from continental philosophers. Unlike that group, however, Sellars challenges the dominant analytic interpretations not by dismissing, but rather by using, the analytic tools of his time. One of his specific concerns, which has to do with the issue of quantification itself, will get some play in the illustrations I will get to shortly. For my present purposes, I suggest thinking of quantification as a kind of linguistic move that we can make in order to go from a claim like “physicists are philosophers” to others like “all physicists are philosophers” or “some physicists are philosophers” or “no physicists are philosophers.” The problematic issue is this: categorical definitions have to be assumed as givens in quantified statements, whereas they need not be assumed in unquantified statements like “physicists are philosophers.” The latter may actually draw categorical definitions into question. This problem with quantification will surface at several points in the following illustrations.7
A second issue that will come up is the logic of material inferences, which relies heavily on the (Hegelian) principle of determinate negation. Whereas there is only one way to accomplish a formal negation, there can be lots of ways to produce a determinate negation. For example, the formal negation of the predicate “circular” is the predicate “not circular.” However, the possible determinate negations of “circular” include “triangular,” “square,” and “pentagonal.” The idea is that if a property (such as shape) is determined, or has a determinate value, then it is impossible for it to have a different value. Note that the language of possibility here reveals that the powerful tools of modal logic are at hand. But the most important point is that the logic of determinate negation respects not only the formal properties of predicates and propositions but also the non-logical contents of sentences – the concepts and meanings.8 Only through the content-respecting material logic of determinate negation can we make so simple an inference as, “The pavement is dry, so it must not have rained recently.” To reason in accordance with such an inference is to understand and endorse the concepts as befitting the material circumstances and to commit oneself to the consequences of the inferred conclusion. There is a bracing realism in this way of reasoning – it is a realism with respect to material consequences, and it accepts physical necessities and puts available possibilities to use. It is a realism for pragmatists.
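The contrast between the two kinds of negation can be made concrete in a minimal Python sketch. The determinable property ("shape") and its value set are illustrative assumptions, not drawn from the sources discussed here: formal negation of a predicate is unique, while a determinate negation is any of the other values the property could take.

```python
# Toy model of formal vs. determinate negation. The determinable
# property ("shape") and its value set are illustrative assumptions.
SHAPES = {"circular", "triangular", "square", "pentagonal"}

def formal_negation(predicate):
    # Formal negation is unique: there is exactly one "not P".
    return "not " + predicate

def determinate_negations(value, determinates=SHAPES):
    # Determinate negation: every *other* value the property could take.
    return determinates - {value}

print(formal_negation("circular"))               # not circular
print(sorted(determinate_negations("circular"))) # ['pentagonal', 'square', 'triangular']
```

The point the sketch preserves is that determinate negation depends on the non-logical content of the predicate (the space of shapes), whereas formal negation does not.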
In the framework of material inferences, physical laws have an important logical role to play. But laws come with strings attached: to use them, we have to make a lot of implicit ceteris paribus assumptions about the world. Imagine, for instance, that we infer from the sound of thunder that there was a flash of lightning in the very recent past. In order to make this inference, we assume that laws governing meteorology, electricity, optics, and acoustics are in force. Moreover, along with these assumptions about physical necessity, we rely on a further assumption that the laws (and concepts) we use are sufficient in the given situation.9 We assume that no disturbances will change the situation as we have conceived it.
But what is to be done when our inferences go wrong? To take another everyday example, imagine that we hear rain on the roof at night, and so we expect to find worms on the pavement in the morning. But then we find birds on the pavement instead. We have to conclude that something went wrong in our thinking; some of our assumptions must have been materially invalid. But what was our error? Was it a sin of omission? Did we leave something out when we thought it sufficient to regard worms as free from certain kinds of disturbances (like carnivores)? Or was it a sin of commission? Did we overreach by assuming it necessary that a worm has to be a certain kind of thing (the kind of thing that never is inside a bird)? We are faced with the question of where to focus our theoretical reconstruction. Should we rethink our basic conceptions of objects, or should we rework our law statements to express modified relations between known objects? As the illustrations will suggest, this kind of question turns out to be unusually difficult to answer in the context of quantum physics.
Before jumping into the illustrations, I should say a bit more about what they are for and how they should be interpreted. They will all be metaphorical representations of actual physics experiments that can be done with particles such as atoms or electrons. You may have heard it said that quantum particles display wavelike characteristics that the everyday particulates in soil and smoke do not seem to have. My first illustration will magnify this kind of quantum strangeness by describing how our macroscopic world might work if grains of sand displayed wavelike properties. But I do not want this mental image to convey the wrong idea about Russell’s jelly and shot taxonomy. There may be inviting parallels between particles and shot on the one hand and waves and jelly on the other, but Russell’s point is not primarily about the physical structures of the world’s fundamental constituents; it is about the logical and linguistic starting points of philosophical analysis. Similarly, these illustrations are supposed to provide not pictorial templates for understanding the “furniture of the world” but, rather, logical and operational templates for understanding the slippery conceptual basis of quantum reasoning.
So, on to the first example, which involves a quantum hourglass that has two channels at its waist.10 The channels are arranged side by side and can be separately opened and closed. The amazing thing about this hourglass is how the sand piles up at the bottom: the shape of the pile changes dramatically if both channels are open instead of just one. If just one of the channels is open, the result is the same broad pile of sand usually found at the bottom of a normal hourglass. However, if both are open, the result is not just the sum of two normal piles. Instead, there is a set of finely spaced ridges, as if some gardener had built up a set of parallel mounds and raked away the sand from the furrows in between. This is a strange result because, somehow, opening an extra channel has eliminated the pile-up of sand in the furrows. Now, we might imagine that sand grains in the two parallel streams can collide after they fall through the different channels, and that these collisions are the cause of the pattern in the sand pile. So we might consider what happens when we throw sand into the top of the hourglass one grain at a time; this way, as each grain falls, it cannot collide with any others. But even if we do this, the grains gradually build up the same, furrowed pattern. What is more, if we were to close one of the channels and start over, then the usual, smooth pile would result. Somehow, it is the presence of the two channels that leads to the furrows.
There is still another mystery in this hourglass. If we try to monitor which channel each grain of sand traverses, we find that the furrows always disappear – we just get the usual, broad pile of sand. So, it is not just the presence of two channels that leads to the furrows; there is a more general, operational law that can be stated as: Were it possible for us to tell which of the two channels each grain of sand traversed, the furrows would not appear. If we want the furrows, we have to make sure there is no way to answer the which-channel question. We cannot even use familiar quantificational grammar to assert, “Each grain goes through a channel.” It is as if we are unable to think at all in terms of the specific paths of individual grains anymore; we are restricted to thinking of things that can traverse two channels at once, something spread out, like a wave crashing across a wide stretch of beach. Indeed, if we follow this wave metaphor and employ wave-related mathematics, we can understand and predict the pattern of furrows. So, depending on how we set up and monitor the two channels in this quantum hourglass, we are forced to think in terms of either sand grains or sand waves. The bottom line for this example is simply this: the appropriate way of thinking – that is, the way to predict the correct kind of sand pile – is intimately related to whether or not we can say which path each sand grain takes as it falls through the hourglass.
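The wave-related mathematics behind the furrowed pattern can be sketched in a few lines. With one channel open the distribution goes as |ψ₁|²; with both open it goes as |ψ₁ + ψ₂|², not |ψ₁|² + |ψ₂|². Below is a toy far-field model in Python; the wavelength, channel separation, and screen distance are arbitrary illustrative values, not parameters of any real experiment.

```python
import math, cmath

# Toy far-field model of the two-channel pattern. All parameters are
# arbitrary illustrative values, not taken from a real experiment.
WAVELENGTH = 1.0
CHANNEL_SEPARATION = 5.0
SCREEN_DISTANCE = 100.0
K = 2 * math.pi / WAVELENGTH  # wavenumber

def amplitude(x, channel_offset):
    """Complex amplitude at screen position x from one open channel."""
    path = math.hypot(x - channel_offset, SCREEN_DISTANCE)
    return cmath.exp(1j * K * path)

def intensity(x, both_open):
    # One channel: |psi1|^2.  Both channels: |psi1 + psi2|^2.
    psi = amplitude(x, -CHANNEL_SEPARATION / 2)
    if both_open:
        psi += amplitude(x, CHANNEL_SEPARATION / 2)
    return abs(psi) ** 2

xs = [i * 0.5 for i in range(-40, 41)]
one = [intensity(x, both_open=False) for x in xs]
two = [intensity(x, both_open=True) for x in xs]
print(min(one), max(one))  # ~1.0 everywhere: the smooth, broad pile
print(min(two), max(two))  # ~0 in the furrows, up to ~4 on the ridges
```

The crucial feature is the cross term: adding amplitudes before squaring produces furrows that adding intensities never could.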
The second illustration involves a fictional bowl of alphabet soup.11 Imagine that we have filtered the soup so that the only pieces of pasta in it are B-shaped, and imagine that the pieces are indistinguishable from each other. There is no way to tell them apart. We may be able to arrange them to have, say, ten pieces lined up from left to right; but we cannot keep track of the order of the lineup; we cannot say which piece is in which position. Of course, in everyday alphabet soup, we could keep track of the order by labelling the pieces of pasta with bits of black pepper or other physical markers. But in quantum soup this is impossible. The pieces of quantum pasta cannot be marked in a way that allows us to tell which one we are looking at, or which one is in which place.
Once again, as with the quantum hourglass, the which question becomes problematic. In fact, we cannot hold onto our usual ways of using pronouns and determiners like which, each, this, and a, because we cannot keep track of or even define the identities of the pasta pieces. The quantum soup has a strange mereological quality and leaves us with linguistic recourse only to plural quantifiers and universals. The pieces of pasta can be grouped and aggregated, but they are not individually identifiable in the sense of being distinguished one from another.12 Statements about them must apply generally to any of the pieces or must relate to groups of such pieces. If we wished to speak about a specific piece (this one, the one that is here, now), we would have to point at it and indicate its relative position in the group. But even then there would be no way to tell whether that particular piece was, say, rapidly trading places (or identities) with other pieces. Nor could we tell if, at any given time, we were actually pointing at a composite mixture of pasta pieces, each one having its identity split up and distributed over various spatial locations. In the end, as we attempt to point at this piece of pasta in the soup, all we can be sure of is that we are pointing at a place defined by an overall spatial pattern formed by objects of a particular kind. There are no quantified factual statements that can specify which place a certain object holds; such statements have become semantically impossible to assert.
The payoff for taking this approach is that it allows the theory to account for the quantum soup’s unusual behaviour. For example, under certain conditions, adding one more B-shaped piece of pasta to the bowl might prompt all of the other pieces to collapse into a tightly packed ball. This would be an unusual kind of phase change that we would never see outside of the quantum world, and to predict it we have to use an unusual grammatical logic. The bottom line in this example is that the logic of object identification is a linchpin that holds together the counting measurements that we do in the laboratory and the theoretical (statistical) calculations that allow us to predict what we will observe. How we account theoretically has to lead us to the same numerical answers that we get when we count experimentally. In other words, the grammar that we use in statistical reasoning has to lead to statements that are “empirically adequate”; and only the strange grammar of indistinguishability results in a theory that matches up with observation.
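The difference the grammar of indistinguishability makes to counting can be shown with the smallest possible example: two particles in two boxes. Classical counting treats labelled, ordered assignments as the equally weighted cases; Bose-style quantum counting treats occupation patterns as the equally weighted cases (the equal weighting of elementary cases in each scheme is the standard textbook assumption). The two grammars answer the same statistical question differently.

```python
from itertools import product, combinations_with_replacement

N_PARTICLES, N_BOXES = 2, 2

# Classical counting: particles are labelled, so ordered assignments
# are the elementary cases.
classical = list(product(range(N_BOXES), repeat=N_PARTICLES))

# Quantum (Bose-style) counting: only the occupation pattern matters,
# so unordered assignments are the elementary cases. Equal weighting
# of these cases is the standard textbook assumption.
quantum = list(combinations_with_replacement(range(N_BOXES), N_PARTICLES))

def p_both_in_same_box(configs):
    same = [c for c in configs if len(set(c)) == 1]
    return len(same) / len(configs)

print(len(classical), p_both_in_same_box(classical))  # 4 0.5
print(len(quantum), p_both_in_same_box(quantum))      # 3 0.666...
```

Only one of these counting grammars can match what is counted in the laboratory, which is the sense in which object identification is a linchpin between theory and experiment.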
The third illustration gets at the strangest feature of all in the quantum world.13 Imagine that we study twins who were separated at birth and adopted by different families, and imagine these families do not know each other and never interact. Our aim is to follow up on the work of well-known researchers in the field of neuropsychology, who have claimed that there is a correlation between a child’s aptitude for spatial reasoning and the 2D:4D ratio of the child’s right hand. The 2D:4D ratio, or what I will just call the “finger ratio,” is the number that you get when you divide the length of your index finger (the second digit) by the length of your ring finger (the fourth digit). Now, there is a surprising body of literature related to this ratio, but I am not staking this example on any actual studies of the correlation between finger ratios and spatial reasoning aptitudes. For now, just imagine that some big-time researchers claim to have found the following statistical correlation: those children who have finger ratios of less than one when they are younger than three years old (their ring fingers are longer than their index fingers) also have, on the whole or as a group, above-average aptitude for spatial reasoning in later childhood. Imagine that our study is a follow-up to that earlier study – we want to see whether the conclusion is right, and if it is, we want to know more about it. To keep things simple, we will describe the finger ratio as being either “high” (greater than one), or “low” (less than one); and we will measure spatial reasoning aptitude by way of a standardized test, assigning scores of either “A” for above average or “B” for below average.
What the previous study predicts is a trend that involves above-average aptitude for people with low finger ratios and below-average aptitude for people with high finger ratios. Now, given that adoptive parents might know about this prediction, it is possible that children could be given special attention based on their parents’ measurements of the finger ratio during early childhood. Parents could make special efforts either to foster or to compensate for the predicted aptitudes of their children. Or, maybe, finding out what is predicted about their children would make some parents eager and others reluctant to subject their children to aptitude tests. In order to ensure against the possible biases that these factors could introduce in our study, we decide to track thousands of sets of separated twins, only a fraction with parents who bothered to measure the finger ratio in early childhood. When we administer our test for spatial reasoning aptitude (once children reach age six), we do so only for those children whose parents never measured the finger ratio. Thus, by the end of the study, we have the result of exactly one measurement for each child – either the finger ratio measurement at an early age or the aptitude measurement at a later age.
This seems like a test that we might actually be able to do in the real world, and maybe we could use the results to learn something about a predicted correlation. But if we did the study in a quantum world, where twins had certain characteristics of what physicists call “entangled” pairs of quantum particles, we might examine our data and come away with the following three observations:
1 In considering pairs of separated twin siblings for whom both adoptive families measured the finger ratio, we find that the measured ratio was low for both twins in 9 per cent of the cases.
2 For cases in which siblings were subjected to the two different kinds of measurements, whenever one sibling exhibited a low finger ratio in early childhood the other sibling later scored an A on the spatial-reasoning test.
3 In no case did both twins score As on the spatial-reasoning test.
These results are downright baffling. It seems impossible for all three observations to be valid. From the first observation, we naturally want to conclude that in roughly 9 per cent of all sets of twins, both children will have low finger ratios, regardless of whether their parents measured this quantity. We also want to extend the second observation to say that any twin with a low finger ratio will have a sibling with high spatial-reasoning aptitude, regardless of whether any measurements were performed. From the combination of these two inferences, we naturally expect that in at least 9 per cent of all sets of twins, both siblings will have high aptitude for spatial reasoning. Yet, in our vast set of data, there is not even a single instance of both twins scoring an A on the aptitude test. Once again we seem to be having trouble using pronouns. This time we stumble when we attempt to use the quantifiers any and all to generalize our statistical results.
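The impossibility can be checked by brute force. In the classical hidden-variable picture, each twin carries definite values for both properties whether or not they are measured. The sketch below, a toy enumeration under that assumption with illustrative labels, shows that no assignment in which both twins have low ratios can satisfy observations 2 and 3 at once, contradicting the 9 per cent demanded by observation 1.

```python
from itertools import product

# Classical (hidden-variable) picture: each twin carries definite values
# for both properties, measured or not. Labels are illustrative.
RATIOS, SCORES = ("low", "high"), ("A", "B")
twin_states = list(product(RATIOS, SCORES))  # (ratio, aptitude) pairs

def consistent(pair):
    (r1, a1), (r2, a2) = pair
    # Observation 2: a low ratio on one twin implies an A for the other.
    if r1 == "low" and a2 != "A":
        return False
    if r2 == "low" and a1 != "A":
        return False
    # Observation 3: never both As.
    if a1 == "A" and a2 == "A":
        return False
    return True

# Observation 1 requires that 9 per cent of pairs have both ratios low,
# but no classical both-low pair survives observations 2 and 3:
both_low = [p for p in product(twin_states, repeat=2)
            if p[0][0] == "low" and p[1][0] == "low" and consistent(p)]
print(both_low)  # [] -- the three observations are classically unsatisfiable
```

The enumeration makes the quantifier trouble vivid: the contradiction arises only once we assume that every twin has a definite value for each property, measured or not.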
In all three of these illustrations, the quantum world forbids certain kinds of assumptions that we usually make in classical physics by way of explicit or implicit ceteris paribus clauses – assumptions that are often encapsulated in the all-things-being-equal shorthand of pronominal grammar. For instance, the laws of Newtonian physics rely on the assumption that – ceteris paribus – every particle has a unique spatial position at any given time. However, as the hourglass and alphabet soup examples show, we can run into trouble in the quantum context if we attempt to track a particle’s trajectory by saying, “it was here then.” The problem arises when the subject we are talking about becomes something to which certain quantifiers and demonstratives cannot be applied. As I mentioned above, this semantic issue is connected with traditional concerns about laws in philosophy of science. In order to use a scientific law, one generally has to make ceteris paribus assumptions of various kinds, and now it should be clear that some of these are embedded in our use of pronouns.
Many of our basic assumptions have been challenged by a century of quantum physics, and especially by the twin experiment. That result so defies traditional reasoning that we have had to scramble to conceive of new explanatory laws. Some have held on to the hope of finding an uncontrolled variable that was overlooked in the experiment – a hidden mechanism that can alter one twin’s spatial reasoning ability but is triggered only when the separated sibling’s parents perform a finger-ratio measurement. Others have abandoned the notion of causal triggers and have begun to regard correlated pairs of twins, rather than individuals, as fundamental ontological entities. The list of interpretations goes on, along with much discussion of what makes the twin experiment come out as it does.14
Nevertheless, the equations of quantum mechanics allow very accurate predictions of the final statistical outcomes. This fact allows many physicists to shake off their anxieties about conceptual explanations and turn their attention toward technologies that can use the unusual results of quantum experiments. Such practical applications have become possible not because of explanations about what individual particles do, but because of the operational laws that connect the equations of quantum theory with the statistical sampling procedures of the laboratory. Only these operational laws, and no proposed explanatory laws, have been found empirically adequate, materially consequential, and technologically reliable.
So, shall we make a clean break from the tradition of taking explanatory laws seriously? While much of the world has emerged from the shadow of positivism, this question has kept quantum physics in the penumbra. A positivistic operationalism seems to be the only obvious recourse in the quantum world. After quantum objects slip through the fingers of our quantificational grammar, all we can do is shovel them into statistical distributions. These distributions are all that the equations of quantum theory seem to be about. But what about the things themselves? What about particles and the explanatory laws that govern them? What about the scientific ideal of understanding causes and effects at the level of particulars? Is there a grammatical scheme that can help us recover these things? Or is hoping to recover them just nostalgic, wishful thinking?
These questions must be understood in their social context – the context of a scientific practice that I have been calling quantum physics. Quantum physics is the space of reasons where physicists offer and demand justifications for their inferences and test the assertibility of scientific claims. The participants are normatively bound to endorse good theoretical arguments and accept the outcomes of experiments. In this context, an individual can offer reasons for following certain lines of inquiry, and by accepting those reasons the scientific community recognizes the individual as one of its own. Thus, a small minority of physicists finds justification for focusing on foundational questions in quantum theory; and within this group, a few still hope to recover explanatory laws. These scientists face the stiff challenge of discovering new rules for the language of physics and simultaneously reconstructing two levels of theoretical discourse: the object level and the law level. Such radical developments are not without precedent, but they are few and far between. One can point to the momentous shifts that Maxwell and Einstein introduced into the world story of physics in the nineteenth and twentieth centuries. I think our historical awareness of such previous revolutions is what allows us to remain open to the idea that future revolutions are possible.
At this point it is possible to see how axial questions have a rightful place in scientific practices. Rendering judgment on why and what for questions is a part of what scientists do qua scientists. They ask themselves whether they should pursue this speculative theory or that strategic application. But where does the “should” come from? How does conceptual development become normative? To outline an answer to this question, I will look again at the intersubjective dynamics of scientific practices, particularly at the deliberative processes through which ideas and proposals are evaluated.
When we scientists admit or deny the appeals and proposals of others, our ethical and ontological commitments become unavoidably entangled. A proposal asks us to render judgments that tie our understanding of what is to our understanding of what is right. To splice these understandings together, we make inferences of two kinds. One is a counterfactual inference, in which we conclude that a possible decision or action would be appropriate if we were to endorse a particular interpretation of an actual situation. By way of this first kind of inference we establish a rule and commit to following it. The rule maps our descriptions of the world onto our responsibilities in the world. Of course, in order to generate the descriptions in the first place, we have to make inferences of a different sort. Some form of reasoning – an empirical inference – has to relate our perceptual experiences to our concepts. These inferences, too, require our judgment and endorsement. In order to defend the descriptive claim that “the world is so and so,” we have to be able to justify the normative claim that “it is appropriate to describe the world as so and so.” By way of our judgments we reach two important conclusions: that we should accept certain counterfactual conditional statements as normative rules, and that we should endorse certain descriptions of the world as appropriate premises for our conditional rule statements.
In Sellarsian terms, all of this suggests that people have two inseparable sets of evaluative judgments to make when a proposal is placed before them in the space of reasons.15 First, we have to decide what rules and norms we should follow. Second, we have to adopt an appropriate description of the world, so that we can articulate possible verdicts that might be rendered – verdicts that can be thought of as possible moves in our world story. There is an apparent chicken-and-egg problem here. To lay down rules, one needs to describe the world. But to describe the world, one needs to understand its rules. Rather than regard this situation as a problem, however, one might take it to be a basic linguistic fact, so that rule statements and objective descriptions are, simply, so interdependent as to be mutually constitutive. This seems to be the view that Sellars adopted. I take much of his work to warn against the philosophical misstep of assuming quantificational grammar to be the handmaiden of logical atomism – for this assumption could have disastrous consequences. An atomistic separation of rules from descriptions, or normativity from objectivity, could derail the process of inferential reasoning and threaten the coherence of a world story.
My quantum illustrations help to justify concerns like the ones Sellars had. The breakdown of familiar pronominal grammar in the quantum context is telling. Not only does it reveal our tacit reliance on quantificational logic in scientific practice; it also shows that our interpretations of this logic give shape to our conceptual formulations, explanations, and inferences. We have discovered that our usual object concepts and law-based explanations can fail to support an inferential framework for keeping track of particular things in the world. However, we need not insist that this is anything more than a linguistic discovery. It is not necessarily a discovery of new extra-linguistic things, properties, or kinds. We may only be recognizing that our traditional grammatical methods, which we use to chop our thoughts into specific kinds and objects, can fail to provide practical traction in certain contexts. After all, what I have been calling the quantum world is no different from the everyday world; only our pragmatic use of language differentiates between the two. There is, no doubt, one “real world” in which we encounter material consequences. But now the logic embedded in our traditional ways of speaking about the world has proven insufficient as a universal material logic. If we follow our usual quantificational paths, we can wind up making inferences that are materially invalid. This is a matter of practical concern.
To answer the question of how we should describe the world, we have to make normative judgments about possible ways of being objective. What kinds of things and rules should we use as the (onto)logical basis for our reasoning? I will venture to suggest that, in actuality, we usually answer this question in terms of what I will call a pragmatic teleology. We define things in certain ways so that they will do what we need them to do – that is, so that they function in our inferential framework. As long as deliberations can be sustained, we are happy to let sleeping ontologies lie. Of course, we do not define ourselves as if we were impersonal objects; persons are not things to be put to use. Rather, all objective reasoning is done by and for persons as we face material consequences in the real world. Only those logical and linguistic structures that serve our needs will ever be considered adequate or right.
Where does this leave the concept of truth? Sellars maintained that the truth (or semantic assertibility) of a sentence is determined by whether the sentence fits within the inferential framework of a language. However, he suggested that only a perfect language user would be able to discern whether the fit is actually perfect.16 Thus, without insisting on universal truths translatable into all languages, he used a robust concept of ultimate truth that relied on the ideal nature of an all-knowing judge. Mere mortals, left to their own best judgments, would be able to make only provisional or conditional claims. For this reason, Sellars did not think that deliberations in the space of reasons could decide the ultimate truth or assertibility of statements and proposals. Rather, he thought their main function was to develop and convey conceptual content, to reveal logical implications and material consequences, and to allow for the development of language and pragmatic understanding.
If we wish to understand what is meant when people refer to “scientific truth,” we can draw from Sellars’s conception of truth as semantic assertibility. This conception allows truth both to serve as a regulative ideal in scientific practice and also to function as an epistemological concept in meta-linguistic semantics. In scientific practice, to make a claim is to introduce a statement into the space of reasons with the intention that it be assertible within an object/rule framework of scientific reasoning. Whether we make a claim while implementing laboratory procedures or thinking through abstract models, we intend for the claim to fit into the formal and material logic of the context. However, if the fit is called into question, the claim can become the object of meta-linguistic debate. Thus, a statement intended for use at one (practical) level of language can become a statement intended for analysis at a higher (theoretical) meta-level. The point of reasoning at the theoretical level is to understand practical language in terms of a formal logic and to determine whether statements are consistent with that logical framework. Such a meta-linguistic theory about the truth of statements is what Sellars calls epistemology.17 This approach provides a helpful way to understand epistemological reasoning as a key – but by no means the only – element of scientific practice. The approach is pragmatic at its core, for a practical intention to use statements is what motivates the theoretical concern about truth in language. We analyze the formal constraints on what makes something assertible only because we want language to align with the material constraints that we encounter in practice. So in science, as in pragmatism, epistemology is the servant of practical philosophy.
Scientists in general do not aspire to be perfect language users. While we have to use language in order to clarify concepts and make inferences, we do not aim to speak as God would speak. Nor do we feel the need to do so. Truth is not what we are after in science; it is what we rely upon. We are bound by it, but we can never converge upon it. While the regulative ideal of truth puts important constraints on our reasoning, we cannot zero in on truth as we might zero in on the conceptual contents and implications of our understanding. These latter, materially consequential modes of thought are the central concerns of our scientific inquiries and judgments.
So, finally, whether you ask me what it is to work as a physicist or why I would do such work, I can offer this single answer: to articulate and heed the demands of reason in a context wherein humans face material consequences. Of course, there are other, broader contexts in which similar questions may be asked about the whats and the whys of work outside of physics. If, in these non-scientific settings, we ask ourselves what we are doing and why we are doing it, perhaps the answer I have just given will suffice.
But there is one last point to make, for you might object that my suggested bivalent response trades on a linguistic ambiguity, because “to articulate and heed” could be interpreted as a short form of “in order to articulate and heed.” In other words, my phrasing does not make a distinction between a matter-of-fact definition of scientific work and an axiological explanation of the aim of that work. To conclude, I offer a possible refutation of this objection, albeit a speculative one. Why should we take this possible distinction to be a necessary one? Maybe we should consider our work to be defined by its aims, its essence to be determined by its telos. Maybe the assertibility of my single answer suggests that there is a context in which what and why questions are not separable. Were this the case, we would have to develop a conceptual language for that context, so that we could formulate a single question in order to inquire about our work – a single, assertible question for which we already have a single, assertible answer. Like others who have taken this dialectical route (Maxwell and Einstein, perhaps), we would be responding not to physical necessity but to conceptual possibility. For we would not have felt the force of a decisive negation of any inference; we would only have felt the internal stresses and strains of concepts on the verge of either fusing together or breaking apart – like atomic nuclei or globs of jelly.
1 Bertrand Russell, Portraits from Memory (New York: Simon and Schuster, 1956), 17.
2 Paul Redding, Analytic Philosophy and the Return of Hegelian Thought (Cambridge: Cambridge University Press, 2007).
3 Wilfrid Sellars, Empiricism and the Philosophy of Mind (Cambridge, MA: Harvard University Press, 1997), 76.
4 Wilfrid Sellars, “Realism and the New Way of Words,” Philosophy and Phenomenological Research 8 (1948): 601–34; Naturalism and Ontology (Reseda, CA: Ridgeview, 1979), 128–30; also, Empiricism and the Philosophy of Mind.
5 Robert Pippin, Hegel’s Practical Philosophy: Rational Agency as Ethical Life (Cambridge: Cambridge University Press, 2008); Robert Brandom, Tales of the Mighty Dead (Cambridge, MA: Harvard University Press, 2002), 178–234.
6 Wilfrid Sellars, Science and Metaphysics (London: Routledge and Kegan Paul, 1968), 91–115.
7 For more on Sellars’s concerns about quantificational logic and grammar, see, for instance, his Science, Perception, and Reality (London: Routledge and Kegan Paul, 1963), 106–26, 247–81; also, Naturalism and Ontology.
8 Wilfrid Sellars, “Inference and Meaning,” Mind 62 (1953): 313–38. See also Robert Brandom, Articulating Reasons (Cambridge, MA: Harvard University Press, 2000), 53–5.
9 Wilfrid Sellars, “Concepts as Involving Laws and Inconceivable without Them,” Philosophy of Science 15 (1948): 287–315. See also Marc Lange, Laws and Lawmakers (Oxford: Oxford University Press, 2009).
10 The hourglass illustration is meant to convey the results of double-slit diffraction experiments, which are described in most physics textbooks.
11 The soup example gives a taste of the combinatorial logic of indistinguishable quantum particles. This logic seems to be required in all reliable models of many-particle systems, such as those used to compute probability distributions of electrons in atoms or atoms in cold, dense clouds.
12 For a more technical discussion of this point, see Paul Teller, “From Particles to Quanta,” in An Interpretive Introduction to Quantum Field Theory (Princeton: Princeton University Press, 1995), 16–36.
13 This example illustrates the effects of quantum entanglement. Early on in the development of quantum mechanics, the notion of entanglement was the focus of an important theoretical debate surrounding the famous EPR paper. See A. Einstein, B. Podolsky, and N. Rosen, “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?” Physical Review 47 (1935): 777–80. That debate simmered for three decades before the physicist John S. Bell came up with a formal argument predicting that entanglement must have strange, empirically observable consequences. See J.S. Bell, Speakable and Unspeakable in Quantum Mechanics (Cambridge: Cambridge University Press, 1987). My illustration of the strangeness of entanglement’s consequences is structurally parallel to another illustration that I have long appreciated. The latter is found in P.G. Kwiat and L. Hardy, “The Mystery of the Quantum Cakes,” American Journal of Physics 68 (2000): 33–6.
14 For more on this subject, see J.S. Bell, Speakable and Unspeakable in Quantum Mechanics. There are also more recent books on the subject, including Gregg Jaeger, Entanglement, Information, and the Interpretation of Quantum Mechanics (Berlin: Springer Verlag, 2009).
15 Wilfrid Sellars, “Language, Rules, and Behavior,” in Pure Pragmatics and Possible Worlds, ed. Jeffrey F. Sicha (Reseda, CA: Ridgeview, 1980), 125–56. Originally published in John Dewey: Philosopher of Science and Freedom, ed. Sidney Hook (New York: Dial Press, 1950).
16 Sellars, “Realism and the New Way of Words,” 426–9.
17 Ibid., 426.