An important tendency in the modern era, particularly in cultures shaped by individualism, is to turn to science to make sense of alleged environmental ills, to ask scientists to give guidance on them if not in a sense to arbitrate disputes about them. Journalists routinely seek out scientific experts to comment on possible dangers. Lawmakers do the same, not just when holding hearings but when specifying how regulatory agencies should implement broadly phrased statutes. Under the US Clean Air Act, for instance, the Environmental Protection Agency is told to set maximum air-pollution levels based strictly on science. Similarly, the US Fish and Wildlife Service, in deciding whether to extend legal protections to a species under the Endangered Species Act, is told to make its decision based entirely on science and other factual data. Science provides at once the vocabulary and the arena in which environmental policy often plays out.
This high status for science is consistent with a more encompassing cultural tendency on matters of public interest to embrace objectivity, to focus on facts and logical reasoning and keep emotions and personal preferences out of the picture. Scientists are viewed as the most objective and are raised up accordingly, even as they get attacked when the facts they generate are unwelcome. Indeed, the attack on science by defenders of the status quo—the denial of climate change at the moment most prominent—only highlights the importance of science; if critics can topple the scientific proof they might just carry the day.
Objectivity is deemed a virtue in the modern age, at least in public affairs. It is not, of course, regarded the same way in artistic and other expressive realms. Indeed, when it comes to personal spheres, therapy is one of the age’s leading tropes. The psychological professions, social work, personal counselors, even many churches: all are about helping people identify their subjective choices and become comfortable with them. Modern advertising, it hardly needs saying, is all about subjective yearnings and stimulating more of them. Objectivity still plays a role in personal matters; there’s little room for subjective expression when fixing a leaky water pipe. Nonetheless, the contrast between public and private spheres is rather stark. Objectivity dominates (or is supposed to) in the public arena. Subjective choices are given greater rein on the private side.
The still-lingering debate about human-induced climate change offers a case in point. Three elements of this debate are particularly illuminating in cultural terms. First, scientists are put front and center in it, both the thousands of scientists assisting or agreeing with the Intergovernmental Panel on Climate Change (IPCC) and, disproportionately, the vastly fewer scientists who question the dominant consensus. Atmospheric scientists are expected to tell us whether our modes of living are problematic. Second, the issue as framed publicly is whether or not human-induced climate change has been proved as a matter of fact. As raised, the issue calls for factual evidence, collected and weighed. Finally, when it comes to proof the preferred standard is that of scientific proof. Has it been factually proven, in the scientific sense, that humans are materially changing the climate?—that’s the question.
On all three of these points, today’s climate-change debate reflects distinctive cultural traits, ones that are, in this setting and others, rather confusing and unhelpful. Indeed, most of the reasons why the modern age has trouble coming to terms with ecological change can be teased out of this slanted, three-part framing of the climate issue. To see this, though, we need to back up. We need to explore what science is and what it is not. We also need to consider the origins of morality, or more generally the origins of normative values—the values or standards used to distinguish the wise from the foolish, the ethical right from wrong. Where do we get the raw materials to make such determinations, and what gives them legitimacy? Both of these inquiries—on science and morality—lead to rather firm ending points in that the foundations of both are reasonably clear. We need to gain better awareness of these foundations if we are to make sense of our ecological plight.
When people talk about science, scientists included, they typically have one or both of two meanings in mind. Nonscientists typically refer to the body of factual knowledge that has arisen from or been confirmed through the scientific processes. Science is what we know pretty much for sure. In the case of natural science it is what we know about the natural world, its constituent elements, how it functions, and how it changes over time. Scientists use the term this way but they also use it to refer to the methods and standards by which new knowledge is generated and tested. Science is a process, guided by professional standards. It entails formulating and testing hypotheses and gathering and interpreting data under circumstances that are sometimes controlled, sometimes not.
What is important about science as thus defined is that it is all about facts and their interpretation (setting to one side science as engineering and technology). Science as method or process is a purely descriptive enterprise in the sense that it seeks understanding about what is, what was, and what will be. The sought-for descriptions can cover dynamic processes, not just static conditions. They can look back in time and forward, and include predictions based on stated assumptions. What they cannot do is pass judgment on the goodness or badness of any particular state of affairs, not without drawing upon at least a vague normative standard pulled from elsewhere.
We might consider, for instance, how scientists would describe two equal-sized fields, one covered by a tallgrass prairie of the type that once dominated east-central parts of the United States, the other a familiar expanse of soybeans planted in rows. Scientific descriptions of these fields would differ substantially in terms of their resident species (macroscopic and microscopic), the functioning and interactions of the species, nutrient flows, hydrology, and more. The descriptions would vary in both static and dynamic terms. Yet, the scientists doing this work could not, if pressed, tell us whether one field was better than another, or whether the condition of one field was more morally right, beautiful, or even useful. These questions, as noted, can’t be answered without drawing on standards of evaluation that come from outside science. The scientific data are, of course, essential to any evaluation; normative standards alone are hardly enough. Answers require that the two parts be brought together. Scientists might be the best people to do this work; they’ll have a better grasp of the often-complicated scientific facts. But it is work that reaches beyond science as such, and the conclusions of any assessment, whoever does it, are sound only insofar as the right evaluative standard is used along with good facts.
To see this is to see why it is problematic for atmospheric scientists to be expected to explain whether human-induced climate change is worrisome. The science part they can take on, challenging though it is. But the judgment about whether ongoing change is problematic requires use of a standard of evaluation. What is the best one to use and who gets to pick it? We could, of course, view any human-caused climate change as stupid or immoral, embracing a zero-tolerance policy. But to take this strict approach is, once again, to turn against the idea that people belong on the planet and can legitimately use it. It is to assume that all change is abusive and the less of it the better. Perhaps such a strict standard does make sense when it comes to climate; our knowledge of climate change is distinctly partial, and we don’t understand in particular how climate change, once it gets going, can feed on itself. But the absolute, no-change-to-nature standard is typically unhelpful in dealings with nature. It is not immediately apparent why it would make good sense in this setting.
Our tendency to treat climate change this way, treating the issue as one of science and expecting scientists to give advice, says much about where we are culturally. The prospect of human-caused climate change triggers profound moral concerns. Our tendency, though, is to view moral questions as properly lodged in the personal sphere of life, not as matters for public judgment—at least in the case of issues such as climate change that seem to pose new quandaries. Former vice president Dick Cheney expressed this perspective when he defended a new energy policy that only considered ways to increase energy supplies. The policy didn’t consider energy conservation, he asserted, because conservation was a personal virtue, not a matter of public policy. No doubt Cheney’s oil-industry ties had something to do with his stance. But his reasoning likely resonated with many people. Morality was like religion, a matter for people to sort out and implement in private life so long as they didn’t harm anyone else. Subject to a no-harm rule and with due regard for the equal rights of others, individuals can make their own subjective choices.
With this cultural slant, society as such has real trouble framing the climate-change issue sensibly. When morality is mostly about one-on-one interactions and individual rights, how can we talk about what is good for us collectively? How can we talk about moral obligations that we should bear not as individuals but as a people acting together? As taken up in the next chapter, public debate does often focus on economic growth, viewed as a desirable (normative) collective goal. And depending on how growth is measured, that goal can incorporate normative elements linked to collective welfare. But as we shall see, economics at root also places emphasis at the individual level, on the preferences people have as individuals. It is no substitute for direct public engagement on the wisdom or folly, the rightness and wrongness, of particular public policies. Our willingness to talk (to perseverate really) about economic growth doesn’t deviate much from the broader tendency to leave moral issues for individuals alone to resolve.
This public cult of objectivity is by no means normatively neutral. At first glance it may appear so, that a government that leaves people free to make their own normative choices is simply acting as referee, keeping the peace but not taking sides. But even a bit of probing disproves this claim. For instance, a government that takes no stance on abortion in effect allows it to take place. Permitting abortion is no more morally neutral than banning it. In the same way, a property-rights regime that allows landowners to destroy critical wildlife habitat on their lands is no more neutral on the matter of species protection than a system that obligates landowners to protect habitat. In both settings, public inaction is a value-laden choice. In both settings, the moral issue is resolved, not by addressing it directly at the communal level, but by turning it over for individuals to address separately. This route might aid public harmony. It might promote human welfare by empowering individuals to assert control over their lives, their bodies, and their immediate surroundings. But it is by no means normatively neutral on an issue where one person’s choice affects other people or the community as such.
When this happens, when an important policy issue is simply turned over to individuals for their free choice—based (for instance) on some strong preference for maximum individual liberty—then the normative issue too often is never really engaged and fleshed out in the public sphere. What gets missed, what often goes unidentified, is the initial, normatively charged question: whether decision-making on the topic should occur at the public level or whether instead it should be reserved for individual choice.
The strong push to protect if not enhance individual liberty, so pronounced in the United States and increasingly so elsewhere, is thus a political stance with considerable implications. This is particularly so when it comes to dealings with nature. Many normative elements of right living in nature can only be achieved by people acting together—protecting rivers as systems, for instance, and migratory wildlife. They require implementation by government, acting as agent of citizens. To push decision-making down to the individual level in such settings is effectively to rule out critical options, often without realizing it. The bias of the approach is toward options that people can act upon individually. That typically means slanting choices toward courses of action in which the resulting benefits are ones that individual actors can enjoy personally, pretty much without sharing with others. Why pursue a costly course of action when the benefits go in significant part—maybe almost entirely—to other people unless the other people are obligated to act the same?
The piece of the climate-change story not yet covered has to do with the issue of proof. The modern tendency, as observed, is to ask scientists for answers, and the question thus posed is factual: Are we in fact changing the climate in a significant way? Merely to pose the question to scientists is implicitly to ask for an answer using scientific methodology. Has it been scientifically proved that we are changing the climate?
Countless environmental issues are reduced to this same question. It is a common practice, one that needs unpacking to expose its cultural meanings and social consequences. What does scientific proof mean, is use of the standard sensible, and what does it say about modernity that we so regularly and instinctively employ it?
The issue of proof is linked to the definition of truth, a matter taken up in the first chapter. For scientists, the gold standard, the aim of all inquiry, is to establish truth using the correspondence definition. A statement about the physical world is true if it corresponds accurately with the physical conditions of the world, if it accurately describes reality. To the nonscientist, the standard seems plain enough and easily satisfied many times. But scientists bring more critical judgment to bear. Philosophers of science are even more demanding, so much so as to assert that no facts are ever fully proved, particularly facts about nature. Instead they are simply established to very high degrees of probability, always leaving open the possibility that new data will call for modification.
As already noted, explanations of scientific proof typically distinguish between two modes of reasoning, deduction and induction. Deductive reasoning begins with initial axioms, presumed to be true, and proceeds by logical steps to conclusions that are, in a sense, implicit in the axioms. Mathematical reasoning is of this type. When the logic is sound the conclusions reached are said to be proved, though of course their accuracy depends on the validity of the starting axioms. In the study of nature, including the study of ills such as climate change, deductive proof plays a distinctly secondary role.
More central to factual claims about nature is inductive reasoning, which begins by gathering sensory data about the world and inferring conclusions from them. For instance, if numerous balls are dropped and they consistently fall toward earth at the same rate, a conclusion can be drawn about what will happen when the next ball is dropped. Data about the falling balls are brought together and an inference drawn that the next ball will move in the same way. This reasoning seems sound enough and very likely is. But it depends upon a key assumption, first prominently explained in the eighteenth century and accepted ever since. The reasoning depends upon what is termed the Uniformity of Nature, the assumption that nature exists and behaves in uniform ways. Without that assumption, an inference about the next ball’s motion is not logically sound. So far as scientists can tell, nature does act uniformly in many respects but patterns in nature are by no means always uniform. Particularly when it comes to actions involving living creatures they can vary in ways we might easily miss. On the other side, nature might be acting uniformly without our knowing it because it follows a pattern more complex than we have observed. Nature’s dynamism comes into play, so a pattern that prevails for a time may also shift due to natural causes.
The illustration of the falling ball is a simple one in that experiments are easily undertaken under controlled circumstances. Balls can be dropped thousands of times, just as coins can be tossed thousands of times, with the results recorded and tabulated. In many settings, however, experiments are costly and time-consuming if they can be undertaken at all, and they may take place under conditions that are erratic. Often data must be collected simply by observing events as they unfold outdoors with no ability to control the operative forces.
Here we might consider a study of forest logging and its effects on the nesting success of various bird species. When a solid block of forest is disrupted by the logging of wooded patches here and there, what happens to the ability of forest-dwelling birds to raise their young? Realistically scientists can’t take control of multiple forests and conduct experiments in them again and again. Even if they could, the forests would differ in species composition, climate, hydrologic flows, and more. Inevitably researchers must gather real-world facts as they can and infer conclusions from the facts. To the extent the data show material consequences from the logging, the researchers then must formulate theories to explain them. As more studies are conducted, necessarily under varied conditions, more data come together and explanatory inferences (often revised) can gain strength. But scientific conclusions in such a setting are never as solid as in the case of balls being dropped. This is so because the assumption about the Uniformity of Nature seems less secure. It is so also because study conditions are uncontrolled, the likely relevant facts are vast, and the possible explanatory factors and forces so numerous, more numerous than the simple process of gravity at work on the falling balls. As a result, conclusions necessarily are more tentative. In the language of the philosophy of science, a conclusion in such a research setting is often termed an Inference to the Best Explanation. Scientific judgment is needed to decide how solid an explanation seems to be. In any event, a conclusion cannot be proved in any airtight sense. New data might call the inference into question and future scientists, reviewing the same data, might propose different theories of causation.
Given these limits faced by scientists, it is often said that science cannot prove claims to 100 percent certainty. Instead it can offer claims with varied levels of confidence behind them, based on probabilities that can approach 100 percent but never reach it. The point has been known for centuries, and it formed a core element of American Pragmatism at the turn of the twentieth century. Pragmatists added to this insight when they put forth the third of the definitions of truth, under which the truth of a claim or proposition was characterized by its widespread acceptance among scientists in a field and by the consequences that flowed from accepting it and acting on it. As the prominent philosopher of science Karl Popper forcefully explained in the twentieth century, science had the power to disprove propositions conclusively but could never prove anything completely. It had to get by with elements of uncertainty that could and did vary considerably among scientific disciplines based on the challenges they each faced and the limits on available research methods.
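One common way to picture this accumulation of confidence (a modern illustration offered here only for clarity, not the pragmatists' or Popper's own formalism) is Bayesian updating, in which each new piece of evidence shifts the probability assigned to a hypothesis without ever driving it to certainty:

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
\]

So long as the prior probability lies strictly between zero and one and the evidence remains even barely compatible with the rival hypothesis, the updated probability can climb toward 100 percent but never reach it; confidence accumulates, while proof in the absolute sense never arrives.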
These observations about scientific proof are especially pertinent to the case of long-term climate change. Plainly, atmospheric scientists cannot conduct whole-planet experiments in which they study the effects of human activities over very long periods of time. We have only one planet, people are living on it and changing it, and nature’s ongoing processes are dynamic. We can’t put the planet on hold while we conduct thousand-year experiments. Further, the relevant data awaiting collection are essentially infinite and scientists must get by with well-chosen pieces. Their goal can only be to draw conclusions by inference both on what the facts are—whether change is taking place and if so how much—and on why the change is taking place. Necessarily any conclusion would be a matter of probability. Necessarily also the accuracy of conclusions will be evidenced in the same way as other scientific truths: by widespread acceptance among climate experts and by the good consequences that come from assuming their validity (in the form, for instance, of newly collected data that conform to predictions). Scientists working in and with the IPCC know all of this full well. From the beginning they have presented their factual conclusions only as matters of probability. Over time, with each new multiyear assessment, the IPCC’s conclusions have typically (although not on all points) been set forth with higher levels of probability. Sadly, journalistic summaries of their work often omit the words of probability, or mention them quickly and then push them aside, rarely offering an insightful comment on the methods and constraints of science of this type.
Public confusion about the nature of scientific proof has hampered understanding of climate change. This poor understanding opens the door to demagogues who assert that if climate change were real all data would support it and the proof would be certain. More troubling still is that the burden of proof being publicly used is a scientific one. Scientific standards play essential roles in the scientific process. But is it wise to use them outside of that arena? Is it wise to insist that facts be accepted in public affairs only when established to an extremely high confidence level?
What is commonly termed “scientific proof” is only one of many burdens of proof in regular daily use. Burdens of proof are the stuff of law practice and legal systems. In civil courtrooms in the United States, facts are accepted as proved if they are supported by the preponderance of evidence adduced at a trial, which is to say supported by 51 percent of the evidence. The standard is higher in the case of criminal trials. There, the prosecution is obligated to prove key facts beyond a reasonable doubt, a standard that defies translation into numerical terms but is certainly below 100 percent. In other legal settings different standards of proof are used. An intermediate one between these two is proof by “clear and convincing evidence.” An even lesser standard is one in which factual conclusions are treated as adequately supported unless the underlying evidence is so insubstantial that the conclusions seem not just unlikely but arbitrary and capricious. No judicial proceeding requires that facts be supported to the level of scientific proof. Indeed, criminal defendants in the United States are put to death on lesser proof than that.
For further comparison we can turn to daily life. We routinely exercise caution to avoid dangers that are unlikely to happen. Often we are unwilling to assume even small risks of harm. Who would get on an airplane, for instance, if told there was a 50 percent chance of the plane crashing, or even a 5 percent chance? It is hardly sensible to ignore such a danger. It is hardly sensible to brush it aside on the ground that the factual prediction of an upcoming crash has not been scientifically proven. Who would eat food that was 10 percent likely to cause serious illness? At the extreme of caution, we might consider the case of the US Secret Service, charged with protecting the president. The Secret Service, we can presume, takes a death threat seriously and acts on it even if its chance of happening is 1 in 1,000, or 1 in 100,000.
In this light we can reconsider our social tendency to frame climate change in terms of scientific proof. Is it not more ethical and sane to pose a much different question: Is the evidence in hand, indicating that a problem looms, ample enough to merit a remedial response? If that were the question, how much evidence of possible harm would we require? How likely would the danger need to be to prompt corrective action? Presumably our answer would take into account the costliness of the correction, assuming (as we do) that actions to reduce climate change would entail net costs, that is, costs greater than the non–climate related benefits they would also generate. (As an aside, we can note how critics of climate change, insisting on yet higher levels of proof of harm, are at the same time often inclined to grab tight to highly speculative claims about the net costs of halting fossil-fuel usage.) Aside from the cost issue, important normative factors are also highly pertinent when asking how much danger is too much—factors of morality, social justice, and the wisdom of precaution.
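A stylized expected-cost comparison (an illustration of the implicit weighing, not a framework the chapter itself proposes) makes the structure of the question visible. With \(p\) the probability of the harm, \(H\) its magnitude, and \(C\) the net cost of the corrective action:

\[
p \cdot H > C \quad \Longrightarrow \quad \text{a remedial response is warranted.}
\]

On such a reckoning, even a modest probability can justify a costly response when the harm at stake is catastrophic, just as a small chance of a crash keeps us off the plane. The normative work lies in valuing \(H\), deciding whose losses count, and deciding how much extra precaution to build in when \(p\) itself is uncertain.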
At root, it is hardly possible to defend the use of scientific proof as the appropriate burden when talking about potentially catastrophic harm. A much lower burden seems manifestly in order. So why do we still talk about scientific proof? How did the public issue get framed like this to begin with?
To the extent this burden was pressed upon society by climate skeptics—mostly industry funded—it amounted to an extraordinary rhetorical victory. The use of the scientific standard strongly skews debate to favor inaction and the status quo. It also means a scientist hired by critics can publicly claim that climate change has not been proved in the scientific sense while privately admitting that it was in fact highly likely to occur. The strategic value of this mode of resistance, centered on burden of proof, has been known for generations. In her bestseller Silent Spring, from 1962, Rachel Carson made the case that we should, as a matter of prudence, exercise greater restraint when using deadly pesticides given the ample evidence of their dangers, evidence that Carson reviewed at length. (She also contended that the use of pesticides directly on people and on their lands violated individual rights since people had not given informed consent.) Carson’s industry-funded critics, however, immediately pushed forward a different frame for the debate. In their responses, they contended that she had not scientifically proven all of her allegations about pesticides. The issue, they replied, was one of scientific proof. Carson’s language of prudence and caution (and rights violations) got shoved to the side.
In terms of climate change, it is revealing and dismaying that public discussion (particularly in the United States) shows little awareness of the vital normative considerations that go into selecting a burden of proof. This lack of awareness, though, is consistent with the many other ways in which normative issues are pushed out of the public arena, usually into the private realm or, as in this instance, off the table completely. It is a significant intellectual failing. It compromises our collective ability to make sense of climate change and explains, better than any other factor, why the reality of climate change remains contentious. Had the facts of climate been submitted to a criminal jury, the kind that hands out death penalties, the claim of climate change would long ago have been proved beyond a reasonable doubt. A civil jury using a preponderance of the evidence standard would have drawn the conclusion a generation ago.
Before turning from science to the foundations of morality, two further points might usefully be made about our cult of objectivity and exaltation of science. The first has to do with the longstanding confusion about safety and what it means for something to be safe. (The same confusion surrounds the word “risk.”) What does it mean to say something is safe—genetically modified food, for instance, or hormones or antibiotics fed daily to animals destined for human consumption? As widely used the term has several quite different meanings. Debates over safety are often confused and degraded by a failure to get clear on them.
Safe can mean a zero risk of any bad consequence. Using that definition almost nothing is safe. It isn’t safe to get out of bed in the morning, nor is it safe to stay in bed. It certainly isn’t safe to drive a car or ride a bicycle. Safe can also mean that the risks associated with something are so minor or trivial that they can be ignored. Beyond that, it can mean that the benefits associated with something are more significant than the expected harms so that there is, on balance, a net gain. As should be apparent, the second and third of these definitions require an assessment and weighing of dangers and costs, or an assessment of both benefits and costs—an assessment that entails, again, the use of normative standards. Safety in the second and third senses is thus not simply a matter of fact. It is not a matter upon which science alone can pass judgment. It is perhaps plausible enough for a Monsanto to contend that its genetically modified crops are safe, but in fairness it needs to release all of the evidence that it has used in making the assessment and also explain clearly the normative standard (including the burden of proof) that it has employed.
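The three senses can be set out schematically (an illustrative gloss, with \(p\) the probability of harm, \(H\) its magnitude, \(B\) the expected benefits, and \(\varepsilon\) a tolerance threshold):

\[
\text{Safe}_1:\; p = 0 \qquad\quad \text{Safe}_2:\; p \cdot H < \varepsilon \qquad\quad \text{Safe}_3:\; B - p \cdot H > 0
\]

Only the first is a purely factual claim, and it is almost never satisfiable; the second and third turn on where \(\varepsilon\) is set and on how benefits and harms are valued and weighed, which is precisely the normative judgment described above.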
Second, the push to have science as our public arbiter is sometimes associated with an insistence that the only evidence relevant to an inquiry—the only evidence that can be taken into account when assessing the truth of a claim—is evidence that takes the form of scientific studies in peer-reviewed journals. On its face this insistence would seem to improve the accuracy of ultimate findings. Sometimes it does. But it will do so only when and if peer-reviewed studies cover the relevant data with considerable thoroughness. When that is not the case, when published studies have not (yet) taken into account much relevant data, when they cover only scattered or spotty aspects of an issue, then conclusions from peer-reviewed studies might rightly and usefully be supplemented.
Here again we can consider the courtroom, where evidence is admitted so long as relevant and not plainly unreliable or prejudicial. There is no insistence that evidence be limited to scientific studies. Similarly, in everyday life risks are immediately processed based on experience and judgment. In the case of climate change, for instance, we have reams and years of recorded testimony by birdwatchers about how migration patterns of many bird species have shifted in ways consistent with a changing climate. Evidence of this sort, though, is criticized as anecdotal and unreliable. Climate critics insist that it be ignored. But evidence of this sort from hundreds or thousands of sources has much value. As important, it is the only kind of evidence that ordinary concerned citizens can contribute to the public discussion. To insist on using only peer-reviewed studies is not just to push aside valuable, pertinent evidence but also to push aside ordinary citizens and keep them from playing a role.
For many centuries people generally, Western philosophers included, largely took for granted that the world was structured or guided so that particular actions could be objectively right or wrong, or good or bad. (Some prominent ancient philosophers had said otherwise.) Actions or states of affairs might be wrong or bad because they clashed with some transcendent ideal of goodness and justice. They might instead be inconsistent with the purposes or end goals immanent in a creature or thing. They might disobey the wishes of a god or spirit or conflict with revealed religious wisdom. The variations were many, particularly when the known world included creatures and particular places that were inhabited by potent and demanding spirits.
To trace how we got to where we are today, in the ways we think about morality and its origins, we can take up the storyline with William of Ockham, a fourteenth-century English friar who as a Franciscan worked with people and nature outside the cloisters. Ockham became an early, prominent critic of the prevailing belief that ideals such as goodness and justice had real existence, that they were universals or Forms, in the language of Plato. To the contrary, Ockham contended, ideals such as these existed only within human minds as mental concepts. They were words that we used to express things within our consciousness, not labels linked to real things that existed apart from us.
In Ockham’s day morality was chiefly the province of religion, based on instruction from God. It was thus significant when Ockham asserted that it was not possible—either by studying God’s creation and gathering sense impressions from it, or by reasoning from those sense impressions—to draw conclusions about God or about religious matters. One could only know about God by means of revelation, Ockham insisted, by way of scripture or through direct spiritual insight. This division of sources of knowledge in effect separated the world into two realities: the reality of the empirical world given by direct experience, and the reality of God and God’s teachings known only through revelation. Put simply, there was religious truth and scientific truth, and the two were not directly linked.
Ockham’s reasoning had many consequences. In time his view helped free scientific inquiry of nature from the strictures of the church. If nothing learned from the study of nature told us anything about God, then no scientific conclusions could be viewed as blasphemous. As important, this separation detached data collection and inductive reasoning from any inquiry into the moral order. To grasp morality, some other means of inquiry was needed.
Ockham’s two-part division of reality and knowledge worked fine so long as religious faith held up and knowledge by revelation was deemed as sturdy as empirical knowledge (as he believed). But with the continued advance of scientific thinking the two realms of reality became increasingly less equal, particularly by the seventeenth century with its rising commitment to empiricism and induction. By the following century, the Enlightenment Era, faith in man’s powers and knowledge had risen high while scriptures and other revelation seemed less and less believable, in part because they lacked support from the senses and science. As science continued to rise, it became ever harder to keep the two forms of reality separate. Step by step Enlightenment leaders challenged revelation, moving further down the road to secularization. Yet, as historian Carl Becker pointed out in his influential study, The Heavenly City of the Eighteenth-Century Philosophers, Enlightenment figures nonetheless mostly retained a firm commitment to Christian morality. They continued to anticipate movement toward some form of heaven on earth. Thus, even as they pushed aside the Nicene and Apostles’ creeds (and on to deism in some cases) they retained faith in an overriding order, structured by both moral and physical laws and, contra Ockham, discoverable by human reason. With God collapsed into nature one could, by studying nature, gain wisdom about the transcendent moral order and how people were supposed to live.
The trouble with this stance was soon evident. The practical study of nature and the use of reason simply didn’t yield much in the way of moral instruction, at least much that was unambiguous. As early as the mid-century, Becker concluded, leading thinkers were admitting the feebleness of human reason on moral issues and were softening their caustic attacks on tradition and church. Nature and Nature’s god didn’t seem to have much guidance to offer. Various Scottish philosophers were among the first to shift ground and propose a new foundation for morality. Our moral senses, they asserted, arose not within our rational minds as long believed but instead from the emotions or sentiments we experienced as we engaged with the world. By this they meant not transient feelings but our more deep-seated, long-term sentiments about right and wrong, sentiments that, they thought (or at least hoped), were strong and stable enough to support something close to real, binding moral standards.
Such sentiment-based claims were listened to attentively even as the defenders of science and reason still believed that they held the keys to progress. Before the century had ended, however, another disruptive element had gained enough power to unsettle moral thought. This was the rise of liberal individualism, the growing belief (fueled by the late-century revolutions) that individuals as such were not just morally worthy in the eyes of God but were endowed in some manner with individual rights.
The idea of individual moral value, shared equally by all people, had become a central element of Christian thought in the Middle Ages, even as economic and social orders remained highly stratified. By stages the reasoning gained ground, breaking forth initially with claims of religiously grounded natural rights and then with rights that secular realms needed to respect. Public morality, it was claimed, had much to do with the recognition of and respect for these rights. It had to do also with the rule of law, similarly gaining strength. Moral thought should take the form of rules that bind and apply equally to all people, so said the late century’s leading philosopher, Immanuel Kant. And those rules, Kant contended, should be ones that respect the worth of individuals as such, rules that treat each person as a morally worthy subject rather than merely as a tool or object. Moral thinking, then, properly begins with the individual human considered in isolation, not with the overall natural world and with an effort to understand its inherent order and how people rightfully fit into it.
This new moral reasoning in retrospect was based on human egoism, even as early adherents (Kant included) tended to retain the moral principles of Christianity (Lutheranism, in Kant’s case). Kant framed his moral reasoning in terms of the moral duties borne by individuals; it was duty-based reasoning. Soon this reasoning came to be thought about as rights-based given that an individual’s duties chiefly related to moral obligations, first, to respect other people as morally worthy subjects, not as mere objects, and, second, to live according to moral rules that one would want to apply equally to all other people.
Kant’s moral reasoning spread widely. In time it would become one of the two dominant forms of Western moral thought, referred to as the deontological (duty-based) approach. It contrasted with modes of reasoning that judged moral rightness and wrongness based on the consequences of an action, particularly the effects on human happiness or flourishing. Kant’s new reasoning, though, was not immune to the problems that Hume had earlier identified. Kant had to assume that humans were morally worthy subjects, just as Christian tradition taught. It was an axiom that he took to be, in Jefferson’s terms, self-evident, not one drawn from facts or pure reason. Yet if humans were worthy, why not other creatures as well, and why didn’t moral worth reside in families or villages or tribes along with, or instead of, individuals? Neither facts nor reason could explain why one starting point was sounder than another. Further, Kant similarly carried forward the focused morality of Christian tradition that honored individuals as autonomous beings, as independent moral agents, rather than (as pre-Christian traditions typically did) as embedded members of families, clans, and tribes.
A more significant problem for Kant came from his admonition that people abide by rules that they would have apply to everyone. It sounded stern enough, a version of the Golden Rule, but it said little about the content of such rules. It allowed a person to act quite selfishly and ruthlessly so long as he was prepared to have other people act in the same way. As for the rights that emerged out of the recognition of individual duties, their content varied greatly based on the rules of conduct that were crafted. So which rules should prevail?
What became clear in time was that the content of Kant’s rules, and what it meant to treat another person as subject rather than a mere object, couldn’t be shaped by reason alone, just as Hume had earlier pointed out. The content had to come from somewhere else. Kant believed in God and asserted that individuals should act out of a spirit of goodwill. For Kant these starting points (augmented by speculative logic) seemed adequate. But by the next century, with religious belief on the wane, Kant’s religious foundation seemed less sure. The more solid grounding for morals, the only sturdy grounding perhaps, seemed to come from some form of moral sentiments. It came from a deep-seated sense within people about right and wrong, doubtless shaped to varying degrees by genetic inclination, experience, and inherited culture.
Kant’s legacy, though, would remain strong, not just in his stress on duties/rights and on the importance of moral rules as such, but in two other important ways: in his insistence that individuals as such were free and fully responsible for their own choices, and in his contention that humans played active roles in interpreting the surrounding world. Human senses and knowledge were limited, Kant asserted, which meant individuals could rightly embrace understandings that went beyond the empirical facts. With knowledge limited we were free to believe, and indeed, he proclaimed, had to believe in order to live morally.
Over the course of the nineteenth century, philosophers in various ways largely came to agree that moral principles simply had to arise in some manner, direct or indirect, out of human action, conscious or not. They could not simply be found in nature and could not arise from pure reason alone. Nor were philosophers willing to concede authority to the church or to scriptures or other forms of revelation. A person might simply choose to embrace the church’s teachings, as Danish philosopher Søren Kierkegaard would. But it was the individual choice, then, that gave authority to the church. With this stress on individual choice morality increasingly came to seem subjective and personal, a matter of individual opinion based on individual experiences. The liberty rhetoric of the revolutionary era pushed in this direction. So did Kant with his insistence on individual freedom and will to believe. It was an appealing line of reasoning and a venerable one, too, with a heritage reaching back to the ancient Greek Sophists.
Yet, even as they increasingly stressed individual freedom and the power if not duty to choose, philosophers did not lose track of the reality that individuals participated in a social order and had to get along with one another. People formed communities. Somehow the moral order had to sustain the welfare of these communities. Writing in the eighteenth century, Jean-Jacques Rousseau believed that the higher self was one who would (or should) identify with the good of society as a whole. The mature moral being was one whose personal desires and happiness blended with those of the community as such, so that no conflict between the two existed. Writing at the turn of the century, Georg W. F. Hegel also retained emphasis on the larger social whole by insisting that the world’s parts were all connected, humans included, and that parts could not be understood in isolation. It was essential to consider also their relationships and interactions. The larger issue here—the parts and the whole—soon became central in utilitarian moral thinking, which arose in the first half of the nineteenth century in the writings of Jeremy Bentham and of James Mill and, especially, his son John Stuart Mill.
For the new utilitarians, the morality of conduct was best judged not by reference to abstract moral principles or Kantian rules but instead by calculating the effects of an action on human welfare (originally on human pain and pleasure, later on happiness more broadly understood). A moral act was one that brought the greatest net gain in human welfare compared with alternative acts that might be performed. Some versions of utilitarianism would insist that actions comply with rules, and that utilitarian calculations should focus on the comparative consequences of different rules rather than of distinct individual acts. Yet, all versions looked to the consequences of acts to judge their goodness. What was quickly apparent, though, was that this approach seemed plausible only if it took into account the happiness or welfare of everyone; it wasn’t sensible for an individual simply to maximize his own happiness and ignore the effects of his conduct on other people. It was clear, too, that one person’s happiness often arose in circumstances that diminished the happiness of someone else. How, then, to align the happiness of the individual with the happiness of humankind as a whole? Bentham further complicated his calculations by contending that the happiness of certain nonhuman species ought to be considered as well.
These concerns about the larger social order, about humankind as a whole, tempered the push to expand individual rights and liberties. In some way people had to act as good community members. But where was one to find the moral limits that bound individual freedom, and what made them binding? Bentham’s original calculation, based on individual pleasure and pain, seemed to be grounded in empirically testable facts. It was an objective standard, whether a person did or did not experience pain. Bentham merely had to assume as a starting axiom that pleasure was good and pain was bad, nothing more. With the shift, though, from pleasure to happiness and to welfare generally as the operative unit, utilitarian calculations seemed more and more to defer to individual preferences. What made people happy or promoted their welfare? Answers seemed subjective, not objective. This new focus on happiness or welfare also made it harder to compare and sum up the consequences actions had on different people. How did one add up the good and bad consequences of a particular act or proposed rule of conduct when the consequences were based on individual subjective responses and there was no metric to use in measuring and comparing them? And what about actions that made some people happy and others unhappy?
Like Rousseau, John Stuart Mill, the greatest of the utilitarians, hoped that people would progress morally to the point where they aligned their personal preferences with the well-being of the larger community. If that happened, the conflict would disappear. Writing soon thereafter, Karl Marx similarly hoped that the desires of individuals would in time merge with the welfare of the community as a whole as basic human needs were met. Ideally this moral uplift would lead to the disappearance of distinct social classes and even to the end of government (a tool, Marx said, used by the stronger class to exploit the weaker and thus not needed when classes disappeared). Marx, though, was far from firm in making a prediction on this; he merely hoped it would happen. Mill, too, understood that his vision of harmony was based mostly on hope and on his admitted inability to see any other way for individual happiness and group happiness to line up.
At its root, utilitarian thought was chiefly a mechanism that turned decision-making authority on moral issues over to individuals. Their happiness or welfare was what typically counted. Consequentialist moral thought generally only worked when some normative standard existed to judge the consequences. Which consequences were good and which were bad? However up-to-date and mathematical utilitarianism might be, it had no good answer except to leave individuals to decide for themselves and then to sum up their answers. But this was merely a procedural approach to morality. It did not decide what was moral; it simply prescribed who would get to decide. And this was true even when utilitarianism was put to use—as it was, quickly and often—to criticize institutions and laws and to promote reforms to augment overall happiness. Reformers still had to look to individuals and find out what they wanted.
As for the individuals themselves, utilitarian thought left them free to develop their preferences as they might choose, using various modes of reasoning, religious faith, passions, or mere whimsy. It provided little in the way of guidance except as it implied that they should think of themselves and what made them happy. Plainly it made moral discussion at the social level more constrained. The aim of government and public policy was simply to help individuals as such gain happiness. The good of the whole was merely the sum of the good of the parts in isolation. But where did that leave the idea of a common good, the idea of larger moral or prudential goals that society as a whole might pursue? Where did it leave the age-old idea that morality was a matter of obligations that imposed external constraints on individuals without regard for their wants and wishes?
Looking back, the patent incompleteness of this moral reasoning as it came together in the nineteenth and twentieth centuries—both the Kantian deontological reasoning and utilitarian moral reasoning—arose from the rejection of religion and revealed moral knowledge and from what has come to be called the is/ought dichotomy. The basic claim was that the empirically learned facts of the world simply did not offer moral instruction, even when the facts were manipulated using reason. The physical stuff in the world merely existed as such. There was no goodness or badness about it. Accordingly, one could not draw normative conclusions by studying the world. One could not go from the “is” of an existing thing or condition to the “ought” of what should be. One could not go from facts of the world to values; facts and values were categorically different. This dichotomy would be challenged on the ground that human fact-collecting itself was not value-free, which meant that facts as people understood them were necessarily infused with human values. But it was largely agreed that facts, to the extent they could be gathered objectively, were not themselves (that is, taken alone) the source of values. Values required at least some engagement of the human will. Human engagement in turn was largely grounded in human feelings and sentiments, guided and pruned by reason, that is, by the complex mechanisms of the human brain that Freud and others would soon open to the world.
This separation of facts and values hardly meant that facts were irrelevant in making moral judgments. It meant more modestly that the basic values that were used to pass judgment had to come from some other source, even if the values were simple principles (for instance, that humans have moral value, or that human pain is bad). By the twentieth century important moral philosophers were questioning Hume’s dichotomy, claiming that the social embeddedness of individuals, their primary existence as social beings, played a key role in shaping moral values. Individuals were not simply autonomous actors. They were parts of larger systems and morality had to do with their roles in the systems. The facts of this embeddedness, they asserted, were themselves infused with values. The is and the ought were not, after all, so distinct.
This stress on the social roles of humans appeared prominently in the work of the leading American philosopher John Dewey. Dewey didn’t deny that people were individuals but he insisted on challenging and blurring the presumed line between the individual and society, much as did his slightly older contemporary, pragmatist William James. Individuals were embedded in society, Dewey (and James) claimed. Solidarity—fraternity, others would call it—was as important a value as independence. Dewey also believed, like others before him, that the parts of the social order could not be understood without first grasping the whole and seeing how the parts related to it. Further, the self could be realized only in and through its communal roles. Dewey’s thought resonated with important contemporary lines of thinking then understood as conservative, thought that similarly embedded humans in a traditional order (often hierarchical) and that held people accountable for acting responsibly within that order. People fulfilled roles and roles were governed by status rules and expectations.
Dewey’s process-oriented thought highlighted the reality that moral thinking could not be detached from thinking about the nature of existence, from the subject of ontology. The drift of Western intellectual thought since the eighteenth century had been in the direction of greater individual autonomy. The American and French revolutions proclaimed it. Kant helped give it a philosophical grounding that Bentham and others built upon, even as they disagreed with many of Kant’s claims. Both Kantian and utilitarian thought began with the individual as such and moved outward, exploring how the moral worth of the individual might best be translated into a more encompassing scheme of moral thought. Both approaches stumbled, for the reasons given. But they gained dominance nonetheless because they fit so well with calls for ever-greater liberty (particularly in economic realms). They fit, too, with pushes to expand human rights to cover groups of people long on the lower rungs or social fringes.
By the late twentieth century, more and more observers were emphasizing how individual welfare depended on the types and health of a person’s social roles, many following lines of reasoning termed “communitarian.” Stark individualism just didn’t fit the ontological facts. In moral thought this would translate into a renewed interest in the moral writings of Aristotle, who had similarly portrayed humans chiefly as social beings. Aristotle’s ontological understanding led to moral reasoning that emphasized a life of virtue or, more broadly, flourishing or excellence. Morality was not chiefly about abiding by particular binding rules (Kant), or about promoting the sum of individual human happiness (Bentham), but instead about people living virtuously in and among their neighbors and fellow citizens, honorably carrying out their social responsibilities. Late in the century, Aristotle’s moral reasoning would gain an articulate proponent in philosopher Alasdair MacIntyre, whose writing directly challenged modern liberalism in its politically varied forms.
This interest in questioning individualism, in renewing the links between morality and ontology, would appear prominently in environmental philosophy. It was in the environmental area where the physical connections among people were most readily apparent. As ecologists had long pointed out, humans were embedded in larger natural systems. They were, as Aldo Leopold had famously put it, not conquerors of the land community but plain members and citizens of it, as dependent on nature for their survival as any other living creature.
These realities about nature’s functioning and human dependence have direct moral relevance. They relate also to our ways of dealing with one another given that one person’s actions in nature inevitably affect other people. This new ecological ontology has become even more striking as we have learned more about nature’s functioning and how human welfare depends upon it, upon nature’s “ecosystem services,” as the dependence is sometimes termed. Moral living requires respect for these natural processes if only because they make human life possible.
These ecological realities would lead environmental philosopher J. Baird Callicott to issue a direct challenge to the is/ought dichotomy, much as social ecology had earlier led John Dewey to do so. The facts of the natural world, the ecological realities of interdependence, do have direct moral significance, Callicott urged. Humans in fact are parts of natural communities without regard for what they know or prefer. The facts of interconnection directly challenge both Kantian moral thought and standard utilitarian thought. Both presume the basic autonomy of the individual and construct a moral scheme by teasing out the implications of recognizing individual moral worth. Both lines of reasoning show serious deficiencies, however, the moment the abstract individual is reconnected to nature’s lifelines. Looking back, Callicott contended, moral thought veered off course after the mid-eighteenth century by overemphasizing the autonomous individual. Particularly when it came to dealings with nature, the better approach was the older one, the one traceable from Plato and Aristotle up to Adam Smith, the one that began ontologically with humans embedded in communities.
Ecological facts, then, did play a direct role in giving rise to values. Yet ecological facts and social facts were not alone enough to flesh out a moral system. It remained necessary to draw upon values from deep-seated sentiments. It was necessary to turn to sentiments about the value of life; about the rightness of taking care of nature for future generations; and about the value of the whole of nature as such along with the special value of its human members. Nature’s ways were exceedingly complex. Right ways of living in it necessarily called for serious scientific efforts to learn and make sense of this complexity. In short, reason, facts, and sentiments all had their moral roles to play.
The wide scope of this chapter makes it helpful to organize the concluding points in summary form. They provide foundations for later parts of the inquiry.
The modern age highly values objectivity when it comes to addressing public issues. Objectivity typically means sticking with facts and logical reasoning while pushing subjective feelings and preferences off to the side. This stress on objectivity appears in various forms, including a tendency on environmental issues to turn contentious matters over to science and to expect scientists to explain whether a problem exists. Without question, the need for good scientific facts is quite high. But science is regularly called upon to answer questions that go beyond it, beyond science aptly understood as a skilled effort to gather, test, and interpret facts. When it comes to normative issues, science cannot give answers and should not be asked to do so unless expressly supplied with normative standards to use. This overuse of science extends to scientific methodologies, particularly burdens of proof and scientific standards for accepting evidence.
Looking ahead, the work of finding our place in nature requires that we make sense of science and then put it in its rightful place. Normative issues need to be identified and understood as such, not treated as factual questions. This work includes thinking clearly about which moral and prudential questions should be resolved and acted upon at the community level and which are better reserved for individual choice. In dealings with nature, many policy options must be selected and implemented at the community level. They require people to work in concert.
As for moral thought, our predicament today is fairly plain. We’ve followed an intellectual journey over the centuries to a place where we have curtailed our capacity to engage and resolve issues at the community level. Our Enlightenment-derived commitment to objectivity leaves us without sufficient tools to exchange moral views and visions and, through rough consensus, to embrace new axioms leading to better public policies. Moral principles, we now realize, are not simply out there waiting to be found. Yes, facts are highly relevant in moral thinking and reasoning plays a critical role. But ultimately our moral thinking will be grounded in our sentiments and intuition, which means, far from being pushed away, sentiments and feelings should be given a central place. They should be aired, exchanged, discussed, critiqued, and refined.
Moral orders begin with people formulating and embracing axioms that simply cannot be proven scientifically or logically. Thomas Jefferson’s self-evident truths were not supported in that way, nor could they have been. He offered them rhetorically, as moral philosophers had always done and always must do. They gained their legitimacy as axioms not when Jefferson pronounced them but later, through public embrace. People of Jefferson’s age and thereafter agreed with his moral assertions; they accepted his self-evident truths and by their choice turned them into axioms.
Along with Jefferson’s self-evident truths we might illustrate the nature of moral axioms by considering briefly the claim that human life is morally worthy. This, too, is a rhetorical claim, ungrounded in facts or logic, which has arisen out of Christian teaching and slowly gained acceptance over time. Animal-welfare advocates would have us broaden this now-fundamental moral axiom. Moral value, they claim, should extend beyond humans to other life forms. Members of certain other species are also morally important and we ought to make room for them in moral reasoning. Such a claim, of course, is not really grounded in facts or reason alone (though both are certainly used). Instead it is a moral assertion offered for acceptance as a new collective axiom, much as Jefferson offered his claims. The moral claim can of course be challenged. But it cannot rightly be dismissed on the ground that it is not scientifically valid or not based in facts or logically compelled. If that were the test, the claim that humans have value would similarly fail.
Moral reasoning ultimately arises out of sentiments and feelings mixed together with facts and clear thinking. We might think of this mixture as a heady soup, its elements running together in ways that make it impossible to specify their precise contributions. Enlightenment Era philosophy was right: facts do not alone give rise to moral values. But facts play key roles not just in implementation but in the original formulation of the values. Reason, too, must be in at the beginning, if only to clarify sentiments, to put them into sensible form, and to expose them to the realities of the world.
Ultimately values gain legitimacy by social choice, as philosophers have long emphasized. Without a human valuer to create or otherwise recognize it, moral value does not exist in a meaningful sense. That process could attribute value to individual members of other species or to future generations. It could recognize value also in entities—in species as such, biotic communities, and specific landscape features. Value that arises in this way is intrinsic or inherent, value that exists independently of any contribution to human well-being. But this value still rests on human choice, however inspired and encouraged by nature. That reality must be understood, just as it must be known that, because all morals arise from human choice, it is appropriate and essential for humans to be and remain choice-makers.