David Hume (1711–1776) looked more "like a turtle-eating alderman" than a philosopher. When this cheerful Scot became popular in the French salons, the philosophes poked fun at his stoutness. Once, as Hume entered a room, d'Alembert quoted from the beginning of St. John's Gospel: "And the word was made flesh." A lady admirer of le bon David riposted, "And the word was made lovable."
After Hume achieved his ambition of literary fame, his friends urged him to update his best seller, History of England. He demurred: "Gentlemen, you do me too much honor, but I have four reasons for not writing: I am too old, too fat, too lazy, and too rich." Hume preferred supper parties. At one he hosted, a guest complained about the spitefulness of the world. Hume replied, "No, no, here am I who have written on all sorts of subjects calculated to excite hostility, moral, political, and religious, and yet I have no enemies, except, indeed, all the Whigs, all the Tories, and all the Christians." (Fadiman 1985, 293)
Hume thought reason offered almost-zero support to common sense and subzero support to religion. Empiricism is about setting limits. Rationalists claim that pure reason can demonstrate substantive facts about the world. Empiricists say that only experience can reveal what exists and how it works. Pure reason can only tell us what follows from what. The revolution in physics seemed to vindicate empiricism by requiring all claims about nature to be backed by observation and experiment.
Starting from these scientific, no-nonsense premises, Hume organizes a feast of paradoxes. First, just as there is no arguing over taste, there is no arguing over ultimate ends. "Ought" judgments are always relative to some stipulated goal. Reason and experience tell us only which means secures which ends. "Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them." (1739, 415). A cannibal can rationally prefer to eat his children rather than feed them.
Turning our attention to the main course of nature, we cannot justify the belief that the knife forces the flesh to separate from the lamb’s thigh bone. We only observe correlations, never causation. Nor can we observe the future. Thus, we have no basis to believe that the future will resemble the past. Yes, bread has nourished in the past. But in the yet-to-be-sampled future, bread is just as apt to be poisonous.
Hume’s bad news does not relent if we remain cautiously in the present: We cannot justify beliefs that our ideas faithfully portray the nature of external objects. We cannot even demonstrate that the bowl of cherries continues to exist when everyone leaves the room to sip sherry. Indeed, we have no basis to believe that there are things that exist independently of thoughts.
What goes for objects goes for subjects. The thinkers supposedly responsible for thoughts are no more observable than plates and pudding. Consequently, the belief in substantive selves is a mere prejudice. Descartes’s “I think, therefore I exist” overinterprets the data. He is only entitled to observe “There are thoughts.”
At the end of this heady evening, all that remains are ideas. Ideas have gradually consumed what they were assigned to represent—including their thinkers.
Hume was pleased by the power of skeptical arguments to damp down superstition and temper religious fanaticism. However, like many circumspect individuals, Hume partly envied the credulous. He regularly attended church services led by a sternly orthodox minister. When Hume was reproached for being inconsistent, he answered, “I don’t believe all he says, but he does, and once a week I like to hear a man who believes what he says.” (Fadiman 1985, 293)
Hume believed that even a master skeptic such as Sextus Empiricus can have only a fleeting effect on ordinary convictions. Common sense and perception are essential to survival. Nature ensures that reason cannot suppress them. While in the study, the philosopher may entertain doubts about whether there are external objects. But once he returns to the company of his friends and society, nature and custom reassert themselves. Aside from religion, the aspiring skeptic winds up believing what his neighbors believe.
Hume’s bemused solidarity with the great herd of humanity did not appease his enemies. They perceived Hume as abdicating his responsibility as a philosopher. Instead of justifying our central convictions or extending our knowledge, Hume said that we must content ourselves with psychological explanations. Questions about what we ought to believe were to be replaced by a description of how we in fact form beliefs.
David Hume was provocatively slow to distinguish between sages and madmen. This led the ministers and town council to vote against Hume's appointment as a professor of moral philosophy at the University of Edinburgh in 1745. Instead, he had to become the tutor of a nobleman—who turned out to be insane. Henry Thomas relishes the poetic justice:
The philosopher and the lunatic lived together in a secluded house. The laird’s moods were unpredictable. One day he pressed Hume to his heart. The next day he drove him out of the room. He purred like a kitten and barked like a dog. He leaped over the sofas and scrambled down the banisters. He crept stealthily over the carpets and sprang upon his tutor with a ghostly laugh. Finally they locked him up. He begged to see his tutor and discussed with [him] the perplexing questions of human reason.
(1965, 124)
What a twisted ending! Let’s try to understand what went wrong by retracing the steps of British empiricism.
John Locke (1632–1704) inaugurated the British tradition of empiricism with An Essay Concerning Human Understanding. He had earlier discussed human understanding with five or six friends. But they bogged down in verbal disputes, untestable assertions, and circular reasoning. Locke concluded that "before we set our selves upon Enquiries of that Nature, it was necessary to examine our own Abilitys, and see, what Objects our Understandings were, or were not fitted to deal with." (1975, 7:14–33) Instead of aspiring to be one of the master-builders of physics or chemistry, Locke resolves to work as an "Under-Labourer . . . clearing the rubbish that lies in the way of knowledge." (1975, 9:34–10:26)
Locke is an anti-authoritarian who thought it was demeaning "to live lazily on scraps of begg'd Opinions." His guiding principle was that everything we learn about the world is through experience. If you have never tasted a pineapple, you cannot have a just idea of how it tastes. Experience is essential for the formation of concepts and therefore for knowledge of the propositions expressed with concepts. He notes that when rationalists have trouble finding any observations to support a principle or to account for our possession of a concept, they do not surrender. Instead, they say the concept or principle is innate. This is a lazy man's method of philosophizing. Locke denies that we have any innate ideas. Babies do not believe that parallel lines never meet. The New World Indians are innocent of the principles touted as universal by the rationalists.
Locke believed that we enter the world as blank slates. As the moving finger of experience writes, we note repetitions of the same type of thing (the fire of a candle and the fire of the hearth) and correlate the occurrence of different types of things (fire and smoke). The rationalists noted that we need general concepts to make these judgments. Thus, they inferred that conceptual knowledge is logically prior to empirical knowledge. Locke tried to simultaneously solve this paradox of inquiry and the problem of universals with a theory of abstraction. According to Locke, we derive the general idea of a cat from a particular idea of a cat by deleting features that are not shared by other cats. The result is a kind of schema that is the meaning of “cat.”
You learn that there is a cherry in front of you when light bounces off the cherry and into your open, healthy eyes. The stimulation causes your brain to form an image of the cherry. Since the image can exist without the cherry, you must infer the cherry. Illusions and hallucinations show that we never directly perceive external objects. This point was also reinforced by the discovery that light has a finite speed. There is a time lag between the cause of perception and its effect. The Andromeda nebula is visible to the naked eye, but its light has taken two million years to reach us. Since the perceiver directly sees only what is present to him now, he perceives the external causes of his perceptual images only indirectly.
Locke’s theory of perception is known as representative realism: the perceiver must infer external reality from his internal stock of intermediate entities. The “way of ideas” gives rise to the problem of the external world. How can I tell how well my representation of a cherry corresponds to the cherry? This is a serious question of physics. When Galileo constructed his telescope, he had to test the fidelity of its images. At a port, he had a delegation of leading citizens look at distant ships through his telescope. When the ships docked, the citizens could verify that the ships had the details conveyed by the telescopic images two hours before. Telescopic images are mildly distorted. Microscopic images were very distorted. Observations based on microscopes were long subject to reservations that were not extended to telescopes.
Physicists were already suspicious of the fidelity of our visual images. Robert Boyle introduced a distinction between primary and secondary qualities. Primary qualities (shape, mass, distance, motion) are really possessed by objects. Secondary qualities (color, warmth, beauty) are produced by a combination of the object’s primary qualities and the psychological makeup of the observer. The cherry is not intrinsically tasty or pretty. The cherry does have an objective chemical constitution that interacts with a tongue to produce the sensation of sweetness. Physicists study primary qualities and so frame their laws in terms of mass and shape, not beauty and sweetness.
Many physicists were (and are) prepared to attribute a massive illusion to human observers. For most of them regard color as a secondary quality. The cherry reflects light in a way that gives rise to a red visual image, but that redness is not a property of the cherry itself. The physicists invidiously contrast this deceptiveness with the fidelity of our idea of the roundness of the cherry. Cherries really are round, and our visual image of the cherry faithfully portrays that roundness.
Locke eventually notices that he has trouble vindicating the distinction between primary and secondary qualities. Doubts about the objectivity of secondary qualities seem to spill over to the primary qualities. I can tell how well an engraving of a hippopotamus corresponds to the hippopotamus by visiting the creature itself. But I cannot compare my ideas of a hippopotamus with the hippopotamus itself. I am confined inside a “veil of ideas.” While in this claustrophobic mind-set, I am ripe for more radical worries. How do I know whether others experience the world as I do? Perhaps my neighbor experiences colors exactly opposite to my colors (where I see green, she sees red, and so on all around the color wheel). This systematic inversion would not be revealed by how my neighbor sorts ripe tomatoes from green tomatoes—or anything else she does or says.
Locke responds to the problem about the external world by stressing the involuntary nature of perceptual ideas. When a daydreamer envisages a tree in the college quadrangle, he can dictate the size and species of the tree. But when he sees the tree, the images are not under his control. The best explanation of the visual pattern is that it is caused by features of the tree out there in the quad. Physicists engage in the same kind of inference when they posit unobserved entities to explain experimental data.
Locke did become progressively uncomfortable with substances. A substance seems like a vague “something I know not what.” Accordingly, he preferred to analyze important concepts in psychological terms—without relying on substances. For instance, Locke analyzed “x is the same person as y” as x having the same memories as y. If a prince awoke with the memories of a cobbler, then the prince would be that cobbler. Locke’s worries about substance were to be dramatically elaborated by subsequent empiricists.
The Irishman George Berkeley (1685–1753) believed that Locke's materialistic empiricism deprives God of any explanatory role. This weakens empiricism's ability to withstand skepticism and inadvertently promotes godlessness. Yet Berkeley was drawn to the intellectual honesty of empiricism. He suspected that Locke had adulterated empiricism with science worship. Locke tried to make empiricism serve as a foundation for the corpuscular physics of his era. Berkeley believed the two were incompatible. True, empiricism is roughly suggested by Newton's physics. But after clarification, Berkeley believed that this same empiricism condemns important elements of Newton's physics: material objects, absolute space, infinitesimals, subvisible matter, atoms, vacuums, etc. The Newtonians' persistent loyalty to these transcendental, mysterious, and downright incoherent objects shows that unscientific Christians have no monopoly on dogmatism.
Berkeley is especially troubled by Locke's theory of abstract ideas. Locke says the general idea of a cat is of an animal that has a color but no particular color, a weight but no particular weight, a gender but is neither male nor female. This indeterminacy is unimaginable. What cannot be conceived cannot exist. There is no abstract idea of a cat. The illustrious Locke is babbling.
There are only particular ideas that are used abstractly. Consider a woman who has moved into an empty house and is deciding where to deploy her furniture. Instead of moving the furniture around, she places rags on the floor to represent respectively her sofa, cabinet, chair, clock, and mirror. She compares various layouts by attending only to the locations of the rags, not their sizes, shapes, or colors. Her abstract reasoning is evident from the manner in which she handles particular rags, not from her possession of entities that are intrinsically general.
Berkeley directs equal suspicion against Locke's "ideas" of material objects. There certainly are sofas and mirrors and cherries. We sit on sofas, gaze into mirrors, and eat cherries. But material objects are supposed to be things that underlie these experiences. According to Locke's distinction between primary and secondary qualities, the cherry looks red but is not red. It tastes sweet but is not sweet in itself. The cherry itself only has the primary qualities of mass, shape, etc. But Berkeley objects that ideas can only resemble other ideas. When Galileo compared telescopic images of the ship to the ship as viewed close up, he was comparing images with images. Arguments for the mind-dependence of color apply with equal force to shape, size, and distance.
Berkeley also opposed the distinction between primary and secondary qualities on the more radical grounds that the idea of a material object is incoherent. Our idea of a cherry is necessarily of something that is sweet and red. The true spirit of empiricism is to go with appearances. Hence, an empiricist should reject material reality as a fabrication of philosophers. To be is to be perceived.
Berkeley could not fail to realize that his idealism seems to fly in the face of common sense. A bishop condemned young Berkeley for his vain pursuit of novelty. Upon reading Berkeley's Principles of Human Knowledge, a physician diagnosed the author as insane. Even one of Berkeley's great allies, the satirist Jonathan Swift, instructed his servants not to open the door for the visiting Berkeley on the grounds that Berkeley believed he could walk through doors.
Since one of the ideas associated with the door is impenetrability, Berkeley had a ready explanation of why he could not pass through a door. He could also explain why all was not a dream by appealing to the orderliness of waking experience.
Berkeley was more heavy-handed when asked why objects continue to exist when we no longer perceive them. His answer was that God continues to perceive objects that no human being observes.
Critics insisted that Berkeley's slogan "To be is to be perceived" must be mistaken because they could imagine an unobserved tree. What can be conceived can possibly exist. So an unperceived tree is possible, and the existence of a tree does not imply the existence of a perceiver.
Berkeley characterized this thought experiment as self-defeating. The person who claims to be imagining an unperceived tree envisages it from a particular angle and in color. In any case, the very act of imagination is a kind of perception.
Is it? In 1827 the pious Swiss painter Leopold Robert offended some viewers of his “Two Girls Disrobing for Their Bath.” He assured them that “I have placed the figures in a completely secluded spot so that they would not possibly encounter any observation from onlookers.” (Fadiman 1985, 471) The embarrassed artist Robert is reaching for a genuine distinction between observing a depiction of disrobing girls and depicting the observation of disrobing girls. A rejoinder to Berkeley can be fashioned in the same syntax: Someone can imagine a tree without imagining someone imagining the tree.
Berkeley presents himself as the defender of common sense. It is his opponents who are postulating unobservable entities that skulk beneath the ordinary world of appearances. Occasionally, Berkeley concedes that even ordinary people believe in material objects: “It is indeed an opinion strangely prevailing amongst men, that houses, mountains, rivers, and in a word all sensible objects have an existence natural or real, distinct from their being perceived by the understanding.” (1986, 1, 4)
If there is no idea corresponding to “material object,” how can people form the belief that mountains are material objects? How can Locke mistakenly believe that there are abstract ideas if he cannot even conceive of them? Berkeley poked fun at Newton’s notion of an infinitesimal. But how could he joke about Newton’s notion if the idea does not exist to be ridiculed?
David Hume accepts the bulk of Berkeley’s criticisms of Locke. As an admirer of Cicero’s mix of skepticism and Stoicism, Hume has no interest in fighting skepticism (especially if reason is to be suited in the armor of theology). Hume is willing to let the chips fall where they may. He accepts empiricism as a distillation of the only resources for justifying beliefs. If a belief cannot be put in accord with the way of ideas, then Hume is not willing to engage in heroic measures to save it. The Stoic empiricist must be detached enough to accept bad news. If the belief is still ambulatory after public decapitation, Hume concludes that its presence never depended on reason.
Hume began his career with comparative optimism. He hoped to do for the mind what Newton did for the physical universe. He increased the empiricist’s resources by characterizing Locke’s opposition to innate ideas as a confused overreaction to rationalist excesses. The empiricist is free to admit that we are born with many ideas. He need only deny that these ideas justify any beliefs. Hume distinguished between concept empiricism (all our ideas come from experience) and judgment empiricism (all justified propositions are justified by experience). The empiricist can further admit that children mature in stages. Thus, their cognitive development need not be characterized as a continuous accumulation of experience. When the child’s ability to process experience suddenly expands, so will his empirical knowledge.
Hume liberalized ethics and aesthetics by allowing more room for emotion and feelings. The empiricist does not need to construe moral justification as the product of observation and experiment. Right and wrong come down to what would please an ideal judge. This mild idealization takes away prejudice and ignorance but is intended to preserve the humanity of the judge. The ideal judge is still animated by emotions—just untainted emotions.
Hume reinforces this emotive approach by contending that there is an is/ought gap. Past ethicists began from premises describing empirical realities and then moved on to claims about what ought to be the case. According to Hume, moral arguments require moral premises. These premises about what ought to be the case cannot be deduced solely from premises about what is the case. Consequently, ethicists cannot answer "Why be moral?" For Hume, the question of moral motivation never arises because morality is a matter of feeling.
He argues that there is also a gap in inductive reasoning. How do we know that the sun will rise tomorrow? True, the sun has risen repeatedly in the past. But that does not entail the sun will rise tomorrow. We can conceive of the earth standing still or the sun exploding. Thus, there is no deductive justification for “The sun will rise tomorrow.” Could there be inductive justification? Only if we are justified in believing that the future will resemble the past. We can imagine this proposition being false, so it cannot be established deductively. But any inductive argument for “The future will resemble the past” will rely on that principle itself and so be circular. For instance, some say the future will resemble the past because past futures have always resembled past pasts. But this does nothing to remove the possibility that there is a discontinuity between past futures and future futures.
Hume realized that the empiricist has trouble inferring causes from correlations. We can observe that when a moving billiard ball strikes a stationary billiard ball, then the ball that was at rest begins to move. But we do not witness the first ball forcing the second ball to move. The common sense notion of causation includes this element of natural necessitation. Thus, the empiricist is not in a position to save the initially plausible idea that people observe some things causing other things.
To make this verdict more palatable, Hume offered a psychological explanation of why we mistakenly believe ourselves to directly perceive the first billiard ball causing the second ball to move. When we have repeatedly seen events of type A followed by events of type B, force of habit leads us to expect a B whenever we see an A. We project this inner sense of necessity onto the events themselves.
One consolation of banishing causality is that it no longer menaces free choice. If natural necessitation is a myth, then one is not compelled to do anything. Hume thinks of freedom negatively, as freedom from duress and obstacles. Thus, we can be free regardless of how thickly scientists weave their web of correlations.
Hume thinks another kind of projection occurs when our perception is interrupted. If I am viewing Edinburgh’s mountain, Arthur’s Seat, and then briefly shut my eyes, Arthur’s Seat looks the same after I open them. This steadiness of appearance leads me to a steady expectation of the same appearance. I project the continuity onto the mountain itself. A more complicated interpolation occurs with things that change in predictable ways. I know how a log will appear as it burns down. This pattern of alteration in my expectations gets projected onto the log itself. “The imagination, when it is set into any train of thinking, is apt to continue even when its object fails it, and, like a galley put in motion by the oars, carries on its course without any new impulse.” (1739, 198) We feign a continuous existence even though we are not justified in these interpolations.
The empiricist who has such trouble establishing the external world might be expected to find a welcome contrast when searching for the self. What could be more accessible to you than your own self! But Hume reports that, as far as his own case is concerned, introspection yields only more ideas, never a glimpse of the self that supposedly has them.
The best empirical sense Hume can make of the self is that it is a bundle of ideas. The bundle theory has the virtue of forestalling some paradoxes about substance. If, as Aristotle believes, substances have priority over properties, then we can ask what the world would be like if Julius Caesar had all the properties of Mark Antony and Mark Antony had all the properties of Julius Caesar (even to the extent of Caesar being called by "Mark Antony" and vice versa). This world would look exactly like ours. The indistinguishability inclines many to deny that we have described a distinct possible world. We have only described our own world in different language. Leibniz solves this paradox with the principle of the identity of indiscernibles: no two substances have exactly the same properties. But Hume's bundle theory solves the paradox at a more radical level by saying that there are no substances of the sort that have priority over properties. If one pictures substances as cushions into which one can plunge pins (properties), then it makes sense to ask what pure substances are. It also makes sense to exchange every pin from one cushion with every pin of another cushion. But if substances are just collections of pins, then these questions about substance cannot arise.
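Leibniz's principle is often given as a formula. Here is a standard modern second-order rendering of the identity of indiscernibles (the symbolism is a textbook convention, not Leibniz's or Hume's own notation):

\forall x \, \forall y \, \bigl[ \forall F \, (Fx \leftrightarrow Fy) \rightarrow x = y \bigr]

Read aloud: if x and y share every property F, then x and y are one and the same thing. Hume's bundle theory sidesteps the Caesar–Antony puzzle more radically, by denying that there is any bare subject over and above the bundled properties to be swapped in the first place.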
But the bundle theory has its own paradoxes. The basic problem is that bundles are too arbitrary to sustain intuitive distinctions about selves. What makes one bundle of ideas me and another bundle you? Not the fact that I thought of the ideas constituting my bundle. For that reintroduces a substance that has priority over its properties. Not the implications between the ideas in my bundle. For ideas are "loose and separate"; the existence of one idea never necessitates or precludes the existence of another idea. Given that there are only ideas, there is no justification of my conviction that there has been only one self associated with my stream of ideas rather than a succession of momentary selves, one for each idea. Hume concedes that he is "involv'd in such a labyrinth, that, I must confess, I neither know how to correct my former opinions, nor how to render them consistent."
In short, there are two principles, which I cannot render consistent; nor is it in my power to renounce either of them, viz. that all our distinct perceptions are distinct existences, and that the mind never perceives any real connexion among distinct existences. Did our perceptions either inhere in something simple and individual, or did the mind perceive some real connexion among them, there would be no difficulty in the case. For my part I must plead the privilege of a sceptic, and confess, that this difficulty is too hard for my understanding.
(1739, 636)