America, though but a child of yesterday, has already given hopeful proofs of genius … which arouse the best of feelings of man, which call him into action, which substantiate his freedom, and conduct him to happiness …
—Thomas Jefferson, Notes on the State of Virginia (1781)
The whole function of philosophy ought to be to find out what definite difference it will make to you or me, at definite instants of our life, if this world-formula or that world-formula be the true one.
—William James, Pragmatism (1907)
“I think that in no country in the civilized world is less attention paid to philosophy than in the United States.”
When Alexis de Tocqueville published this sentence in his Democracy in America, it was the heyday of Hegel, Auguste Comte, Victor Cousin, and half a dozen leading lights of classical liberalism in both Scotland and England, including Tocqueville’s twenty-five-year-old friend John Stuart Mill. Compared to the heady debates being waged in lecture halls, periodicals, and bookstalls, intellectual life in the young republic must have seemed very placid—and far removed from that great tradition reaching back to Plato and Aristotle.
Yet America was Aristotle’s offspring in more ways than one. Born in the age of Sir Francis Bacon, it grew under the double engines of commerce and slavery in the era of John Locke and achieved independence under the tutelage of the Enlightenment. The most famous saying from America’s most important constitutional architect, James Madison—“If men were angels, no government would be necessary”—is a sentiment torn from the pages of Aristotle and Machiavelli if ever there was one.
Tocqueville himself noted that “Americans are more addicted to practical than to theoretical science,” and that “they mistrust systems” of the usual Platonic-Hegelian pattern. “They adhere closely to facts, and study facts with their own senses.” In fact, Tocqueville concluded that they had invented their own philosophical method without realizing it, one that “accepts tradition only as a means of information and existing facts only as a lesson to be used in doing otherwise, and doing better.” In short, a bias toward progress was built into the American character, along with a love of liberty both political and personal, in which the American “is a subject in all that concerns the duties of citizens” in a self-governing republic, but “is free, and responsible to God alone, for all that concerns himself.”1
On one side, “I know of no country,” Tocqueville wrote, “where the love of money has taken stronger hold on the attention of men,” but on the other, none where reverence for religion was so widely and evenly spread. In America “liberty regards religion as its companion in all its battles and triumphs—as the cradle of its infancy, and the divine source of its claims.” Indeed, “it considers religion as the safeguard of morality, and morality as the best security of law, and the surest pledge of the duration of freedom.”2 Certainly since 1789, the relations between liberty and religion in Tocqueville’s own country had been very different. Yet on nearly every issue that seemed to split Europeans apart—democracy versus aristocracy, religion versus science, commerce versus established tradition—Americans seemed to have struck a harmonious balance.*
For America was born under the shelter of Plato’s legacy as well. From the moment the first Pilgrims alighted at Plymouth Rock, it was the stepchild of the Reformation and became home to a peculiarly fervent evangelical Christianity. The utopian hope of making America a “shining city on the hill” and one nation under God—a historically grounded version of Augustine’s Heavenly City—has supplied as much drive and energy to America’s development as its more practical Aristotelian side.3
But at its core was that practical desire “to tend to results without being bound to means,” said Tocqueville, “and to aim at the substance through the form.” Although the Frenchman noted that Americans had no philosophical school of their own, they certainly had a philosophical method—one that a pair of American thinkers in Gilded Age Boston would sum up as Pragmatism. Often abused, and just as often misunderstood, the Pragmatism of William James and Charles Sanders Peirce was in fact an attempt to capture that peculiar balance that is the heart of what is called American exceptionalism—and that forty years later would be summoned forth to save the rest of Western civilization.
This, indeed, is what makes America so striking. More than any other Western nation, its history has been characterized by a dynamic, if often unstable, balance between these two conflicting legacies. That balance lies at the core of the American “genius” for high-minded purposeful action that Jefferson noted; that caught the attention of Alexis de Tocqueville some fifty years later, when he pronounced America “a land of wonder”; and that has dazzled, puzzled, and exasperated foreign observers ever since.
Balance was of course the leading hallmark of the American Constitution itself—and the goal of all self-governing polities since Aristotle.
It’s worth recalling the sharp contrast between Aristotle’s view of governance and the one embodied by the Platonic tradition. The focus was not on the One (or the Absolute) but on the One balanced by the Few and the Many. There is no Philosopher Ruler or king standing in the image of God’s plenitude of power. He is sent packing to the realm of the perfect Forms—and out of the realm of reality.
What rules instead are concrete constitutional arrangements based on real-life experience, distilled into a code of laws. Politics is above all a real-time partnership, requiring people’s participation more than obedience, one in which the good life is found in living the process, not necessarily in the final result.
It was this view of constitutional “mixed” government that the Founding Fathers inherited from their Florentine and British forebears.4 Achieving that classic balance between the One, the Few, and the Many, however, had always implied the goal of stability. If the mixture was allowed to change in any way, then Polybius’s and Machiavelli’s cycle of decay and doom would kick in—and change was the one thing Western man could not avoid. Even in the age of Enlightenment, men like John Adams sensed that liberty, no matter how desirable, was still fated not to last.5
William James (1842–1910). “There is absolutely nothing new in the pragmatic method,” he wrote. “Socrates was an adept at it. Aristotle used it methodically.”
The genius of the Constitution’s chief author, James Madison, was to conceive of this constitutional balance as dynamic, not static. In Madison’s vision, the legislative, executive, and judicial branches of American government would have their powers separated out, so that instead of cooperating they would be locked in permanent but dynamic competition. No group of cunning and unscrupulous men could seize control of one branch to dominate the others, as the Medici had in Florence (and as, many Americans felt, King George’s ministers had done with Parliament in Britain), because other groups of cunning and unscrupulous men would naturally use the other branches to fight back.
It was, in its way, a breathtaking proposition. But “ambition must be made to counteract ambition,” Madison wrote. In this way, “through supplying opposite and rival interests,” the separation of powers in the federal Constitution would “supply the defect of better motives”—a phrase that landed him in trouble with those, like John Adams, who preferred a more high-minded approach to republican government.6 Indeed, it still provokes some resentment from those who believe that government, even in a free society, has some Platonic duty to cultivate the virtues of its citizens.7 Madison, however, was an admirer of David Hume as well as Aristotle. He understood better than some other Founding Fathers the tenacity of self-interest—and the lack in real life of enough better motives to go around.
In what David Hume had called the “perpetual intestine struggle between Liberty and Authority,” Madison had concluded that the best way to preserve liberty in a modern society like America was to hobble authority through what he called countervailing interests. We have another term for it: gridlock. Through gridlock, Madison predicted, “the private interest of every individual may be a sentinel over the public rights.…” In this way, he wrote, it will be “very difficult, either by intrigue, prejudice, or passion, to hurry … any measures against the public interest.” Except the “he” in this case wasn’t Madison, but Hume himself.8
Madison saw the same healthy gridlock emerging in the looming battles between the federal government and the separate states, as well as between the new states and the old. In fact, in Madison’s mind, the more the better. As settlers began moving beyond the Appalachians to the Mississippi and Great Lakes, and new states like Kentucky and Tennessee and Ohio came into the Union, the growing diversity of sectional interests would work in favor of everyone. “The society will be broken into so many parts, interests, and classes of citizens,” Madison explained in The Federalist Papers, “that the rights of individuals, of the minority, will be in little danger from interested combinations of the majority.”9
With the stroke of their pens, Madison and the Constitutional Convention of 1787 had at last devised a system that overcame the oldest objection of all to free government: that it could work only on a scale the size of Aristotle’s Greek polis, perhaps five thousand people in all. The United States was now designed in such a way that the larger it grew, the freer it became. Democracy was finally able to embrace diversity and conflict instead of shunning them. What Rousseau feared as modern liberty’s greatest weakness, its reliance on self-interest, turned out to be its hidden strength. And Plato’s shame, the relentless pace of temporal change, was revealed to be Aristotle’s glory.
For if the statesmen of the early American Republic anticipated the perpetual clash of interests within the halls of government, they expected just the opposite outside. The ruling American public ethos promised a more coordinated meshing of men’s private desires and their public obligations, in the dynamic balance between the American instinct for individualism and the impulse for voluntary associations. It struck John Stuart Mill’s friend Alexis de Tocqueville when he first visited the United States in 1831. Tocqueville saw America as strange proof that Aristotle’s zoon politikon, left to his or her own devices (he noted that the social freedom of American women was one of the glues of American culture), could create a sense of virtuous community purpose equal to anything Plato wanted to cultivate.
“Americans make associations to give entertainments,” Tocqueville wrote in Democracy in America, “to found seminaries, to build inns, to construct churches, to diffuse books,” to create hospitals, schools, and prisons, and to send missionaries to the tropics. Americans had learned that in a democracy, “individuals are powerless, if they do not learn voluntarily to help one another.”10 However, once they join together, “from that moment they are no longer isolated men, but a power seen from afar, whose actions serve for an example, and whose language is listened to.” One such voluntary national movement, the temperance movement, would bring about the Eighteenth Amendment. Another, the abolition movement, would trigger the Civil War and end slavery in the United States.11
Tocqueville saw that some in Europe like Hegel argued that the way to help the individual overcome a sense of powerlessness in modern society was to increase the powers of government. Tocqueville believed this was a mistake. Such a move would destroy the motivation for volunteerism and the impulse for drawing together for a common purpose. “If men are to remain civilized,” he concluded, “or to become so, the art of associating together must grow and improve in the same ratio in which the equality of conditions is increased.”12
Certainly nothing displayed the power of this “art of associating” better than American business. “Today it is Americans who carry to their shores nine-tenths of the products of Europe,” Tocqueville wrote in 1832. “American ships fill the docks of Le Havre and Liverpool, while the number of English vessels in New York harbor is comparatively small.” Already by 1800 the United States had more business corporations than all of Europe put together. In short, America’s business was business almost before its founding.13
For Tocqueville and other foreign observers, American business embodied an energy that pulled down barriers social as well as geographic, as commerce and industry spread from New England across the Northeast; wealth was “circulating with inconceivable rapidity, and experience shows that it is rare to find two succeeding generations in the full enjoyment of it.” Business was the new republic’s most valuable renewable resource: indeed, “Americans put something heroic into their way of trading” that Tocqueville found fascinating and that he saw as in deep contrast to the slaveholding society of the South, where slavery “enervates the powers of the mind and benumbs the activity of master and slave alike.”14
Tocqueville was struck by another important balance Americans had achieved, between the push for material progress and enlightenment and their evangelical Protestant roots. Never had Tocqueville seen a country in which religion was less apparent in outward forms. Still, it was all-pervasive, “presenting distinct, simple, and general notions to the mind” and culture, including to American Catholics.15 This supplied a sense of moral solidarity borrowed from Luther and Calvin, which a democratic society built on the Enlightenment pattern might otherwise have to do without. “Belief in a God All Powerful wise and good,” Madison wrote in 1821, “is so essential to the moral order of the World and to the happiness of man that arguments that enforce it cannot be drawn from too many sources”—including those, like Newton’s, that found in nature the existence of nature’s God.16 Thomas Jefferson agreed. The author of the Declaration of Independence is the famed progenitor of the idea of separation of Church and State. He was a confirmed Deist, and while he deeply admired the figure of Jesus, he was also deeply suspicious of organized religion as the enemy of liberty. He saw it as another dangerous offshoot of Plato’s baneful influence on the West.17
The answer was complete religious freedom, including the freedom not to believe. “It does me no injury,” he wrote in Notes on the State of Virginia, “for my neighbor to say that there are twenty gods or no gods. It neither picks my pocket nor breaks my leg.” Yet it was also Jefferson who wrote, “Can the liberties of a nation be thought secure when we have removed their only firm basis, a conviction in the minds of people that these liberties are the gift of God?” Later he added, “No nation has ever yet existed or been governed without religion. Nor can be.”18
So from the start the United States found itself with a constitution founded on a permanent clash between the executive, legislative, and judicial branches and between federal power and states’ rights (epitomized by the fierce ideological battles between Jefferson and Alexander Hamilton in the early decades of the Republic), as well as a sectional split between a commercial-minded North and a slave-owning South. It was also a society delicately balanced between individualism and volunteerism, and between a business- and engineer-centered culture of focused practicality and a religious evangelism bordering on mysticism. Clearly, some kind of conceptual glue was going to be needed to hold all these disparate elements together if the new republic was going to survive.
For nearly seventy years, Americans found it in the ideas of Common Sense Realism. It was yet another product of the Scottish Enlightenment, but one with a firmer impress of both Protestant Christianity and Plato’s idealism. In America, its principal spokesman was John Witherspoon, longtime president of Princeton University, signer of the Declaration of Independence, and mentor to an entire generation of American politicians and statesmen, among them James Madison.
The Common Sense philosophy (as it was also called) was a shrewd fusion of an empiricism borrowed from Locke and Aristotle and a moral intuitionism—the idea that the human mind has direct access to truths that the senses cannot reach—that can be traced back to Plato. Thomas Jefferson became a convert to it. Thanks to Witherspoon it became the reigning philosophy in every Protestant seminary of note in America. Its assumptions shaped American education from one-room country schoolhouses to Harvard Yard. It shaped American legal thinking from the moment the United States Supreme Court opened its doors (John Marshall was strongly influenced by it). Indeed, from the Constitutional Convention until the Compromise of 1850, Common Sense Realism was virtually the official creed of the American Republic.19
So what was it? Its founder, Thomas Reid, belonged to the empirical tradition that flowed from Aristotle and John Locke, which held that all knowledge comes from experience. However, Reid made an important alteration to Locke’s theories. The mind, Reid said, is not the entirely blank tabula rasa of Locke’s account but comes equipped with a set of “natural and original judgments” that enables human beings to separate internal sensations arising within their own minds from sensations arising from an outside world.
In other words, we know automatically when we see a pencil in a glass of water that it isn’t really bent even though it appears to be, just as we know someone’s trying to pick our pocket even though he says he’s only helping us on with our coat—and that there’s a difference between good and evil even when certain philosophers say there isn’t.
Reid called this power of judgment “common sense” because it is common to all human beings. Our common sense allows us to distinguish fantasy from reality and truth from falsehood and tell black from white and right from wrong—not by seeing the world as a series of mental images, but by interacting with it through mental acts. This power of judging is what enables us to live more fully in the real world, and the beliefs of common sense “are older and of more authority,” Reid wrote, “than all the arguments of philosophy.” Common sense tells us that the world consists of real objects that exist in real time and space. It also tells us that the more we know about those objects through our experience, the more effectively we can navigate our way through that reality.
More than any previous philosophy, Common Sense Realism had a built-in democratic bias, one reason it was so popular in America. The power of common judgment belongs to everyone, rich or poor, educated or uneducated; indeed, we exercise it every day in hundreds of ways. Of course, ordinary people make mistakes—but so do philosophers. And sometimes ordinary people cannot prove what they believe is true, but many philosophers have the same problem. On some things, however, like the existence of the real world and basic moral truths, they know no proof is needed. These things are, as Reid put it, self-evident, meaning they are “no sooner understood than they are believed” because they “carry the light of truth itself.”
Common sense turned out to be the enemy of more than just moral relativism. Madison’s constitution had ensured that countervailing interests would jam the political doorway, allowing no one to get his agenda through without facing the opposition of others. How to sort it out? The answer was “that degree of judgment which is common,” as Reid put it, “to men with whom we can converse and transact business.” Common sense would enable people to agree on certain fundamental priorities and truths, so that a solution could be worked out, whether the dispute was over a Supreme Court nomination or a tariff or whether America should go to war. In a democratic America where no one was officially in charge, not even philosophers, common sense would have to rule.
But what if it didn’t rule? In 1860, it collapsed on the issue of slavery. Reasonable Americans, men who conducted business in Washington and elsewhere on those common sense principles every day, saw the same disaster looming: the secession of the southern states. All agreed it would be a disaster; many appealed to the compromises struck since 1820 to make the issue go away. Yet no one could do anything to stop it. Some on both sides even welcomed it.
It took Abraham Lincoln to realize that abolitionists like William Lloyd Garrison had seen a higher truth that a common sense man like Stephen Douglas failed to recognize: Slavery wasn’t just a national embarrassment or source of sectional friction. It was a deep and pervasive national sin. Lincoln was a prairie product of the American Enlightenment, a reader of Locke and Mill as well as Jefferson and the Founding Fathers. But Lincoln also believed in an Old Testament God who would make the nation pay a terrible price for selling human beings as chattel. As Saint Paul’s letter to the Hebrews (9:22) says, “Without shedding of blood there is no remission.” Lincoln’s God told him that the sin could be blotted out not by rational argument and compromise, but only by bloodshed.
As president, Lincoln may have begun the Civil War believing that saving the Union and ending slavery were two distinct aims. By the time he issued the Emancipation Proclamation in 1862, he realized they were one and the same. Only then, as he stated in his Gettysburg Address, would America be ready for a “new birth of freedom” and to give a new meaning to the idea of democracy.20 It took the slaughter of Gettysburg, Atlanta, the Wilderness, and Spotsylvania to convince the rest of the nation that he had been right.
It also meant that the old way of framing intellectual and moral debates in America would have to change. The Civil War of 1861–65 shattered the certainties of Common Sense Realism almost as decisively as the Great War would shatter those of Victorian Europe. Of course, as in the European case, the doubts and counterthrusts had begun years before that. Hegel, Kant, and German Idealism had broken through the Common Sense crust as early as the 1840s. Their influence branched out with the American Transcendentalists like Ralph Waldo Emerson and grew into full blossom in the Harvard of Josiah Royce and George Santayana, even as Princeton served as the last bastion of Common Sense Realism under its Scottish-born president, James McCosh. Ernst Mach, Auguste Comte, even Karl Marx: all found American converts in the new post–Civil War industrial age.
It was clear that some reassessment of old principles was desperately in order, and the place it happened was the thriving port city of Baltimore, at the brand-new university founded by a Quaker merchant named Johns Hopkins.
The life of its founder reflected many of the cross-currents of American culture, as well as the vibrancy of its business life. Born in 1795 on a tobacco plantation, Hopkins was the son of Quakers who, in 1807, freed their slaves in accordance with Quaker doctrine and put their own eleven children, including twelve-year-old Johns, to work in the fields in their place.
Five years later Johns Hopkins left to join his uncle’s wholesale grocery business in Baltimore—just in time for the War of 1812, during which he witnessed the British bombardment of Fort McHenry. After the war, Hopkins struck out on his own, founding a dry goods business with his three brothers. Hopkins and Brothers became dealers throughout the region, and Johns Hopkins became a very rich man as well as a director of the fledgling Baltimore and Ohio Railroad.
The Civil War found him—unlike most Marylanders—a firm supporter of Abraham Lincoln and the abolitionist cause: he even gave the Union Army use of the B&O for free. After the war he poured his fortune into various philanthropic projects, including a college in the District of Columbia for African-American women, an orphanage in Baltimore for black children, and a university in the same town that opened its doors three years after his death, in 1876.
Under its first president, Daniel Coit Gilman, the Johns Hopkins University was the first American academy founded to compete with its European counterparts in the breadth of its learning and depth of its cutting-edge research. Gilman recruited scientific giants such as mathematician James Sylvester and physicists Henry Rowland and Lord Kelvin, inventor of the famous temperature scale but also a major researcher in electromagnetism and atomic theory—the same frontiers James Clerk Maxwell and Ludwig Boltzmann were exploring on the other side of the Atlantic. Gilman hired famed philosophers George S. Morris and Stanley Hall; and the first Hopkins PhDs in philosophy would go to such future luminaries as Josiah Royce, Thorstein Veblen, and a rumpled, nearsighted youngster from Vermont named John Dewey.
Gilman always said the goal of a university should be “to make less misery for the poor, less ignorance in the schools, less suffering in the hospitals”—in 1893 he would create the Johns Hopkins Medical School, run by the legendary Canadian physician Sir William Osler—“less fraud in business, and less folly in politics.”21 But his most significant contribution was hiring a shy man with a degree in chemistry from Harvard and a background in physics and astronomy who happened to be working for the U.S. Coast and Geodetic Survey, to teach the Hopkins students logic.
He was Charles Sanders Peirce, and together with another visitor to Hopkins who occasionally dropped in from Harvard to lecture there, William James (Gilman tried desperately to hire James full time, but Harvard refused to let him go), he would create America’s first homegrown philosophical creed—one, more than any other, that worked to translate the culture’s dynamic balance of Plato and Aristotle into a conscious way of understanding the world.
Although Peirce devoted himself to teaching logic, few people in America had better knowledge of the new trends in Western scientific thinking, from Darwin to Maxwell’s thermodynamics and Mach’s Positivism—as well as the mathematics of probability. That knowledge, however, made him uneasy. It was the same unease that had stirred Henry More in the mid-1600s about the triumph of the new mechanical science, on the eve of Newton’s arrival at Cambridge.
In an impersonal world that has finally, definitively banished all final causes from nature and our lives—including, presumably, God—what happens to the human factor? “The world … is evidently not governed by blind law,” Peirce would write, “its leading characteristics are absolutely irreconcilable with that view”—including how we lead our lives in accordance with the basic assumption of free will.22 Yet the triumph of Darwin and science, and the breakup of Common Sense Realism, had seemed to encourage people to think the opposite, and made them feel like minor cogs in the great impersonal machine of Nature.
A man said to the universe:
“Sir, I exist!”
“However,” replied the universe,
“The fact has not created in me
A sense of obligation.”
—Stephen Crane
No wonder others were being drawn to the nihilistic pessimism already surfacing in the Europe of Friedrich Nietzsche (Birth of Tragedy appeared the year after Peirce published his first article in North American Review); in the deterministic materialism of Karl Marx (the first American edition of The Communist Manifesto appeared in 1871); and in the strange supernatural flights of the mystagogue Baron Swedenborg (one of whose devotees was William James’s own father). Peirce would have argued that Hegel belonged in the same camp.
That disillusionment had already appeared in the works of Mark Twain. The author of Tom Sawyer and The Adventures of Huckleberry Finn felt a chronic anxiety about the direction the country was headed after the Civil War, a gloom that broke the surface in late writings like What Is Man?: “There is no God, no universe, no human race, no earthly life, no heaven, no hell. It is all a dream—a grotesque and foolish dream. Nothing exists but you. And you are but … a useless thought, a homeless thought, wandering forlorn among the empty eternities.”23 Discovery of the law of entropy had led the historian Henry Adams, grandson of President John Quincy Adams and great-grandson of President John Adams, to conclude that the human race was stuck on a degenerative course that would leave not only civilization but the planet itself a cold, lifeless lump of matter by 2025.24
It was precisely this demoralizing despair that Peirce was trying to fight against. Surely there had to be a more secure way to find our place in the world, Peirce believed; a way to rebuild the foundations of both thinking about and living in a universe governed by change, uncertainty, and chance. An America that ceased to believe that truth and justice, right and wrong, are as real and important to life as the laws of evolution and physics was doomed.25
In this, Peirce sympathized with Common Sense Realism. Indeed, he considered his new philosophical path only a deeper furrow in the direction already charted by Reid, Witherspoon, and their followers. Still, the old Common Sense school had made two fundamental errors. The first was that it confused certainty with objectivity; the second was that it confused doubt with lack of clarity. In fact, common sense judgments, even our most certain and universal ones, are bound to change with the accumulation of new evidence. “Original beliefs only remain indubitable,” Peirce wrote, as long as they seem applicable to our current conduct. When they aren’t, they can change with a sudden flash of insight.
The classic example is our knowledge that the earth revolves around the sun. At one time, believing the opposite seemed the height of common sense, while those who doubted were deemed either idiots or frauds—as Galileo had found out. Today, however, anyone arguing that the earth is the center of the cosmos would seem the real idiot, as much as the man who insists the earth is flat. Ernest Rutherford and Niels Bohr would even adopt the heliocentric solar system as their “common sense” model of the atom.
How “flat earthers” (or “global warming deniers”) become repellent to our common sense has little to do with objective evidence, Peirce realized. It has everything to do, however, with how we weigh doubt in the balance of outcomes. The more vital the consequences, the less tolerant we are of doubt and the more certain of our judgment. Yet doubt, Peirce pointed out, is the starting point for acquiring all certain knowledge. What Peter Abelard had believed about logic and theology—“Through doubting we come to understand”—Peirce insisted was the basic rule for modern science as well. It is the desire to clear away doubt that leads to genuine empirical investigation and to arriving at the truth. But with it comes a realization that some of “our indubitable beliefs may be proved false.”26
Some beliefs have to remain fixed. We may doubt that the sky is really blue; science teaches us it isn’t. But we cannot doubt that it seems blue. We cannot doubt that we live in the real world, but we can’t be certain all our judgments about it are accurate. What we can know is that they are our judgments and that they have inescapable practical consequences. “The Critical Common Senser,” Peirce wrote, “holds there is less danger to science in believing too little than believing too much. Yet for all that, the consequences of believing too little may be no less than disaster.”27
Peirce was the most original American thinker of the 1800s. Yet until his death in 1914, he remained largely unknown and his most important writings unpublished. It was left to his friend William James to turn his new way of thinking about the relationship between reason and truth into a philosophical guided missile that would light up the skies of America and Europe as well.
He was born in 1842 in New York City; his father, Henry James Sr., was a celebrated religious figure and his brother a distinguished novelist. His own claim to fame was to translate the traditional problems of philosophy into a distinctive American idiom. James charmed professional and amateur readers alike with vivid phrases like the “blooming, buzzing confusion” of reality, the “cash value” of ideas and propositions, and “the bitch-goddess success.” His gift for putting abstruse problems in ordinary language also allowed him to redefine the old battle between rationalism and empiricism—or ideas versus facts—as essentially a clash between two types of human personality, the “tough-minded” and the “tender-minded.”
“Empiricist,” he wrote in 1907, “means your lover of facts in all their crude variety, rationalist means your devotee to abstract and eternal principles.… The individual rationalist is what is called a man of feeling, [while] the individual empiricist prides himself on being hardheaded.”
He drew up their character in two contrasting columns:
| THE TENDER-MINDED | THE TOUGH-MINDED |
| --- | --- |
| Rationalistic (going by principles) | Empiricist (going by facts) |
| Intellectualistic | Sensationalistic |
| Idealistic | Materialistic |
| Optimistic | Pessimistic |
| Religious | Irreligious |
| Freewillist | Fatalistic |
| Monistic | Pluralistic |
| Dogmatical | Skeptical |
The two philosophers James saw as epitomizing the tender-minded versus tough-minded split were probably Hegel and John Stuart Mill.28 Still, with the exception of optimism and pessimism (and here James was thinking of the optimism of Hegelians and Marxists in believing history has a final purpose), it’s clear he was really talking about the perennial split between Platonists and Aristotelians in a distinctly American guise.
Indeed, he might have been standing in front of Raphael’s School of Athens when he wrote that the clash between the tough- and tender-minded “has formed in all ages a part of the philosophical atmosphere.” Each has a low opinion of the other. “The tough think of the tender as sentimentalists and soft-heads”—in other words, as a collection of weak-willed Percy Shelleys or Walt Whitmans. “The tender feel the tough to be unrefined, callous, or brutal”—a nation of John Waynes.
However, James realized that “most of us have a hankering for the good things on both sides of the line.” What was needed instead was a stable blend of the two temperaments.29 What was needed, William James believed, was an intellectual creed tender-minded enough to show us our connection to something outside ourselves, but also tough enough to deal with robust reality, whether that means running a presidential election, analyzing the behavior of atoms, or driving a locomotive across the Great Plains.
James called his creed Pragmatism. He had borrowed the term from Charles S. Peirce,30 who had summed it up in his statement that “to understand a proposition means to know what is the case if it is true.” Truth, in short, emerges from the consequences—what Peirce called the upshot—of what we say or believe.31 Go back to the example of the earth going around the sun, or vice versa. If we wished, the debate could go on endlessly; no matter how indubitable the evidence on paper or in photographs, some margin for doubt, however tiny, could still emerge.
But try launching the space shuttle on the assumption that we live in a geocentric universe and see what happens. “Seeing what happens” is not only a factor in figuring out whether a scientific theory is true or not. For William James, it is the factor, now and forever, in all forms of knowledge. To know something is not to arrive at a final state of mental being or a form of inner consciousness; or even (as Wittgenstein would soon claim) a certain logical form. It is a constant process involving the perceiving self and “the immediate flux of life which furnishes the material of our later reflection.”32
It was a groundbreaking insight—possibly the most important of the twentieth century.33 Scientific truth, Peirce had asserted, was no more a series of breakthroughs to intellectual certainty than predicting the weather. Instead, it is a series of constant laboratory experiments in which we test hypotheses, run the numbers or heat up the test tubes, and see what comes out.34 William James affirmed the same was true of life. We grope and feel our way along step by step, trying out and sticking to what works and dropping what doesn’t. Our knowledge grows in spots, James liked to say.35 It is from this humble process, not from enacting some series of a priori principles or transcendental Diktats à la Hegel, that human progress is made. It was in a profound sense an outgrowth of the old Common Sense Realism—“common sense is less sweeping in its demands than philosophy or mysticism have been wont to be, and can suffer the notion of this world being partly saved and partly lost”—and one perfectly suited, James thought, for America, the Common Sense Nation.
Not that James underestimated its importance to previous thinkers. “Socrates was an adept at it,” he wrote. “Aristotle used it methodically. Locke, Berkeley, and Hume made momentous contributions to truth by its means.” This part of James’s Pragmatism, the tough-minded Aristotelian side of the ledger, was empirical in the sense that facts are the ultimate data of truth, and utilitarian in Mill’s sense of learning by doing.36 At the same time, James was more insistent than any of his “tough-minded” predecessors that truth is a process not just of discovery, but also of intuitive creation.
A modern reader has an easier time grasping what James meant than his Gilded Age contemporaries did, because of our experience of commuter traffic. As commuters we see the map, we know the rules of the road, and we know where we are headed. We even have the last word in technology, GPS, which is supposed to know all the answers and supply us with all the knowledge we need.
But once we’re in our car, we find that the quickest route recommended by GPS is blocked by an accident, while the alternative is temporarily closed for construction. We are forced to try a series of different routes, sometimes relying on our past experience, sometimes asking advice from taxi drivers or passersby, and sometimes acting on pure hunch. We decide to cut through a parking lot here or even head back across town there. But we never turn around and give up or abandon the car to hitch a ride to the airport instead. We accept the consequences of the decisions we make on the move and keep going until we finally reach our goal.
Once we get to our destination—and this is true whether it’s an office building or the truth of a proposition—we can retrace our route on the map and say, “Here’s how I got here.” But as with most of life, James would argue, how we got there was never according to plan. The journey unfolds instead through a series of deliberate choices, based on what we know has worked in the past and what we think will work now.
The anthropologist Clifford Geertz once quipped, “Life is a bowl of strategies.” William James would have agreed, although he put it somewhat differently: “An idea is ‘true’ so long as to believe it is profitable to our lives.”37
Understandably, some critics have branded his Pragmatism a squishy form of moral relativism. Others, alternatively, see it as an ideology peculiarly suited to a nation of engineers and capitalist entrepreneurs.38 But James himself could counter that what we call fixed moral principles are themselves the end products of the same experimental process, including the Ten Commandments. What kind of society could exist, after all, where men were free to covet their neighbors’ wives and cattle or commit murder on a whim? God’s command is one thing, and an important thing. The proof, however, is in the doing.
In fact, James’s Pragmatism is inherently conservative in Edmund Burke’s sense—and Aristotle’s. Every subject of knowledge, Aristotle wrote in Book II of the Ethics, has its own method. We learn to play the flute by studying the best flute players; we learn to understand mathematics by studying with the best mathematicians. The same is true of our ethical and social life, Aristotle argued: by following the best-tested rules of our predecessors, we can expect the best results.39
If others in the past have successfully done what I’m trying to do in a certain way, whether we’re talking about self-government or conducting a business deal or a marriage, then I should be inclined (though not necessarily required) to do it that way, too. James’s Pragmatism doesn’t cut us off from Burke’s definition of society as “the partnership of the living, those to come, and those who came before.” Far from it. It embeds us in it, as active participants in the same perennial search for answers.
Finally, James’s Pragmatism was pluralistic. Philosophers since Plato had assumed there was one way out of the cave: their way. Now, thanks to James, it turned out there might be more than one way at any given moment—particularly since, at almost the same time, modern physics, including quantum physics, was revealing that the cave itself was constantly changing.
Charles Darwin had made process the basic structure of biology. By the time of William James’s death in 1910, Boltzmann, Einstein, and Bohr were showing that an evolutionary process governed the basic structure of the physical world as well. To such a world, James offered a vital message. We need to be open to possibilities, since circumstances might one day prove our assumptions wrong—including circumstances of our own making. The power of the individual to change not only his own life but the world was not diminished but affirmed by the precepts of Pragmatism.
So instead of Plato’s universe of moral absolutes, Pragmatism leaves us with a universe of probable outcomes. “So far as man stands for anything,” James wrote, “and is productive or originative at all, his entire vital function may be said to have to deal with maybes.” Still, in order for this approach to work, we need a destination—just like the commuter in traffic. The goal of James’s Pragmatism was to arrive at a truth that works, not just go with the flow.
And here we come to the other, tender-minded Platonic side of James’s thinking, on the issue of religious belief.
His The Will to Believe (1897) and The Varieties of Religious Experience (1902) drew a sharp distinction between treating religion as a set of truths and seeing it as a set of beliefs that give force and meaning to our lives. Truths are ideas we can verify; false ideas are those we can’t.40 We may not be able to prove God exists; but believing He does can change our lives and actions in profound ways that, from the Pragmatic standpoint, can actually make that belief true. “There are cases,” James said, “where a fact cannot come at all unless a preliminary faith exists in its coming.”
James liked to use the example of a train robbery. A pair of bandits rob an entire train of passengers because the robbers believe they can count on each other if they encounter trouble, while each passenger believes resisting means instant death, even though they outnumber the thugs a hundred to one. The result is a robbery. However, “if we believed that the whole carful would rise at once with us, we should each severally rise, and train-robbing would never even be attempted.”41
Or take the mountain climber who has to leap an immense and deep chasm in order to return home. If he believes he can make it, he can make it. If he hesitates and jumps halfheartedly, he will plunge to his death. Beliefs, James held, are rules for action, including (or especially) Christianity. Religious belief helps us to overcome the maybes and the self-doubts that lurk in the normal interactions of life. It can inspire a mountain climber to superhuman acts or a drug addict to stop taking heroin. It can inspire people to resist a fearsome tyranny or save others from the same threat.
“The world interpreted religiously,” he told a European audience in 1902, “is not the materialistic world over again with an altered expression.” It looks and is different from the one a pure materialist sees and through which he moves, even with the benefit of science. An Aristotelian view allows us to see clear and far. A Platonist belief may help us to see farther.
“St. Paul long ago,” he wrote in Varieties, “made our ancestors familiar with the idea that every soul is virtually sacred. Since Christ died for us all without exception, St. Paul said, we must despair of no one. This belief in the essential sacredness of everyone,” he continued, became the driving force of modern Christianity and its humanitarian offshoots from penal reform to aversion to the death penalty. It is “the tip of the wedge, the cleaver of the darkness.”42
In America, William James created an entirely new school of philosophy, Pragmatism, which spawned followers in logic, sociology, and the other social sciences. In Europe, however, he had an impact unlike that of any other American thinker before or since. Henri Bergson hailed James as a soul mate for his celebration of “the immediate flux of life” as the essential grounding of all knowledge. From opposite ends of the political spectrum, Georges Sorel and Max Weber both relied on his demolition of the notion that knowledge is an essentially contemplative activity.
The Logical Positivists, meanwhile, were quick to claim him as one of their own, for seeing life as well as science as a constant process of experiment out of which a unified picture of reality gradually emerges.†43 This affinity may seem odd, since unlike the men of the Vienna Circle, James had seen the benefits of religion and metaphysical belief in people’s lives. He was even open to the possibility that they might in fact be true, including spiritualism and life after death.
At the same time, however, James shared the Vienna Circle’s detestation of tyranny and fanaticism in either its intellectual or its political form. He was horrified by those like Hegel and Marx who celebrated conflict and violence as necessary steps in human progress, or those like Nietzsche who saw in man’s dark side a source of healing vitality.44 To despise compassion as weakness is not an expression of the love of life, but its opposite.
From the serene perspective of Gilded Age Harvard Yard, James had been inclined to treat these threats somewhat lightheartedly. “To my personal knowledge,” he once wrote, “all Hegelians are not prigs but I somehow feel as if prigs end up, if developed, by becoming Hegelians.” But what would keep their ideas at bay, he believed, and keep them from seizing power was a nation of men and women committed to what experience teaches us works. Such a people can afford to be realistic about the challenges of the present, but also optimistic about the multiple possibilities for the future. Like the train passengers defending themselves against the armed thugs, they will be inclined to say to one another: We can do this, and do it together.
These, then, are my last words to you. Be not afraid of life. Believe that life is worth living, and your belief will help create the fact. The scientific proof that you are right may not be clear before the day of judgement (or some stage of being which that expression may serve to symbolize) is reached. But the faithful fighters of this hour, or the beings that then and there will represent them, may then turn to the faint-hearted, who here decline to go on, with words with which Henry IV greeted the tardy Crillon after a great victory had been gained: “Hang yourself, brave Crillon! We fought at Arques, and you were not there.”45
James spoke these words to the Harvard YMCA in October 1895. The movement he and Charles Peirce had founded was about to be hijacked, pulled and dragged in a direction quite contrary to the one they had in mind, a turn that would disrupt American politics for more than a generation. But forty years later, James’s words could serve as a rallying cry, as the forces of barbarism and darkness descended on Europe, from both the West and the East.
* The exception, of course, was slavery. But even here, Tocqueville noted, the clash was between two competing visions of liberty rather than between opposed political ideologies or conflicting classes.
† His impact on Ludwig Wittgenstein was decisive. Wittgenstein read The Will to Believe and The Varieties of Religious Experience with his usual laser-like insight. They helped him to realize that the world he had described in the Tractatus, as a matrix of logic and scientific propositions, was missing a key component: real-life experience and how language seeks to describe it, however imperfectly. The next great stage in Wittgenstein’s philosophical quest, the analysis of ordinary language, which consumed him until his death in 1951, was at least in part inspired by James.