In 1859, two historical events dramatically altered the course of American thought. The first took place on October 16 in Harpers Ferry (in today’s West Virginia), when the abolitionist John Brown led a band of twenty-one men on a raid on a US arsenal, which he hoped would set the stage for a slave revolt. The second event took place in London, England, a little over a month later, with the publication of On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. For Americans living at the time, the two events were yoked together. For many abolitionists, Charles Darwin’s ideas proved that what Brown had martyred himself for was true: that black people were no more animals than whites were, and that both shared a common origin and therefore deserved the same destiny as free people living in a free republic. For proslavery advocates, the two were connected as well, but in a very different way. They used Darwin’s ideas to prove that even in a democracy, “survival of the fittest” was the law of the land, thus explaining why lesser races were inevitably subordinated to higher ones. Over the next decades, Americans would wrestle mightily with the implications of both Brown’s actions and Darwin’s words.
“The Civil War marks an era in the history of the American mind,” wrote the novelist Henry James in 1879. “It introduced into the national consciousness a certain sense of proportion and relation, of the world being a more complicated place. . . . [The] American, in days to come, will be a more critical person than his complacent and confident grandfather. He has eaten of the tree of knowledge.”1 Henry James was one of America’s most perspicacious writers, but on this score he was only partly correct. This may have been true for white Northerners and Southerners. It most certainly was true for a chastened Abraham Lincoln, who in 1863 at Gettysburg, Pennsylvania, lamented that the warring views of the North and South had to be settled in blood on the battlefield rather than reconciled with words.
But African Americans did not need a war to impress on them that the world was a “complicated place.” In fact, the foremost African American intellectual of the nineteenth century, Frederick Douglass, had long tried to complicate whites’ views of themselves and their America, and to show how inconsistent slavery was with those views. “What, to the American slave, is your Fourth of July?” Douglass, a former slave himself, asked a group of abolitionists in Rochester, New York, in 1852. Before they could shout back “freedom!” or “liberty!” he told them what Independence Day looked like from the slave’s perspective: “a day that reveals to him, more than all other days in the year, the gross injustice and cruelty to which he is the constant victim. To him, your celebration is a sham; your boasted liberty an unholy license; your national greatness [a] swelling vanity. . . . [F]or revolting barbarity and shameless hypocrisy, America reigns without a rival.”2
Abraham Lincoln dedicated his presidency to closing this gap between American ideals and reality, free and unfree labor, white rights and wronged blacks, all the while binding up the wounds between Northerners and Southerners. Born just a few hours apart from Charles Darwin on February 12, 1809, the Illinois lawyer turned national statesman knew of the British naturalist but never read his Origin of Species, nor was he conversant with his theory of evolution. Though deeply committed to an Enlightenment republican ideology tinged with the mystical Romanticism of his day, Lincoln nevertheless believed that American democracy, much like Darwin’s natural world, was a process, an unfolding or, as he put it, an “undecided experiment.”3 In 1863, at the gravesite of the Union soldiers who had died in the Battle of Gettysburg, Lincoln maintained that the founders had bequeathed a “proposition that all men are created equal” and that it was the awesome task of his contemporaries on both sides of the Mason-Dixon line to “test” whether a “nation so conceived and so dedicated” could endure. The “unfinished work” of the dead Union soldiers left it to “the living” to ensure that the political foundations of America would have a democratic future, one capable of evolving out of its slaveholding origins.4 Lincoln hoped until his final days that they could answer the proposition in the affirmative, all the while being haunted by the prospect that they would not.
One of the primary aims of intellectual history is to understand the ideas undergirding competing moral viewpoints, like those between abolitionists and proslavery advocates, and Darwin’s American supporters and his detractors. It seeks to comprehend the factors that shape historical actors’ intellectual options, and to see how their moral horizons and habits of thought played decisive roles both internally in their acts of intellectual volition and externally in their actions in the world. What factors inhibited intellectual agreement? What ideas or viewpoints were available to some but not others? What is the balance of power between need, desire, fear, folly, sagacity, and foresight in the making and unmaking of historical actors’ intellectual worldviews? And how were their moral horizons constructed in the first place? The primary responsibility of the intellectual historian is not to issue verdicts on moral decision-making of the past but rather to comprehend how those actors came to their understanding of their world and their role in it. These contests of moral authority and the range of human responses to human problems are on full display in the “entangled bank” of late nineteenth-century American life.5
In 1835, a number of tiny finches living on a volcanic archipelago about six hundred miles west of Ecuador helped forever change the course of modern thought. They were not the only creatures to impress the aspiring Anglican minister and unpaid naturalist Charles Darwin, who had embarked at age twenty-two on a five-year journey with the British survey ship HMS Beagle. Indeed, they were much less visually captivating than the archipelago’s five-hundred-pound tortoises lumbering around and the wacky and adorable blue-footed boobies doing their goofy mating dances. But Darwin noticed something quite remarkable about these otherwise unremarkable little songbirds. The size, beak shape, and claw formation of the finches living on the islands differed somewhat from those of finches found on the mainland, as well as from island to island. Darwin concluded that these variations must be due to the particular food sources available on each of the islands. Pecking the juice out of cacti on one island surely required a beak shape different from the one needed for chewing tiny berries on another. The shape of the claw most advantageous for grasping seedlings was not the same as the best one for grabbing crawly insects. Darwin extrapolated from his findings that the finches must have evolved over time from a common mainland ancestor and developed in such a way as to favor those features that were best suited to survival on their particular island. Chance alone, he concluded, produced the variations. But the finches whose qualities gave them a competitive advantage in that terrain were more likely to survive and reproduce. Darwin settled on the name “natural selection” for this mechanism of evolution and reasoned more broadly that all life on Earth likely started with a single origin and evolved from there. In 1859, he published his findings in On the Origin of Species.
Prior to Darwin, most educated people in America and Europe believed that God created the universe, with each species as it was when he first made the earth. Tortoises were tortoises. Elephants were elephants. Blue-footed boobies were blue-footed boobies. That is how God made them, and so that is how they had always been and were supposed to be. They assumed that humans too were made as the Creator meant them to be: in his own image from the time of Adam and Eve up until their own nineteenth-century age. Even the land on which they lived, the waters on which they sailed, and the sky and stars that provided them a sacred canopy were just as God created them; their universe was stable and immutable.
Darwin’s theory of natural selection helped change all of that, but it took the efforts of his American popularizers, both detractors and advocates, to ensure that it would. Initially, two of the most important commentators on Origin were professors at Harvard University. Zoologist Louis Agassiz, Darwin’s most formidable opponent, and biologist Asa Gray, his most dogged defender, helped make the university a hub of Darwinian contestation in the early years of its American reception. Like the vast majority of scientists in nineteenth-century America, Agassiz and Gray were Christians who believed that science was but another way to access God’s universe. They simply had very different views about how Darwin’s theories could be made to harmonize with their understanding of the natural world and their religious commitments.
Agassiz was known for his breakthrough studies of the fossil record of fish, his glacial theory, and his encyclopedic classification of the animal kingdom. He commanded respect among scientists and enjoyed an enormous popular appeal. He took it upon himself to assure unsettled Americans that Origin was nothing more than a compendium of “marvelous bear, cuckoo, and other stories.”6 With a predominantly idealist and theological approach to biology, he stood fast to his long-held scientific position that every species represented an idea produced by God’s “premeditation, power, wisdom, greatness, prescience, omniscience, providence.” On top of that, every living thing had its place in a rank order descending from the highest to the lowest, each with its own “natural connection” to the “One God, whom man may know, adore, and love.”7 He agreed with Darwin that species changed over time, but not that chance had anything to do with these changes. Rather, he posited that after every geological period, God rethought his designs and made whatever changes, if any, he deemed appropriate. When God rolled out the newer model, it may have had a resemblance to the prior one, but only because both were his creations. Whatever connection they had, then, was to their Maker, not to each other as a result of evolution.
Agassiz’s colleague Gray found Darwin’s natural selection more persuasive than Agassiz’s “mind of God” classifications, and he preferred it to earlier evolutionists’ theories that change resulted from the innate workings of the species themselves and not from their interactions with their environments. As early as 1860, Gray published a review of Origin in the American Journal of Science and Arts, which established him as Darwin’s foremost American ambassador and helped provide scientific respectability to the British naturalist’s radical ideas. Gray, unlike Darwin, remained an orthodox Protestant and therefore worked hard to demonstrate that Darwin’s natural selection could be compatible with religious belief. Though Darwin was grateful for Gray’s advocacy, it made him uneasy because he thought that Gray simply had no empirical evidence for his claims that evolution was a divinely driven process. Gray also broke with Darwin on the origin of humans, pressing for a “special origination” thesis, which held that the rules of natural selection did not apply to them. But despite his own disagreements with Darwin’s model, and despite Darwin’s ambivalence about Gray’s rejiggering of his ideas to square them with his religion and make them more palatable to broader audiences, Gray proved instrumental in encouraging American scientists to devote further scientific investigation to these questions. So too did Agassiz, as his own students at Harvard eventually came to accept the Darwinian framework and conducted their scientific inquiries accordingly. By 1877, a leading paleontologist would announce that “to doubt evolution to-day is to doubt science.”8 Few professional American scientists from that point on would make claims for the special creation of species.
In the popular retelling of Darwinism, Origin was like a powder keg that exploded readers’ religious certainties the moment they lifted its front cover. Nothing could be further from the truth. Lay Christian readers had no reason to concern themselves with Darwinian evolution until their religious leaders told them they needed to, and initially, for most American clergymen and theologians, natural selection was just another theory from just another fallen man just trying to make sense of his fallen world. True, this particular fallen man had described nature as an “entangled bank” of creatures engaged in a “Struggle for Life” with no higher purpose or order governing the “war of nature.”9 But Darwin had affirmed that this struggle is what produces the beauty and harmony of the natural world. Nevertheless, given the yawning abyss between this characterization and the one found in the Bible, many had no trouble swatting it away. This was the case with the Presbyterian theologian at the Princeton seminary, Charles Hodge, who titled his 1874 book What Is Darwinism?. For him, the answer was patently clear: “It is Atheism.”10
For much of the 1860s, 1870s, and even 1880s, liberal theologians had a relatively easy time of it, finding all sorts of ways to refashion Darwin’s ideas as an apologia for their faith, much as Gray had already started to do in 1860. Once a growing number of scientists followed Gray over Agassiz, liberal Protestants found ways to unlock the hidden divine plan within Darwin’s evolutionary scheme. John Fiske, an American historian and popularizer of Darwinism, titled his 1886 study The Idea of God as Affected by Modern Knowledge and his 1899 work Through Nature to God in an effort to show that modern science could improve rather than undermine faith. In a variety of ways, Fiske used familiar religious language to explain Darwinism: “The principle of natural selection is in one respect intensely Calvinistic; it elects the one and damns the ninety and nine.”11 Henry Ward Beecher, the New York Congregational minister and popular lecturer (and brother of antislavery author Harriet Beecher Stowe), exemplified the growing tendency of liberal theologians to embrace evolution as an endorsement of their faith. Like other liberal theologians of the era, Beecher had long been at work dismissing the forbidding God of unreconstructed Calvinism and welcoming a more nurturing, tender helpmate, and his version of Darwinism fit nicely in that project. Beecher was neither a penetrating thinker nor a careful reader of Darwin, but this helped him as he drafted his popular Evolution and Religion (1885), which reassured his largely white, middle-class readers that evolution simply proved their dearly held view of moral progress (despite the fact that Darwin thought it directionless and without any higher purpose). Beecher’s liberal Christian Darwinism assured his late nineteenth-century audiences that fellowship and compassion, not greed and ruthlessness, were the human traits most necessary for racial progress and survival.
As the cases of Hodge, Fiske, and Beecher demonstrate, religious faith is more than a belief—it is an entire cosmology. When that faith is strong, as it was with the vast majority of mid-nineteenth-century American scientists, clergy, and laypeople, then all new ideas, both radical and seemingly inconsequential, are read through the prism of that worldview. Looking at the past through the lens of intellectual history demonstrates time and time again that the human imagination is extraordinarily deft at making new ideas jibe with prior intellectual and moral commitments, and when the two cannot or simply will not be reconciled, it is almost always the prior worldview that wins out. Henry James’s brother, the brilliant psychologist and philosopher William James, understood this well: “in this matter of belief we are all extreme conservatives.” The acceptance of a new idea almost always “preserves the older stock of truths with a minimum of modification, stretching them just enough to make them admit the novelty, but conceiving that in ways as familiar as the case leaves possible. An outrée explanation, violating all our preconceptions, would never pass for a true account of a novelty.”12
It is in those rare cases, though, when an individual has only a tenuous hold on an inherited worldview but has not yet settled into a new one to replace it, that there is a chance—sometimes nothing more than a lightning-flash moment—that space cracks open for something truly outrée to gain footing in her or his psyche. This is the exceptional case when the knowing mind admits that it does not know, and feels a moral obligation to hold true to that condition of uncertainty. And this is precisely what happened with Civil War veteran, lawyer, and popular orator Robert Ingersoll. The son of a Presbyterian minister, Ingersoll drifted from his father’s faith as he witnessed how religion pandered to prejudice and justified injustices like slavery. In one of his most noteworthy speeches, “The Gods” (1872), Ingersoll told his audience something that the Civil War made hard to deny:
Each nation has created a god, and the god has always resembled his creators. He hated and loved what they hated and loved, and he was invariably found on the side of those in power. Each god was intensely patriotic, and detested all nations but his own. All these gods demanded praise, flattery, and worship. Most of them were pleased with sacrifice, and the smell of innocent blood has ever been considered a divine perfume.13
The way clergy often handled new scientific discoveries that suggested that the Bible was a most unreliable account of the natural world also appalled Ingersoll: “Anything they could not dodge, they swallowed, and anything they could not swallow, they dodged.” When he added all of the mischief and misery religious faith had unleashed on the world, Ingersoll maintained that he had no choice but to become a professing “agnostic.”14
Referred to as the “Great Agnostic” by his admirers and “Robert Injuresoul” by his clerical detractors, Ingersoll took a stance that, though prominent, was a rare exception at the time. For the remainder of the century, his agnosticism became more an intellectual force than a new faith. It was one that proved very productive for other liberal intellectuals as they, like Ingersoll and socially progressive Protestants, fought against racism and the death penalty, and for women’s rights. In time, Ingersoll’s view that Darwinism demanded a more rigorous accounting of both the natural world and the moral worlds of the human descendants of apes living in it helped pave the way for change. It enabled a variety of late nineteenth- and early twentieth-century freethinkers and secular humanists to fight to keep the public sphere genuinely public, and not simply an extension of the church’s dominion.
Although Darwinism made inroads into religious and scientific thought, it had a much more immediate and transformative effect in the realm of social ideas. It helped scholars in the emerging disciplines of sociology, anthropology, political economy, and psychology move with greater force and clarity in the direction they were already heading. Darwin helped them as they brought their understanding of social processes in line with the dramatic changes in American life due to industrial capitalism, urbanization, and mass immigration. For some that meant taking an un-Romantic, unafraid view of social development as merely another expression of the law of the jungle. For others, it meant the exact opposite, taking the position that efforts to overcome the struggle and strife of modernization or to reduce their damage were the mark of a truly evolved society. But there were two things both could agree on. First, they concurred that change—not stasis—was the prime motor of the natural world, and thus working with it, rather than trying to prevent it, was the surest route to human progress. And second, they agreed that verisimilitude, not idealism, must guide their post-Darwinian intellectual projects.
Figure 4.2 Evolutionary thought influenced late nineteenth-century American artists in different ways. Albert Bierstadt’s valedictory painting, The Last of the Buffalo (1888, above), shows the era’s increasing preoccupation with evolution and extinction. Whereas Bierstadt’s work is executed in an earlier style of Hudson River School high Romanticism, Thomas Eakins’s The Gross Clinic (1875, right) reflects the growing commitment to brute realism and unadorned verisimilitude, depicting a prosaic (and unelegiac) scene of a surgery at Jefferson Medical College. Corcoran Collection, National Gallery of Art; Philadelphia Museum of Art
Yale University professor of political economy William Graham Sumner emerged in 1872 as the strongest voice in this period of what became known as Social Darwinism (though the ideas had only a tenuous connection to Darwin and tracked more closely to Herbert Spencer, the British sociologist who coined the phrase “survival of the fittest”). Sumner had trained in theology and biblical criticism in Germany and England before doing a brief stint as an Episcopal priest. But as he found the growing scientific positivism of the age more intellectually compelling than religion, he left the clergy to pursue the social sciences at Yale, where he became a hugely popular and influential professor. Sumner helped establish the budding field of sociology by proposing that the rules that govern the natural world also rule society, and that the evolution of humans is no different from the evolution of lower animals—both progress through conflict and contest. As a result, he argued that the study of human affairs, just like the study of organic compounds and orangutans, must be a scientific enterprise.

Sumner’s science of society mixed Protestant ethics, classical economics, and democratic individualism in its advocacy of an unflinching “Darwinian” framework. In his 1883 treatise What Social Classes Owe to Each Other, his answer to the question posed in the title was unapologetic: absolutely nothing. His chapter titles were about as forthcoming as they could possibly be in announcing his endorsement of laissez-faire: “That Poverty Is the Best Policy,” “That It Is Not Wicked to Be Rich,” and “He Who Would Be Well Taken Care of Must Take Care of Himself.” In The Absurd Effort to Make the World Over (1894), Sumner pressed forward with his laissez-faire positions as he tried to persuade Gilded Age social reformers that their desire to legislate social policies and economic structures to keep Americans from tumbling into poverty, illness, and destitution was touching, but pointless. He wrote:
The first instinct of the modern man is to get a law passed to forbid or prevent what, in his wisdom, he disapproves. A thing which is inevitable, however, is one which we cannot control. We have to make up our minds to it, adjust ourselves to it, and sit down to live with it. Its inevitableness may be disputed, in which case we must re-examine it; but if our analysis is correct, when we reach what is inevitable we reach the end, and our regulations must apply to ourselves, not to the social facts.15
Sumner even outpaced Spencer’s worship of the “actual,” “inevitable,” and “reality” by hoisting “facts” over theory with greater zeal than Spencer himself. But together Sumner and Spencer engaged in a transatlantic campaign to persuade moderns that “the law of the survival of the fittest was not made by man,” and so the only thing man can do is interfere with it, and thereby “produce the survival of the unfittest.”16
Evolutionary theories similarly influenced the course of linguistics, ethnology, and archeology as they came together into a new modern social science: anthropology. They helped discredit polygenesis theories, though not necessarily the social inequalities and racist attitudes those theories had been crafted to defend.
The arrival of Darwin’s ideas managed to undermine the antebellum American version of the “science of man.” The most prominent example of this midcentury doctrine can be found in the scientific racism of the physician Josiah Nott and his collaborator, Egyptologist George Gliddon. In a curious departure from the biblical authority of the Christianity they aimed to safeguard, Nott and Gliddon, like Agassiz before them, asserted that there were separate creations for the different races, with Europeans the most superior physically and mentally. In their account, African Americans had started out as an inferior race and would forever remain so, while Native Americans were so compromised as to be doomed to extinction. “Polygenesis” was the name for this imaginary schema, but it proved to be no match for Darwin’s evidence of the shared origin of all humanity. Yet while Darwinism managed to upend polygenesis in America, scientists still persuaded of the superiority of whites and the inferiority of all other racial groups reworked evolutionary claims to justify their positions.
Nevertheless, Social Darwinism did not produce uniform ideas about policies toward other races and ethnicities. For example, many experts on Native Americans employed evolutionary ideas with the humanitarian goal of “helping” Indians through assimilationist policies, while others used evolution to justify establishing a system of reservations to protect them from the encroachments of Euro-Americans. Similarly, advocates of American imperial ventures abroad were confident that Anglo-Saxons would be better stewards of foreign lands while helping to civilize their native populations. And yet the greatest American champion of Social Darwinism—William Graham Sumner himself—was a passionate anti-imperialist, as he believed that imperialism fostered an evolutionary regression toward primitive militarism and authoritarianism.
Figure 4.3 Darwin’s On the Origin of Species (1859) challenged polygenesis, a theory of racial difference delineated in this tableau classifying human races and fauna according to their regional “realms.” Louis Agassiz, the foremost polygenesis theorist in America, believed that “what are called human races, down to their specializations as nations, are distinct primordial forms of the type of man.” Josiah Clark Nott and George Robert Gliddon, Types of Mankind; or, Ethnological Researches, Based upon the Ancient Monuments, Paintings, Sculptures, and Crania of Races, and upon Their Natural, Geographical, Philological and Biblical History (Philadelphia and London, 1854). University of Wisconsin–Madison, Special Collections
Darwinist ideas were also employed to advance some of the most sophisticated economic theories, and indeed some of the wittiest and most mercilessly scathing cultural critiques of the era: the writings of the sociologist and economist Thorstein Veblen. Having trained under Sumner at Yale, Veblen produced lines one would expect from his mentor, such as “The life of man in society, just like the life of other species, is a struggle for existence, and therefore it is a process of selective adaptation.”17 But that is where the similarities end.
Veblen pushed for an even more rigorous and nuanced application of evolutionary theory, dispensing with the tendency to use it to either justify or criticize economic competition. Instead, he employed it to remake modern economists’ “habits of thought.” For Veblen the value of evolutionary theory was that it challenged not the ideas of classical political economy but rather its methods of inquiry. Veblen thus focused on how human beings, as well as their economic institutions and social practices, tracked with the vicissitudes of modern life. His Theory of the Leisure Class (1899) revealed the fruits of this new way of thinking by examining the economic and cultural implications of the propertied classes’ “wastefulness” and “conspicuous consumption” on the well-being of society as a whole. He used evolutionary theory to claim that in the “savage” stage of civilization, every member of the tribe had to work to ensure the survival of the race. But with the coming of the “barbarian” stage (the stage late nineteenth-century America was in), a surplus of resources and labor enabled what he called the “leisure class” to exempt themselves from work and to live off the fruits of others. Rather than assume that the uneven distribution of wealth indicated an uneven distribution of human talent and ingenuity, Veblen argued that it reflected an evolutionary maladaptation, a “retarding influence” upon social betterment.18 And for Veblen, nothing was so retarding as the culture of Victorianism that he took as his enemy.
Queen Victoria of England did not reign over the United States, but the worldview she and many of her British subjects sought to embody significantly influenced American thought and culture from the mid- to late nineteenth century. While both elite and bourgeois Americans were drawn to Victorian notions of order, uplift, and refinement, it was middle-class boosters who made it such a potent cultural force. As industrial capitalism took command, it fueled dramatic technological advancement, geographic expansion, and the creep of markets and commodification into all facets of life. It also brought with it intensified class stratification in America between an upper class on the one end and the laboring classes and the poor at the other, with a middle class in between gaining a distinct group identity and sense of mission. Members of the new middle class looked to what they regarded as the excesses and entitlements of the rich and the decrepitude and rudeness of the poor, deeming both a threat to the well-being of a democratic, civil society. They thus championed a new didactic, prescriptive, Victorian ideal of “culture” as a means of moral improvement and a safeguard of democratic virtue, with themselves as its ideal custodians. In Culture and Anarchy (1869), the British author Matthew Arnold provided them with a vision of culture as “a study of perfection,” the repository of “the best that has been thought and known,” and thus with a means to counteract the deleterious effects of modernization.19
The emergent ideal of culture found its most potent expression in the institutions designed to cultivate and disseminate it. Victorian Americans founded an extraordinary network of newly established universities and colleges, museums, symphonies, theaters, literary societies, and parks during this period, all with the purpose of channeling the competitive and materialist energies of a modernizing America into leisure pursuits that cultivated minds, nurtured bodies, and nourished souls. One of the quintessential Victorian institutions that combined instruction and refinement was the Chautauqua Institution in western New York, founded in 1874 by Methodist minister John Heyl Vincent. What began as an ecumenical institute to train Sunday school teachers turned into an extensive system of “outdoor university” programs throughout the Midwest (and eventually the country), bringing in some of the most eminent intellectual and cultural figures of the period to lecture to large crowds in a festive—even celebratory—environment. Uplift, enlightenment, and moral improvement were the catchwords of Chautauqua.
The same could be said of Frederick Law Olmsted’s landscape designs. The foremost landscape architect of the era, Olmsted designed scenic parks (most notably Central Park in New York City and Prospect Park in Brooklyn), stately university campuses, and bucolic grounds for state capitols, hospitals, and libraries, all in an effort to provide little pastoral retreats from the demands and dangers of city life. He drew from Jeremy Bentham’s theories of control and reform as he imagined that his green spaces could serve a pedagogical and even disciplinary function. Olmsted set out clearly marked paths, neatly manicured lawns, decorative trees planted as objects of beauty and canopies to provide protection from the elements, and marble fountains to offer the soothing sights and sounds of water tumbling down their tiers—all staged to offer open spaces for democratic conviviality, while providing cues for order to train urban dwellers in refinement. Both Vincent’s Chautauquas and Olmsted’s landscapes aspired to make Victorian “recreation” not simply a Gilded Age pastime but also a way to literally re-create the self in terms harmonious with ideals and images of perfection.
Though the work of culture took place in public venues, Victorians thought it should also be instilled in private homes. The cultural logic of gentility was on full display in the Victorian domestic sphere. The home was to be a sanctuary from the heartless world of capitalist competition. Whereas the middle-class husband and father had to make his way in the public world of aggressiveness and acquisitiveness in the emerging marketplace, the role of the wife and mother was to provide a redemptive sphere of pleasing beauty, sentimental comforts, and moral sustenance as a retreat from such ugliness. If the Victorian man must pursue the base needs for the family’s material prosperity, the Victorian woman must seek the higher ideals for the family’s moral progress. The liberal Protestant minister and theologian Horace Bushnell explained the Victorians’ logic: men represented the “force principle” and women the “beauty principle.”20
Material culture and social conventions are sometimes the best register of the intellectual commitments of a period, and this is very much the case with the Victorian parlor. The domestic parlor was the Victorians’ sanctuary of sentimental grace and tenderness, as well as a private stage for members of the family to perform their roles. Everything was scripted. At some prominent focal point in the room sat the oversized, decorative family Bible, which was too heavy and cumbersome to actually read, but looked positively lovely as a display piece. Every furnishing in the room had a specific function: decorative armchairs for men, delicate settees for the ladies, miniature upholstered seating for the children, window seats for the family cat to curl up on, and reception chairs at the entrance for visitors. It was not enough to have walls, ceilings, and windows: carved or fluted molding, heavy baseboards, and ornamented door and window frames to adorn them were a must. Add to that the cavalry of curtains, pillows, carpets, and table runners to ensure protection for all the room’s surfaces and soft comfort for its human inhabitants. In an ideal scenario, large paintings would cover the walls, as in the new museums, and leather-bound books accented by the occasional miniature Greek sculpture reproduction would line the shelves, as in the new urban libraries and reading rooms. In this sacred space, everyone knew their social cues: things one could say and not say, the volume at which to say them, and the bodily comportment for carrying on conversations, all of which differed depending on one’s gender and age. The elaborate vocabulary of furnishings and practices had more important work to do than to be merely outwardly pleasing. It had to stand as the register of the family’s—and especially the wife’s and mother’s—inner cultivation.
Staging all of these paraphernalia and practices could be hard labor for Victorian wives and mothers. But making terrible fun of it was an equally demanding job for the growing chorus of critics who found this whole spectacle of Victorian uplift intellectually obscene and morally indefensible. In 1873, Mark Twain satirized the manners and morals of the whole period in The Gilded Age, a novel cowritten with Charles Dudley Warner. A year later, E. L. Godkin, founder and editor of The Nation, surveyed an American landscape increasingly populated by popular lyceum lectures, newspapers and periodicals with “a kind of smattering of all sorts of knowledge,” large art museums in the cities, and the miniature art museums of families’ private parlors and cited them all as evidence of America’s pretentious and vacuous “Chromo-Civilization.” According to Godkin, this “pseudo-culture” created a “society of ignoramuses” who substituted the accumulation of facts for the assimilation of real knowledge, and the consumption of goods for the cultivation of character.21
Another exasperated critic of Victorianism, especially the habits of mind it cultivated, was the Spanish-born philosopher George Santayana. Together with William James and Josiah Royce, Santayana taught philosophy at Harvard University before he gave up his professorship and moved to Europe to become a full-time writer in 1912. But before he left, he gave his adopted homeland a parting gift in the form of a withering critique of its thought and culture: “The Genteel Tradition in American Philosophy” (1911). Santayana reproached the mental habits of Victorian Americans who viewed culture as a corrective to, rather than as a condition of, daily life. In his view, this “genteel” Victorianism grew out of two sources, and neither of them was good.
The first was a despiritualized Calvinism, which retained its lust for order and severe moralizing, but no longer the “agonized conscience” and “sense of sin” that gave earlier Protestantism its form. The second and more dominant source, however, was Emersonian Transcendentalism, which, in his view, endorsed a subjective view of knowledge and an aggrandized conception of self. Santayana believed that the restless, revolutionary American temperament proved to be the ideal host environment for early nineteenth-century Romanticism, which “felt that Will was deeper than Intellect; it focused everything here and now, and asked all things to show their credentials at the bar of the young self. . . These things are truly American.”22 In Santayana’s genealogy, these two intellectual legacies crossed paths and consolidated their capital in the form of the genteel tradition: a moralistic and evasive intellectual temperament, suffused with light yet casting no shadows, an unthreatened if unthreatening view of the universe and man’s place within it. Santayana’s main complaint with the genteel tradition was not only that it made an easy peace with the universe but also that it had too reverential an attitude toward the culture and ideas of Europe, which neither grew out of nor had any bearing on American material realities. According to Santayana, in a post-Darwinian age of advanced capitalism, it simply made no sense to treat intellectual pursuits as the custodianship of ideas and culture rather than their creation.
Henry David Thoreau greatly admired John Brown and Charles Darwin. Just two weeks after Brown’s raid on Harpers Ferry, Thoreau gave a speech, “A Plea for Captain John Brown,” in which he made clear why Brown was the very best example of a Transcendentalist. Brown was “a transcendentalist above all, a man of ideas and principles,—that was what distinguished him. Not yielding to a whim or transient impulse, but carrying out the purpose of a life.”23 Thoreau had no problem recognizing, with Brown, that this “purpose” was individual freedom in harmony with a higher power. But his steadfast belief in human purpose did not prevent him from cherishing Darwin’s theories, which presented a world with no telos but instead one marked by chance, accident, and randomness. Thoreau was entranced by Darwin’s eye for detail in nature, by his experimentation, and by his characterization of the natural world in terms resonant with Thoreau’s own understanding of life: as something unsettled and ever becoming. Indeed, the American Transcendentalist was so taken with the British naturalist’s attention to the tiniest drop of dew and croak of a baby frog that it inspired him to return to his own recordings of nature in his journals and compare them to Darwin’s.
Thoreau died of tuberculosis in 1862, not living long enough to learn the outcome of the Civil War or to feel any pressure to square his feelings for Brown’s higher purpose with Darwin’s purposeless (though in his view sublime) universe. That task would fall to sensitive Americans in the following decades, who sought to use evolutionary theory as a way to foster a more just democracy.