In early September 1989, I was a twenty-year-old student on my first trip outside of North America, jet-lagged, laden with luggage, and aboard a train bound for Budapest, where I was to enroll as an exchange student for a semester at what was then called the Karl Marx University of Economics. I was majoring in East European history and had been studying Eastern Bloc politics, but I’d never before set foot in a country that wasn’t a liberal democracy. The prospect of living on the other side of the Iron Curtain—in a region many had died trying to escape from, where secret police might be listening to your phone calls and opening your mail, and where the nuclear missiles that had cast an apocalyptic shadow over two generations of children were pointed toward our homes—was an intimidating one. That the Hungarian language tapes I had been listening to featured conversations where people cheerily said things like “You don’t have to eat meat every day, once or twice a week’s enough” and “Were there eggs at the store today?” suggested an austere world quite different from the late-1980s United States. I expected to learn a great deal about what it was really like to live in authoritarian and totalitarian societies, in an economy where shortages, not demand, conditioned individual economic behavior, and civil liberties did not exist.
I did not expect, however, to also witness the collapse of communism.
My train originated in Vienna and crossed the frontier between Austria and Hungary on September 10, 1989, at just about the time the reform communist government in Budapest made a decision that would crack the foundations of the Eastern Bloc. It opened its border fences and let East Germany’s people go.
As my train rolled toward Budapest’s cavernous Keleti station, thousands of East Germans were fleeing in the opposite direction, part of an exodus to freedom in the West that grew so large it forced regimes in Moscow, East Berlin, Prague, and Budapest to make a decision: call in the military or capitulate to popular will. For Hungary, the most open, politically relaxed, and consumer-oriented member of the bloc, a nation that resented the ongoing Soviet occupation and longed to become a neutral state like Austria or Yugoslavia, the decision was a relatively easy one. Its citizens were already allowed to travel to the West, and that night its leaders decided to let the sixty thousand East German “tourists” who had flooded the country that summer—in reality families seeking to escape through Hungary’s recently defortified frontier—do so as well. The East Germans had known that if they made it to Austria, they would be allowed to travel on to West Germany, whose constitution already considered them citizens. Taking advantage of the fact that Hungary was one of the few places in the world their regime permitted them to visit, tens of thousands had booked their summer vacations there, but had no intention of ever coming home.
By the end of the day, more than ten thousand East Germans had escaped to the West. Journalists at the main receiving center in Passau, West Germany, reported that nearly all the escapees were under forty years of age, and half of them were in their twenties. Almost nine in ten had professional jobs: doctors, dentists, industrial engineers, and the like. Two-thirds had owned cars, a luxury item in East Germany. They were, put simply, some of the most valuable and irreplaceable citizens of a country that already had such an acute labor shortage it had imported one hundred thousand guest workers from Vietnam and other developing nations just to keep its industries going.1
Over the coming weeks, the hemorrhaging worsened, as citizens long deprived of a say in the affairs of their country voted with their feet. Tens of thousands left their homes in September, even after their government stopped issuing exit visas to Hungary. With the Hungarian escape route closed, they made their way instead to the only foreign country they could still easily travel to, Czechoslovakia. Nearly three thousand sought refuge in West Germany’s embassy in Prague, hoping to be allowed to emigrate to the West. A thousand more risked swimming to Hungary across the broad, dangerous Danube River, which formed a section of that country’s border with Czechoslovakia. Emboldened by the regime’s impotence, large antigovernment demonstrations broke out in Dresden and Leipzig. In mid-October, East Germany’s seventy-eight-year-old leader, Erich Honecker, was removed from power by his own politburo, which made a futile attempt to win back the people’s confidence by pledging to lift travel restrictions; nobody believed they would. In the first week of November alone, more than sixty thousand additional people fled the country. Soldiers were deployed to relieve growing labor shortages in factories, where remaining workers were putting in seventy-hour weeks. Small towns found themselves without doctors or dentists. There were shortages of clothing, fruit, lumber, and spare parts. Days later, the beleaguered regime accepted the inevitable, opening the country’s borders with West Germany and West Berlin. Millions poured out, and by December the Eastern Bloc had effectively ceased to exist.2
I traveled throughout the region in those final months of the Soviet Empire, from Berlin and Warsaw to central Romania and Yugoslavia’s Dalmatian coast. I witnessed the privations and tyranny of the collapsing order, a system whose alleged purpose was the promotion of the common good, but one that yielded instead collective misery.
Romania was the worst. Nicolae Ceaușescu, general secretary of the Romanian Communist Party for a quarter century, had sunk the country deeply into debt to finance absurd projects: a useless canal off the Danube River; a misguided scheme to turn Europe’s greatest wetland, the Danube Delta, into a rice-growing region, which would help trigger a biochemical disaster that killed off much of the life in the Black Sea while failing to produce much rice; the bulldozing of great swaths of historic Bucharest to build the hideous Palace of the People, which at 3.77 million square feet was the second-largest administrative structure on the planet after the Pentagon in Washington, D.C.; a nuclear power plant whose construction was so flawed that many viewed its possible completion as a threat to the safety of southeastern Europe. Having accumulated an enormous foreign debt—the palace alone cost $4 billion—Ceaușescu had suddenly become obsessed with paying it all back as quickly as possible. The resulting policy was Totul Pentru Export—“Everything for Export”—under which everything possible was sold abroad for hard currency, regardless of the needs of Romania’s twenty-three million people. The result was tragic.3
In a country in possession of one of Europe’s largest oil fields, people now had to wear winter hats and jackets in offices and hotel lobbies. A nation of farmers stood in long lines to buy pig snouts and feet, the intervening parts of the animals having been sold abroad. At dusk, Brașov, the city I visited that October, descended into darkness: By law, each room of a home or business could have just one forty-watt bulb to conserve electricity, and police made random checks to ensure compliance, as every watt consumed by Romania’s people reduced the stock of energy that could be exported. The power shut off entirely an hour after sunset, forcing guests at my unheated hotel to feel their way along the stairway walls in total darkness to reach the lobby. Outside, teams of policemen stood on each street corner toting machine guns and accompanied by German shepherds, their silhouettes outlined in the ghostly glare of the only source of light: spotlights bouncing off great rooftop billboards declaring the greatness of Ceaușescu, “Genius of the Carpathians” and, as one ironically put it, founder of the “Epoch of Light.”
In Czechoslovakia and East Germany people occasionally took a second glance at foreigners in their midst; in Ceaușescu’s Romania they stopped in their tracks and stared, unblinking, as if we had dropped from the moon. At the bus stop in front of the station, commuters formed a semicircle around my two American traveling companions and me, everyone gaping in silence. Nobody would return greetings.
Then, after ten minutes had passed, as if responding to a starting gun, everyone rushed inward toward us, faces desperate, wads of greasy, decomposing currency in their hands, pleading urgently to buy almost anything: food, cigarettes, candy. We gave what we had of the latter, handing chocolate bars to grasping hands and refusing payment, until—as suddenly as it began—everyone swooped away, back into their silent semicircle. An electric trolley bus finally dragged itself painfully up the street to our stop, three of its four tires completely flat.
Ceaușescu had built a cult of personality modeled on that of North Korea’s Kim Il Sung and backed by the Securitate, a secret police force believed to have Orwellian omnipresence. Some Securitate officers were said to have been recruited when young from Romania’s massive population of abandoned children, itself a direct product of Ceaușescu’s rule. Possessed by the notion that Romania’s national greatness would grow in direct proportion to its population size, he had banned abortion and the import or sale of any form of birth control, slapped taxes on couples with few children, and instituted surprise examinations to detect illegal terminations of pregnancy. The result was a flood of unwanted babies, one hundred thousand of whom wound up in horrifying conditions in overtaxed, undersupplied state orphanages. I encountered a party of them when we returned to the train station to catch the overnight to Budapest. The station’s cavernous interior was blacked out, and in the shadows hundreds of people lay on the floor, waiting for trains. One boy who looked to be six or seven took notice of our foreignness and struck up a conversation in Romanian, unperturbed and apparently uninterested in the fact that we couldn’t understand him. He had a small flashlight that he kept pointed in his own face, flicking it on and off. He laughed in a sort of maniacal way from time to time for no apparent reason. Everyone else in the station was afraid to speak to us—the invisible presence of informants was reflected in their body language—but eventually a young man sitting beside us asked the boy a few questions and whispered a translation, while pretending to look the other way. “He is going to a school for hopeless children in Moldavia, alone. He is sick. He is actually eleven years old. Yes, he is small, because of his sickness.” The man added that he could be arrested just for talking to us. “I want to leave Romania,” he said, before stating the obvious. “There is no freedom here.”4
The surveillance state was ubiquitous. Classrooms and offices were bugged. All typewriters had to be registered with the police, and photocopying machines were banned outright. One in four people was said to be an informant. Many Romanian professionals believed that even the political jokes whispered from person to person—that human flesh was being sold at restaurants for lack of ordinary supplies, or that Ceaușescu’s wife, Elena, dined on the blood of children—were actually circulated by the Securitate for conspiratorial reasons of its own.5
Like those of the Soviets before him, Ceaușescu’s development policies were based on the forced industrialization of the country, which he believed would foster the creation of a “new socialist man.” There was a twist, however. Ceaușescu and his wife (and heir apparent) had both been raised in small village peasant families and were fundamentally distrustful of cities and urban dwellers. To the greatest extent possible, they wanted to industrialize without urbanizing. Ceaușescu ordered large factories to be built in remote areas of the countryside and assigned workers to man them, but didn’t provide the housing, schools, hospitals, roads, sewer systems, or water systems necessary to support new residents. Instead, members of peasant families commuted from area villages or moved into jerry-built factory dormitories lacking heat, electricity, or flush toilets. To discourage individual independence and character, Ceaușescu bulldozed the centuries-old cottages in the villages along with their garden plots, forcing their inhabitants to be relocated into drab multistory concrete blocks. Seven thousand villages were slated for destruction, though only a few dozen were demolished before the December 1989 uprising ended the regime.6
Two months after my visit the Ceaușescus were dead and the lights were back on. Hungary had left the Warsaw Pact, Czechoslovakia’s regime had surrendered to the Velvet Revolution, and the Berlin Wall had fallen. Through it all, Soviet leader Mikhail Gorbachev never rolled the tanks. Two years later, even his country had ceased to exist.
For a period of forty years in occupied Eastern Europe—and seventy in the Soviet Union—authorities in Moscow had attempted to build a utopian society devoted, in theory, entirely to the common good. Individuals would have no meaningful rights. Virtually everything was to be owned by the state: factories, farms, and power plants; banks, railroads, and airlines; newspapers and television stations; restaurants and bars; shops and supermarkets; children’s summer camps and adult sport teams. The nonprofit sector and civic associations were eliminated and, in the more hard-line countries, weren’t reestablished until after 1989. Any organization that stood between the state and the individual—whether Boy Scouts or Rotary Clubs, foundations or charities, labor unions, private schools or village churches—was disbanded or replaced with state- or party-controlled entities.
Soviet communism was forced upon the countries of Eastern Europe and the Balkans at gunpoint at the end of World War II. On a psychological level, it demanded that every individual abandon his or her own superego—the part of our psychical makeup that acts as our conscience and moral compass and arbiter of social norms in battles with the primal urges of the id—and replace it with that provided by the Communist Party. The system, Czechoslovak dissident Václav Havel attested in a 1978 essay that landed him in jail, “draws everyone into its sphere of power, not so they may realize themselves as human beings, but so they may surrender their identity in favor of the identity of the system, so they may become agents of the system’s general automatism and servants of its self-determined goals.”7
John Kósa, a Harvard Medical School sociologist who experienced the Soviet takeover of his native Hungary, wrote poignantly about how the Soviets sought to accomplish this. Every group—whether country club, fishing party, or street gang—has its own collective superego, a set of values, goals, etiquette, and norms accepted by the group. “Through membership, a person accepts the norms of the group as part of his individual superego; in other words, he ‘internalizes’ them,” Kósa wrote in 1962. “Some people cooperate sincerely and eagerly, some more or less reluctantly. . . . Those who seem to follow the superego most perfectly usually become leaders of the group.”8
In a liberal democracy like the United States, individuals are members of many groups simultaneously, including churches, social classes, ethnic groups, sports fandoms, political parties, and the firms where we are employed. A person might say, “I’m a father, an employee of GM, a Republican, an Orioles fan, a Catholic, a trout fisherman, volunteer fireman, and retired marine of middle-class upbringing,” and balance the degree to which and contexts in which he identifies with each.
In the radical collectivism of the Soviet system, none of these countervailing allegiances were allowed. Every individual was to take on the Soviet communist superego as his or her own, and could face derision, punishment, and even death if he or she faltered for even a moment. The Communist Party claimed the sole right to formulate society’s norms, goals, and values, and permitted no criticism or competition. It is telling that upon securing control of Russia in the aftermath of the 1917 Revolution, the Bolsheviks didn’t just liquidate the tsar’s family, the aristocracy, and Russia’s handful of industrialists; they also ruthlessly dismantled independent professional, occupational, and intellectual associations; labor unions; and even the centuries-old collective farming institutions of the Russian peasants (the mir) that some had thought would be the natural building blocks of a communist society in a country lacking an industrial working class.
In the early days of Soviet communism, even the family was targeted for destruction. Parents were encouraged to let the state raise their children, and anyone could permanently deposit his or her offspring in state-run detskie doma (“children’s houses”), which were in effect poorly staffed and underresourced orphanages. Divorces and abortions were made legal and easy to obtain not because of respect for the autonomy and will of individual adults, but as a means of weakening their loyalty to the family, the only remaining rival to the communist state. State nurseries, kindergartens, schools, and youth organizations were established to co-opt as many family functions as possible, and to increase the state’s ability to shape the next generation.9 New housing—and there was a desperate need for it after the destruction of World War I and the revolution—consisted of communal apartments where unrelated strangers were forced to share crowded kitchens, bathrooms, and living spaces. Children were encouraged to spy on their parents and report any deviant actions or speech to the authorities. Only when the party discovered it lacked the resources and capacity to achieve its vision—many of the children under its care grew to become misfits and criminals—did it concede early child rearing to the family, though it continued to seek to penetrate and condition the Soviet household.10
Another key goal of the regime was to create what it called a “New Socialist Man,” a selfless, collective-minded, austere, hardworking, and obedient “man of the future” who would help build and sustain the communist utopia. Joseph Stalin, who ruled the Soviet empire from 1924 to 1953, thought young people could be molded into such New Men like clay, through propaganda, education, and experience. Embracing the pseudoscientific theories of Trofim Lysenko, the head of the USSR’s Institute of Genetics, Stalin believed that these attributes could be inherited by the conditioned people’s children, permanently transforming the character of the Soviet people. (Questioning Lysenko’s anti-Darwinian theory was against the law.) Children were taught not to question the party’s analysis of reality and to let the party serve as their moral arbiter. “The scheme,” as Kósa would put it, “. . . reduces the individual essence of personality and aims for a society of spirited ants and intelligent robots who work, build, fight, marry, and multiply, but in all their actions miss an essential human trait: the labors of conscience.”11
Soon after taking power in Russia and, later, Eastern Europe, the Soviets had expelled, imprisoned, or executed all their real internal enemies, so the regime had to find scapegoats within the party itself to blame for its mistakes. Under Stalin and Ceaușescu, purges were common. The best course for anyone who was accused was to immediately accept his or her guilt, and perhaps even confess to additional crimes that he or she had not committed. Arguing one’s innocence was tantamount to suggesting that party authorities had erred, which of course was impossible; punishment was much more severe for anyone who did so. When census takers in 1937 diligently and accurately tabulated the population of the USSR, they inadvertently exposed the fact that six to eight million people had vanished—people who had in fact starved to death in the state-engineered famines of 1932 to 1933. The census administrators were charged with sedition, and dozens were shot or imprisoned for having done their assigned task.12
By eliminating individual agency, the Soviet system planted the seeds of its own destruction. Since everything was collectively owned, there were no market mechanisms, no supply and demand to organize economic production and social relations. The government had to plan every decision: what to invest in, what resources to extract and where to send them, what farmers should grow and where the produce would be sold, wages and prices, what science should be funded, what the media would report, what artists could create, where each person would live, and how much food their community would receive. If any one part of a supply chain failed, the effects would ripple through the entire system. Innovation, be it to create a better process or an improved product, was discouraged because it might disrupt the carefully laid details of the ongoing five-year economic plan. Meeting production targets was usually more important than producing quality products, or even those that people actually needed. Since these goals were generally increased with every new five-year plan, managers had a strong incentive to hoard inputs against future quotas, further constricting economic activity. Shortages became endemic, as unforeseen events undermined central planners’ assumptions, throwing supply chains into disarray.
The judiciary was simply an arm of the secret police and thus under the direction of senior party officials, meaning there was no established rule of law; the law was whatever the party cadres determined at any given moment. Soviet Bloc parliaments had no independent existence either, and even the party congresses where policies were supposedly set were in reality theatrical stages, where key decisions that had already been made in private by senior leaders at their hunting lodges were presented.
The most devastating critique of this tyrannical system came from Milovan Djilas. A Yugoslav communist who had risked his life fighting a guerrilla war against the Nazis, he’d met repeatedly with Stalin to coordinate military strategy and postwar policy. He had helped build the Federal People’s Republic of Yugoslavia, served in its politburo, and in the early 1950s was considered to be Josip Broz Tito’s likely successor. Then, in 1953, Djilas began protesting the increasingly undemocratic nature of the Yugoslav system, the increasing opulence in which its leaders were indulging, and their refusal to allow a rival socialist party to stand in elections. He was imprisoned for a year and a half, and again shortly after his release for writing in support of the 1956 Hungarian uprising, in which reform communists tried to assert independence from Moscow. While in jail he wrote a full-throated denunciation of what Soviet and Yugoslav communism had become, The New Class: An Analysis of the Communist System, which was published abroad in 1957 and translated into numerous languages. “Bureaucrats in a non-Communist state have political masters, usually elected, or owners over them, while Communists have neither,” Djilas wrote. “The bureaucrats in a non-Communist state are officials in modern capitalist economy, while the Communists are something different and new . . . a new owning class.” At its core were “the all-powerful exploiters and masters” who had monopolistic control over all material assets, and showered themselves with exorbitant salaries, second homes, access to special stores, spas, and facilities, and, of course, the best housing, furniture, food, and automobiles. Its “methods of rule fill some of the most shameful pages of history,” he charged. “When the new class leaves the historical scene—and this must happen—there will be less sorrow over its passing than there was for any other class before it. Smothering everything except what suited its ego, it has condemned itself to failure and shameful ruin.”13
This system not only killed tens of millions and scarred hundreds of millions more; it also had direct and consequential effects on political developments in the United States, where émigré Russian and East European intellectuals and other witnesses of the horrors of radical collectivism formed a radical counterreaction, championing an extreme form of individualism and denouncing any more moderate response as a slippery slope to slavery.
Perhaps the most militant political philosopher among the Soviet refugees was Ayn Rand, who was born Alisa Zinov’yevna Rosenbaum in Saint Petersburg in the closing years of tsarist Russia. Her parents’ steady upward rise was cut short by the 1917 Revolution, and she watched Red Army soldiers loot her father’s pharmacy at gunpoint, seal the doors, and shut it down. Her private school was closed, and the family maid and nurse were let go. It became difficult to find food and essentials, and “class enemies” were being rounded up. Her best friend, Olga Nabokov, Olga’s soon-to-be-famous older brother Vladimir, and their aristocratic family fled the country, ending Rand’s visits to their Florentine mansion, where the girls had been attended by butlers, governesses, and cooks. Rand’s own family evacuated to the Crimea for three years until it too fell to communist forces. When they returned to Petersburg, they discovered that the family home and business had been expropriated and that all their savings—hoarded in tsarist rubles—had become worthless. Rand enrolled at Petrograd University, where the works of Friedrich Nietzsche were among her favorites, and in a lucky break received a visa to visit relatives in the United States in 1924. She never returned.14
From this traumatic experience, Rand would rise to become the founder and chief proponent of an extreme individualist creed, which she called “Objectivism.” It insisted that people should not be limited by moral concerns and should embrace their rapacious self-interest. Greed was good, and altruism—which she considered the primary motivating impulse of the Bolsheviks—was the stuff of evil. Talented, hardworking people like her parents and grandparents had been forced to sacrifice to support the weak, stupid, and lazy. Even when Stalin took power in the USSR, eliminating invented enemies, murdering millions on a whim, and building a cult of personality unrivaled in history, Rand pronounced the egomaniacal Soviet leader’s regime too “altruistic.” Man’s “highest moral purpose,” she once told broadcaster Mike Wallace when asked to describe her philosophy, “is the achievement of his own happiness . . . each man must live as an end in himself, and follow his own rational self-interest.”15
Her influence on radical individualism in America is hard to overstate. Her novel The Fountainhead (1943), a story of individual genius struggling against collective mediocrity, became a bestseller and was made into a movie with Gary Cooper. Shortly thereafter she became a lifelong friend and mentor to a young economic analyst named Alan Greenspan, who embraced and practiced her philosophy—with ultimately disastrous results for the U.S. economy—as the second-longest-serving chair of the Federal Reserve in history. Her second novel, Atlas Shrugged (1957), championed the Nietzschean Superman against “the mob” and approvingly described a train full of less libertarian-minded passengers being gassed to death in a tunnel accident. The book was panned by critics on the left and right—the National Review’s Whittaker Chambers denounced its gas chamber allusions—but it nonetheless became another bestseller and continues to sell staggering numbers of copies. Shortly after it was published Greenspan defended the book in a letter to the New York Times, calling it “a celebration of life and happiness. . . . Creative individuals and undeviating purpose and rationality achieve joy and fulfillment. Parasites who persistently avoid either purpose or reason perish as they should.” The book was later championed by 2012 Republican vice presidential nominee and future House speaker Paul Ryan—author of a budget that would have slashed social services to pay for tax cuts for billionaires—and embraced by his myriad followers. Rand is also an influence on former Texas congressman Ron Paul, hero of the early twenty-first-century libertarian right, and his son, Rand Paul, U.S. senator from Kentucky. As former BusinessWeek investigative reporter Gary Weiss wrote in the aftermath of the 2008 financial collapse, when her followers stopped meaningful Wall Street reform, “Rand had become the Tom Joad of the right. One could almost hear her saying: ‘Wherever there’s a fight where rich people can get richer, I’ll be there. Wherever there is a regulator beating up on a banker, I’ll be there.’”16
Rand was not alone in her problematic reaction to the despotic collectivism of the Soviet Union. Wichita oil magnate Fred Koch made his initial fortune in the early 1930s upgrading Soviet oil refineries for Joseph Stalin. He was shocked at the conditions he found in the USSR: the food rationing, the shoddy clothes, the constant surveillance. As he put it, it was “a land of hunger, misery, and terror.” Over the next twenty years, many of the Soviet engineers he worked with were executed or deported to Siberia. Even his official minder—who had constantly boasted of Soviet plans to infiltrate the American government—was shot for allegedly plotting against the regime. “What I saw there convinced me that communism was the most evil force the world has ever seen and [that] I must do everything in my power to fight it,” he observed. Fight it he did, writing pamphlets and giving speeches across the country warning of an internal communist plot and denouncing U.S. nonprofits, foreign aid programs, labor unions, civil rights leaders, and the United Nations as agents or expressions of the Soviet plot to bring totalitarian collectivism to America. Koch became one of the founders of the John Birch Society, the extreme anticommunist group that regarded water fluoridation programs as “a tool of Communist dominion” and suspected President Dwight Eisenhower of being a dedicated communist. Government handouts, Koch warned, turned people “into dependent creatures without them knowing it. The end result is a human race as portrayed by Orwell—a human face ground into the Earth by the large boot of benevolent Big Brother.” For Fred Koch, public-mindedness would lead inexorably to a Soviet-style dictatorship of the collective.17
He also inveighed against communism at his own dinner table, attempting to instill his worldview in his children, with mixed success. While sons Frederick and Bill would stay clear of politics, Charles, the second eldest, absorbed his father’s views, establishing a John Birch Society bookstore in Wichita and giving speeches warning of the dangers of communism. Another son, David, would later say his father was “paranoid about communism,” but he still adopted his father’s strong views about the evils of government and primacy of the individual. Together David and Charles would spend decades and hundreds of millions of dollars in an attempt to create a libertarian American movement committed to cutting taxes and regulations and ending government entitlement programs, bailouts, deficit spending, and efforts to confront global warming. They showered resources on the libertarian Cato Institute and the Heritage Foundation and founded the Mercatus Center at George Mason University, which issued reports advocating the privatization of Social Security, cuts to social welfare programs and Medicaid, and inaction on climate change. When the Tea Party movement emerged in 2009, the Koch brothers helped fund and organize its largest rallies via their advocacy group, Americans for Prosperity, events at which their father might have felt at home, with speakers denouncing the UN’s alleged plot to bring a one-world government and President Barack Obama’s supposed aim to institute “socialism.” By 2011 the Kochs had become possibly the most powerful force in bankrolling conservative politics.18
Reacting to a genuinely despotic form of collectivism, Ayn Rand, Fred Koch, and their acolytes built a powerful countervailing force arguing for an extreme form of individualism. But, fixated as they were on the dangers of an overarching state, they failed to see that the alternative system they were advocating had been tried before, and had also led to its own brand of tyranny.
★
Advocates of extreme individualism often fail to recognize that while an unrestrained state can indeed become a tyrannical force, so too can private individuals. Throughout human history, societies with weak states have become despotisms because they have been unable to prevent wealth, power, and privilege from becoming concentrated in the hands of an elite who are then able to use their influence to seize control of the economy and the state itself. Such elites—the nobility of medieval Hungary and Poland, the ruling families of late twentieth-century Guatemala or El Salvador, for example—can establish near-monopolistic control of the market, national wealth, the courts, the army, and the administrative and lawmaking functions of government, shifting burdens of taxation onto others and expenditures toward themselves. Absent a sense of social responsibility—and the most radical individualists like Ayn Rand demand that they have none—the self-interested individuals at the top will remake society to serve themselves. As the most powerful individuals maximize their liberty, everyone else sees theirs disappear. The result is not unlike the late-stage Soviet system, where a small elite diverted all of society’s wealth, power, and privilege to themselves.
Just as unchecked collectivism is destructive of the common good, unchecked individualism eventually leads to a society in which individual freedom is impossible for all but a handful of people at the top. It’s no accident that radical libertarianism—unlike communism—has never been put into practice: it’s almost entirely unworkable. While critics of radical collectivism can point to dozens of cautionary examples in real-world societies—North Korea, Nazi Germany, the Soviet Union—critics of radical individualism are forced to resort to places like early twenty-first-century Somalia, where state authority has vanished and individuals are “free” to pursue their self-interest, resulting in anarchy and terror, the “state of nature” imagined by Thomas Hobbes, in which life is nasty, brutish, and short. But that’s not really fair. The situation in Somalia—or in Albania in 1997, or Sierra Leone and Liberia in the 1990s—didn’t come about as the result of excessive devotion to individual liberty, but rather from the collapse of a regime that had little interest in individual liberty in the first place. For a more plausible real-world example of applied radical individualism, one has to look back in time, to America, or rather to one of the Americas that formed on the eastern seaboard in the colonial period.
Its founders arrived by sea, anchoring off what is now Charleston in 1670. Like the founders of New England, the Delaware Bay colonies, and the Virginia Tidewater country, they were English, but unlike the others they had not been raised in the British Isles, but rather in England’s colonies in Barbados and the Leeward Islands of the Caribbean. Their mission was not to build a Calvinist utopia (like the New Englanders), nor a Quaker one (as in Pennsylvania), nor even to replicate the semifeudal manorial society of the English countryside (as the Tidewater gentry had). Instead, they were transplanting a fully formed social model, an entrepreneurial and capitalistic society emphasizing the liberty of the individual property owner and the vigorous defense of property rights. Their constitution was written by John Locke himself. They championed the republicanism of classical antiquity, of ancient Greece and Rome, where a small elite had the liberty of practicing democracy, while subjugation and slavery were the natural lot of the many. Their new settlement was indeed a slave society, and it would spread its gospel of inequality and hierarchical privilege across the vast region it would colonize: most of what is now South Carolina, Georgia, Florida, Alabama, Mississippi, and Louisiana; East Texas and the southwestern half of Arkansas; the Mississippi shore of Tennessee; and the creeks of southeasternmost North Carolina. Its hostility to collective values and the ethic of equality placed it on a collision course with other regions of what would become the United States, triggering the most horrific war in our history and political and social conflicts that continue to be fought today.
At the time the Deep South was founded, Barbados was the wealthiest, most populous, and most controversial English colony in the Americas. It originated in 1627 as a simple community of peasant farmers eking out an existence growing poor-quality tobacco, cotton, and pigs. But in the 1640s, Dutch traders showed the Barbadians how to plant sugar, and took some of them on tours of plantations in Dutch-controlled northern Brazil, where chain gangs of African slaves did the backbreaking work of tending and harvesting the cane fields in the broiling equatorial sun. Sugar, unlike tobacco, fetched a high price in both England and the Netherlands, and gave those who first mastered its production the means to buy up much of the remaining land on the twenty-one-by-fourteen-mile island. Labor was a problem, however. In Virginia, for example, indentured servants were willing to toil under often brutish masters for years because of the promise of receiving land when their contracts expired. In Barbados, by contrast, there was no more land to be had, so English, Welsh, and Scottish servants avoided the place. Most of the laborers who wound up on the island had come involuntarily: Irish deported by English authorities as prisoners of war, petty criminals, or simply vagrants. When this supply ran out, merchants took to kidnapping children, which resulted in “Barbadozz’d” having the same meaning in the late seventeenth century as “shanghaied” would in the early twentieth. On the island they were abused, cheated, and even murdered by their masters, whose kin and social peers were in charge of law enforcement and the courts. There were frequent servants’ revolts, including one in 1649 that nearly put an end to the planters’ regime. A new labor source had to be found.19
Fortunately for the Barbadian planters, they were by now wealthy enough to be able to purchase and import hundreds of slaves to work the fields—men and women treated not as servants but as property. As such, their slaves had no rights or “liberties” under the slave codes the Barbadians invented, which became the template for English colonies elsewhere. Blacks were “heathenish, brutish and an uncertaine, dangerous kinde of people,” the law stated. They could be beaten, tortured, or worked to death if the planters pleased, and sugar was so profitable that it often made economic sense to do so. “Our English here doth think a Negro child the first day it is born to be worth £5,” a visitor to the island in 1655 reported. “They cost them nothing to bring up, they always go naked. . . . They sell them from one to the other as we do sheep.”20
The great planters became staggeringly rich and earned a reputation throughout the English empire for immorality, arrogance, and excessive displays of wealth. John Dickinson of Pennsylvania, one of the founding fathers, dismissed the Barbadians as “cruel people . . . lords vested with despotic power over myriad vassals and supported in pomp by their slavery.” Other visitors to the island observed that the local gentry lived more sumptuously than their counterparts at home, while commenters in England complained that they were buying up knighthoods and estates. Many indeed returned to England to become absentee landowners and effective lobbyists at Parliament and the Court, influence that allowed them to impose stiff property requirements for voting or holding office. Social life on the island was also highly libertine in this profit-driven society. “Paying but scant attention to religion or other social and cultural institutions, Barbados and the Leeward Islands were notorious for their riotous and abandoned styles of life,” as historian Jack Greene put it. “The entire society was organized for profit.”21
However, by the late 1660s, with no land left for new plantations in Barbados or the Leeward Islands, the planters’ younger sons had to leave home to find their fortunes. Hundreds of them accordingly emigrated to the new colony that English officials and cartographers were already calling “Carolina in the West Indies.”
In what is now Greater Charleston, they replicated the slave society they had left behind, but on the subtropical shores of a continent with plenty of room for expansion. The Carolina colony’s founding constitution was written by Locke and envisioned a multitiered society of gentlemen and serfs, with the former granted “absolute authority over his slaves.” The colony’s charters of government ensured that slaveholders would command the largest landholdings, providing 150 acres for each servant or slave they brought to the new colony. By the mid-1680s, three-quarters of the colony’s plantations were held by Barbadian families, with the highest concentrations in the most valuable areas around Charleston and Goose Creek. On the eve of the American Revolution, per capita wealth in the Charleston area, concentrated in the hands of the planter elite, was a staggering £2,338, more than quadruple that of the Chesapeake colonies and six times that of New York or Philadelphia. Like Barbados, it was a materialistic, grasping, and individualistic place, with the gentry aggressively competing with one another for dominance, and the merchant classes striving to join the elite. “Their whole Lives are one continued Race: in which everyone is endeavoring to distance all behind him; and to overtake or pass by, all before him. . . . Every Tradesman is a Merchant, every Merchant is a Gentleman, and every Gentleman one of the Noblesse,” the South Carolina Gazette observed in 1773. “Between Vanity and Fashion the species is utterly destroyed.”22
★
The society the Barbadians established spread across America’s subtropical lowlands, creating a distinct Deep Southern regional culture that persists to this day. The Deep South in the antebellum period was an extreme individualist’s dream. The purpose of the state was limited to the protection of private property through the provision of courts, circumscribed police functions, and military defense. Individuals at the top of the social pyramid were highly protective of their own liberties, uninterested in those of others, and hostile to the notion of human equality.
Taxes were extremely low, and were designed to spare those most able to pay them. The slave lords who controlled the colonial South Carolina legislature taxed merchants and other townspeople on the actual value of their property and assets, but imposed a flat tax on rural land based on its acreage, not its worth, meaning that a poor farmer with a hundred acres of forest in the hilly frontier paid the same tax as a planter with a highly profitable hundred-acre rice-growing plantation and manor house. Under pressure, legislators undid this disparity after the Revolution, but planters refused to allow assessors to examine their manors to determine the true value of their holdings.23
With scant taxes collected, there were very few public services. Taxpayer-financed public schools had been established across New England in the early seventeenth century; in the colonial Deep South they did not exist at all, and in some states were not established until the 1870s, largely because of the strong individualistic spirit of the white, property-owning heads of household who were the only people allowed to vote there. As Clement Eaton, a mid-twentieth-century historian of the American South, explained, “It was regarded as the duty of the individual and not of the state to see that his children were educated.” Members of the planter oligarchy hired tutors for their children and sent them to private academies, preferably the elite ones back in England, and secured public funding for the colleges their children later attended. Most lower- and middle-class whites received the educations they could afford. As late as 1850 only 10 to 16 percent of white children in the seven states dominated by the Deep South were enrolled in public school, compared to 60 to 90 percent of children overall in the United States. As a consequence, white adult illiteracy in the Deep Southern states stood at 13 to 26 percent (and 53 to 63 percent for all adults); in Massachusetts—the least literate New England state—the overall rate of illiteracy was less than 5 percent, while in New Hampshire it was 1.7 percent. The Deep Southern elite considered an uneducated population to be a good thing, and actively blocked efforts to increase the literacy of poor whites. When settlers in the uplands of South Carolina, which had been settled by independent Scots-Irish farmers and herders, asked state legislators in 1855 to grant their communities the right to tax themselves to provide common schools for their children, the planters who controlled the body refused on the grounds that only they had the power to impose taxes.24
Because state law enforcement, courts, and prisons were so underfunded, people took the law into their own hands, and security and police work were largely carried out by privately organized militias, plantation overseers, and lynch mobs. On the eve of the Civil War, Florida’s white-on-white murder rate reached a shocking 86 per 100,000 inhabitants—higher than the overall rate in the world murder capital in 2009, Honduras (66.8), and nearly twenty times the adult rate in the United States in 2012 (4.7). For decades, Deep Southern voters and legislators resisted spending money to build state penitentiaries because of their preference for violent punishments and their deep-seated suspicion of government power; in 1860, South Carolina was the only state not to have built a single one. Across the Deep South, slaves were overwhelmingly tried in private settings by their masters, who were allowed to punish or even kill them (but had to pay penalties if they maimed or killed those belonging to someone else).25
As Deep Southern governments were forbidden to interfere in the economy or society, economic and social relations were almost entirely unregulated. A glaring exception to this official lack of involvement was the aggressive criminalization of speaking, writing, or disseminating any ideas that questioned the ownership of property in the form of other humans. Under an 1830 Louisiana law it was illegal to say anything from the pulpit, bar, bench, or stage that might produce discontent among the enslaved. In Georgia, Governor Wilson Lumpkin spoke in support of a similar bill, warning legislators that if they did not use force against antislavery writers, “have we not reason to fear that their untiring efforts may succeed in misleading the majority of people . . . and finally produce interference with the constitutional rights of the slaveholders?” The “freedom to own slaves”—an assertion of the individual rights of the property holder—trumped the freedom of speech in the Deep South and even on the floor of the U.S. House of Representatives, where South Carolina congressmen successfully imposed a “gag rule” on antislavery speeches or petitions, which remained in place for six years.26
The oligarchy’s fixation on individual liberty and the sanctity of property was so extreme that it handicapped the Confederacy’s ability to defend itself and its political system. Faced with an existential crisis, the government of the Confederate States of America attempted to take emergency measures to ensure the survival of the newly minted nation, but ran afoul of Deep Southern taboos against empowering government or circumscribing individual property rights. As the Civil War dragged on, food supplies dwindled, including those for soldiers in the field, but many planters refused official requests to grow grain instead of their primary cash crop, cotton. Confederate officers begged planters to loan the army slaves to build critical fortifications, but were rebuffed. Forced in the spring of 1862 to pass a conscription law, the government in Richmond was roundly attacked by Deep Southern governors and even Confederate vice president Alexander Stephens of Georgia, who wrote that the measure made it “a mockery to talk of states rights and . . . constitutional liberty.” (The law was quickly amended to exempt one able-bodied male on any plantation with twenty or more slaves.) In 1863, with a full-scale Union invasion well under way, the CSA empowered the army to seize grain and other goods for the war effort; when an officer presented South Carolina planter James Henry Hammond with an order for a share of his corn, he tore it up, tossed it out the window, and declared that submitting to it meant “branding on my forehead ‘Slave.’” Richmond, others said, should make fewer, not greater demands on the slave lords, as they would provide for the nation’s defense only voluntarily. “The sacrosanctity of slave property in this war,” Assistant Secretary of War John A. Campbell observed, “has operated most injuriously to the Confederacy.”27
Because of their focus on the state as the sole agent of tyranny, libertarians and Ayn Randers would expect a society so committed to small government and private property to have been the epitome of individual freedom and prosperity. This was of course not the case. While a weak state will never become a collectivist dictatorship like the Soviet Union or Nazi Germany, it will also prove no match for another common source of tyranny: the rich and powerful, who in the effort to maximize their own freedom will quickly deprive most everyone else of the same, either by monopolizing the means of advancement or by explicitly redefining who is entitled to the benefits of being a free individual. The result, for the Deep South and so many other societies, past and present, is a despotism of the elite, where an oligarchy is free to do what they will to everyone else.
The Deep Southern oligarchy was passionate about a certain type of individual freedom: their own. Over time they developed an increasingly restrictive definition of which individuals had the capacity to live free. Drawing on the West Indies precedent, they denied those of African descent individual rights on grounds of racial inferiority. They saw no contradiction between their love of individual liberty and rejecting it for others. “That perfect liberty they sigh for,” Abraham Lincoln observed in 1854, was “the liberty of making slaves of other people.” South Carolina’s firebrand senator John C. Calhoun would not have disagreed. “Liberty when forced on a people unfit for it would instead of a blessing be a curse . . . the greatest of all curses,” he wrote, adding that “nothing can be more unfounded and false” than “the prevalent opinion that all men are free and equal.”28
Using this logic, the Deep Southern elite progressively narrowed the set of people considered capable of practicing and enjoying individual freedom. With the weak Deep Southern colonial and state governments entirely under their control—and the U.S. Bill of Rights held to constrain only the federal government—there was no one to stop them from doing so. A person of African descent was from the outset considered incapable of self-rule, and by the early nineteenth century was believed to benefit greatly from slavery, without which, South Carolinian intellectual William Gilmore Simms asserted, he would return “to the condition of cannibal Africa from whom he has been rescued.” By the 1850s, blacks were deemed to be not fully human under the pseudoscientific theory of “polygenesis,” which held that the various races were in fact separately evolved species, and the negro was the most inferior of them all. The “great truth,” Alexander Stephens would assert, is “that slavery, subordination to the superior race, is [the Negro’s] natural and moral condition.”29
This point of view was vigorously and publicly refuted at the time by educated African Americans who had either been born in or escaped to regions where slavery was illegal. The most famous of them, the fugitive Tidewater slave Frederick Douglass, spoke against slavery across Britain, returning to the United States after abolitionists there purchased his freedom from his former master. He then published an abolitionist tract and went on a speaking tour across the northernmost tier of the country, arguing that the immorality of slavery was self-evident. “What, am I to argue that it is wrong to make men brutes, to rob them of their liberty, to work them without wages, to keep them ignorant of their relations to their fellow men, to beat them with sticks, to flay their flesh with the lash, to load their limbs with irons, to hunt them with dogs, to sell them at auction, to sunder their families, to knock out their teeth, to burn their flesh, to starve them into obedience and submission to their masters?” he asked white audiences. “Must I argue that a system thus marked with blood, and stained with pollution is wrong? No . . . I have better employments for my time and strength.” The Constitution was a “glorious liberty document,” he asserted, and thus its very spirit prohibited slavery. Others rejected racial categorization altogether. “We rejoice that we are colored Americans, but deny that we are a ‘different race of people,’ as God has made of one blood all nations that dwell on the face of the earth and hence has no respect of men in regard to color,” a large group of New York City’s residents wrote Abraham Lincoln in 1862 to protest his support for a postwar deportation of black Americans to Africa. Such arguments, moral, racial, or constitutional, were controversial in the northern nations, but they could not even be discussed in the Deep South.30
As the Deep South’s planters accumulated wealth and power during the eighteenth-century rice boom and nineteenth-century cotton one, they continued to maximize their individual liberty by constraining everyone else’s. In most of the colonies in the region, there were stiff property requirements to be allowed to vote, and progressively more stringent ones to stand for various levels of office, under the theory that the people who owned the country should be the ones to govern it. South Carolina maintained this exclusionary system right up to the Civil War. Power was concentrated in the state legislature, but the districts were designed so that the plantation parishes received nearly all the representation, and the increasingly populous backcountry of yeoman Scots-Irish farmers got very few seats. To run for legislature in the early nineteenth century, one had to have five hundred acres and ten slaves or real estate valued at £150 after debts; at a time when a typical American laborer made the equivalent of £20 a year, state senators needed a £300 estate, and governors a £15,000 one. In turn, this planter-dominated legislature—not the public—chose all state and local officers, the state’s two U.S. senators, and delegates to the electoral college to choose the president.31
Facing increasing criticism of slaveholding in the mid-nineteenth century, Deep Southern leaders developed an elaborate defense for human bondage, one that did not limit itself to blacks. James Henry Hammond, a former governor of South Carolina, published a seminal book in 1845 arguing that enslaved laborers were happier, fitter, and better looked after than their free counterparts in Great Britain and the northern states. The latter, Hammond said, were ruthlessly exploited by industrial capitalists who, unlike slave lords, were under no obligation to look out for the interests of people they did not own. Since the working classes of the North weren’t kept in bondage, they also posed a risk to society, ever ready to rise up in support of populist political causes, strikes, or possibly revolution, threatening “Republican institutions.” Enslaved labor, by contrast, was docile and disenfranchised, the “foundation” of what Hammond termed a “well-designed and durable ‘Republican edifice.’” The white poor and laboring classes, Hammond asserted in an argument widely embraced among the Deep Southern oligarchs, should be enslaved for their own good, and would find it “a most glorious act of emancipation.” One wonders, if the Confederacy had avoided the war and gone its own way, whether this theory might have been put into practice.32
But when South Carolinians foolishly fired on the American flag at Fort Sumter, previously sympathetic or ambivalent regions of the country turned on the Deep South and rallied to the Union cause, triggering a war that otherwise might not have taken place. The result was a disaster for the federation, and especially for the Confederacy: hundreds of thousands dead, cities in ruins, the destruction of the Confederate economic system, and a Yankee occupation that sought to “reconstruct” the South in its own image. The Yankees would fail in that effort, and the former Confederacy would restore and defend a racial caste system right into the 1960s. In the Deep South, the oligarchs would return to power, forming a determined voice for laissez-faire economic and social policy that continues to shape American politics today.
As different as they were, the Soviet and antebellum southern systems had several things in common. They were profoundly undemocratic, distinctly illiberal in their lack of interest in the civil rights and liberties of the vast majority of their citizenries, profoundly unfair in their distribution of national wealth, and ultimately unable to compete, economically, militarily, or diplomatically, against less autocratic external rivals. Each society denigrated one or the other of the two vital components of mass human freedom—individual rights in the case of the Soviet Union, the collective good in the case of the Deep South—putting each on an inevitable track to tyranny. Their lesson for humanity is that liberal democracy requires balance between freedom’s competing mandates. For Americans, however, negotiating this balance is complicated by the fact that we are not one nation, but several, each with its own views on where the balance point should lie. The first step in finding a solution to our national deadlock is to understand these divides.