Democracy and dictatorship are typically discussed as contrasting political and ethical systems. This chapter seeks to shift the terms of the discussion by surveying the history of democracy and dictatorship as contrasting types of information networks. It examines how information flows differently in democracies than in dictatorial systems, and how the invention of new information technologies has helped different kinds of regimes flourish.
Dictatorial information networks are highly centralized.[1] This means two things. First, the center enjoys unlimited authority; hence information tends to flow to the central hub, where the most important decisions are made. In the Roman Empire all roads led to Rome, in Nazi Germany information flowed to Berlin, and in the Soviet Union to Moscow. Sometimes the central government attempts to concentrate all information in its hands and to dictate all decisions by itself, controlling the totality of people’s lives. This totalizing form of dictatorship, practiced by the likes of Hitler and Stalin, is known as totalitarianism. But not every dictatorship is totalitarian. Technical difficulties often prevent dictators from becoming totalitarian. The Roman emperor Nero, for example, didn’t have the means to micromanage the lives of millions of peasants in remote provincial villages. In many dictatorial regimes considerable autonomy is therefore left to individuals, corporations, and communities. However, the dictators always retain the authority to intervene in people’s lives. In Nero’s Rome freedom was not an ideal but a by-product of the government’s inability to exert totalitarian control.
The second characteristic of dictatorial networks is that they assume the center is infallible. They therefore dislike any challenge to the center’s decisions. Soviet propaganda depicted Stalin as an infallible genius, and Roman propaganda treated emperors as divine beings. Even when Stalin or Nero made a patently disastrous decision, there were no robust self-correcting mechanisms in the Soviet Union or the Roman Empire that could expose the mistake and push for a better course of action.
In theory, a highly centralized information network could try to maintain strong self-correcting mechanisms, like independent courts and elected legislative bodies. But if they functioned well, these would challenge the central authority and thereby decentralize the information network. Dictators always see such independent power hubs as threats and seek to neutralize them. This is what happened to the Roman Senate, whose power was whittled away by successive Caesars until it became little more than a rubber stamp for imperial whims.[2] The same fate befell the Soviet judicial system, which never dared resist the will of the Communist Party. Stalinist show trials, as their name indicates, were theater with preordained results.[3]
To summarize, a dictatorship is a centralized information network, lacking strong self-correcting mechanisms. A democracy, in contrast, is a distributed information network, possessing strong self-correcting mechanisms. When we look at a democratic information network, we do see a central hub. The government is the most important executive power in a democracy, and government agencies therefore gather and store vast quantities of information. But there are many additional information channels that connect lots of independent nodes. Legislative bodies, political parties, courts, the press, corporations, local communities, NGOs, and individual citizens communicate freely and directly with one another so that most information never passes through any government agency and many important decisions are made elsewhere. Individuals choose for themselves where to live, where to work, and whom to marry. Corporations make their own choices about where to open a branch, how much to invest in certain projects, and how much to charge for goods and services. Communities decide for themselves about organizing charities, sporting events, and religious festivals. Autonomy is not a consequence of the government’s ineffectiveness; it is the democratic ideal.
Even if it possesses the technology necessary to micromanage people’s lives, a democratic government leaves as much room as possible for people to make their own choices. A common misconception is that in a democracy everything is decided by majority vote. In fact, in a democracy as little as possible is decided centrally, and only the relatively few decisions that must be made centrally should reflect the will of the majority. In a democracy, if 99 percent of people want to dress in a particular way and worship a particular god, the remaining 1 percent should still be free to dress and worship differently.
Of course, if the central government doesn’t intervene at all in people’s lives, and doesn’t provide them with basic services like security, it isn’t a democracy; it is anarchy. In all democracies the center raises taxes and maintains an army, and in most modern democracies it also provides at least some level of health care, education, and welfare. But any intervention in people’s lives demands an explanation. In the absence of a compelling reason, a democratic government should leave people to their own devices.
Another crucial characteristic of democracies is that they assume everyone is fallible. Therefore, while democracies give the center the authority to make some vital decisions, they also maintain strong mechanisms that can challenge the central authority. To paraphrase James Madison, since humans are fallible, a government is necessary, but since government too is fallible, it needs mechanisms to expose and correct its errors, such as holding regular elections, protecting the freedom of the press, and separating the executive, legislative, and judicial branches of government.
Consequently, while a dictatorship is about one central information hub dictating everything, a democracy is an ongoing conversation between diverse information nodes. The nodes often influence one another, but in most matters they are not obliged to reach a consensus. Individuals, corporations, and communities can continue to think and behave in different ways. There are, of course, cases in which everyone must behave the same and diversity cannot be tolerated. For example, when in 2002–3 Americans disagreed about whether to invade Iraq, everyone ultimately had to abide by a single decision. It was unacceptable that some Americans would maintain a private peace with Saddam Hussein while others declared war. Whether good or bad, the decision to invade Iraq committed every American citizen. The same is true of initiating national infrastructure projects or defining criminal offenses. No country can function well if every person is allowed to lay a separate rail network or to have their own definition of murder.
In order to make decisions on such collective matters, a countrywide public conversation must first be held, following which the people’s representatives—elected in free and fair elections—make a choice. But even after that choice has been made, it should remain open to reexamination and correction. Even if the network cannot undo its previous choices, it can at least elect a different government next time and change course.
The definition of democracy as a distributed information network with strong self-correcting mechanisms stands in sharp contrast to a common misconception that equates democracy only with elections. Elections are a central part of the democratic tool kit, but they are not democracy. In the absence of additional self-correcting mechanisms, elections can easily be rigged. And even completely free and fair elections do not, by themselves, guarantee democracy. For democracy is not the same thing as majority dictatorship.
Suppose that in a free and fair election 51 percent of voters choose a government that subsequently sends 1 percent of voters to be exterminated in death camps, because they belong to some hated religious minority. Is this democratic? Clearly it is not. The problem isn’t that genocide demands a special majority of more than 51 percent. It’s not that if the government gets the backing of 60 percent, 75 percent, or even 99 percent of voters, then its death camps finally become democratic. A democracy is not a system in which a majority of any size can decide to exterminate unpopular minorities; it is a system in which there are clear limits on the power of the center.
Suppose 51 percent of voters choose a government that then takes away the voting rights of the other 49 percent of voters, or perhaps of just 1 percent of them. Is that democratic? Again the answer is no, and it has nothing to do with the numbers. Disenfranchising political rivals dismantles one of the vital self-correcting mechanisms of democratic networks. Elections are a mechanism for the network to say, “We made a mistake; let’s try something else.” But if the center can disenfranchise people at will, that self-correcting mechanism is neutered.
These two examples may sound outlandish, but they are unfortunately within the realm of the possible. Hitler began sending Jews and communists to concentration camps within months of rising to power through democratic elections, and in the United States numerous democratically elected governments have disenfranchised African Americans, Native Americans, and other oppressed populations. Of course, most assaults on democracy are more subtle. The careers of strongmen like Vladimir Putin, Viktor Orbán, Recep Tayyip Erdoğan, Rodrigo Duterte, Jair Bolsonaro, and Benjamin Netanyahu demonstrate how a leader who uses democracy to rise to power can then use his power to undermine democracy. As Erdoğan once put it, “Democracy is like a tram. You ride it until you arrive at your destination, then you step off.”[4]
The most common method strongmen use to undermine democracy is to attack its self-correcting mechanisms one by one, often beginning with the courts and the media. The typical strongman either deprives courts of their powers or packs them with his loyalists and seeks to close all independent media outlets while building his own omnipresent propaganda machine.[5]
Once the courts are no longer able to check the government’s power by legal means, and once the media obediently parrots the government line, all other institutions or persons who dare oppose the government can be smeared and persecuted as traitors, criminals, or foreign agents. Academic institutions, municipalities, NGOs, and private businesses are either dismantled or brought under government control. At that stage, the government can also rig the elections at will, for example by jailing popular opposition leaders, preventing opposition parties from participating in the elections, gerrymandering election districts, or disenfranchising voters. Appeals against these antidemocratic measures are dismissed by the government’s handpicked judges. Journalists and academics who criticize these measures are fired. The remaining media outlets, academic institutions, and judicial authorities all praise these measures as necessary steps to protect the nation and its allegedly democratic system from traitors and foreign agents. The strongmen don’t usually take the final step of abolishing the elections outright. Instead, they keep them as a ritual that serves to provide legitimacy and maintain a democratic facade, as happens, for example, in Putin’s Russia.
Supporters of strongmen often don’t see this process as antidemocratic. They are genuinely baffled when told that electoral victory doesn’t grant them unlimited power. Instead, they see any check on the power of an elected government as undemocratic. However, democracy doesn’t mean majority rule; rather, it means freedom and equality for all. Democracy is a system that guarantees everyone certain liberties, which even the majority cannot take away.
Nobody disputes that in a democracy the representatives of the majority are entitled to form the government and to advance their preferred policies in myriad fields. If the majority wants war, the country goes to war. If the majority wants peace, the country makes peace. If the majority wants to raise taxes, taxes are raised. If the majority wants to lower taxes, taxes are lowered. Major decisions about foreign affairs, defense, education, taxation, and numerous other policies are all in the hands of the majority.
But in a democracy, there are two baskets of rights that are protected from the majority’s grasp. One contains human rights. Even if 99 percent of the population wants to exterminate the remaining 1 percent, in a democracy this is forbidden, because it violates the most basic human right—the right to life. The basket of human rights contains many additional rights, such as the right to work, the right to privacy, freedom of movement, and freedom of religion. These rights enshrine the decentralized nature of democracy, making sure that as long as people don’t harm anyone, they can live their lives as they see fit.
The second crucial basket of rights contains civil rights. These are the basic rules of the democratic game, which enshrine its self-correcting mechanisms. An obvious example is the right to vote. If the majority were permitted to disenfranchise the minority, then democracy would be over after a single election. Other civil rights include freedom of the press, academic freedom, and freedom of assembly, which enable independent media outlets, universities, and opposition movements to challenge the government. These are the key rights that strongmen seek to violate. While sometimes it is necessary to make changes to a country’s self-correcting mechanisms—for example, by expanding the franchise, regulating the media, or reforming the judicial system—such changes should be made only on the basis of a broad consensus including both majority and minority groups. If a small majority could unilaterally change civil rights, it could easily rig elections and get rid of all other checks on its power.
An important thing to note about both human rights and civil rights is that they don’t just limit the power of the central government; they also impose on it many active duties. It is not enough for a democratic government to abstain from infringing on human and civil rights. It must take actions to ensure them. For example, the right to life imposes on a democratic government the duty to protect citizens from criminal violence. If a government doesn’t kill anyone, but also makes no effort to protect citizens from murder, this is anarchy rather than democracy.
Of course, in every democracy, there are lengthy discussions concerning the exact limits of human and civil rights. Even the right to life has limits. There are democratic countries like the United States that impose the death penalty, thereby denying some criminals the right to life. And every country allows itself the prerogative to declare war, thereby sending people to kill and be killed. So where exactly does the right to life end? There are also complicated and ongoing discussions concerning the list of rights that should be included in the two baskets. Who determined that freedom of religion is a basic human right? Should internet access be defined as a civil right? And what about animal rights? Or the rights of AI?
We cannot resolve these matters here. Both human and civil rights are intersubjective conventions that humans invent rather than discover, and they are determined by historical contingencies rather than universal reason. Different democracies can adopt somewhat different lists of rights. At least from the viewpoint of information flows, what defines a system as “democratic” is only that its center doesn’t have unlimited authority and that the system possesses robust mechanisms to correct the center’s mistakes. Democratic networks assume that everyone is fallible, and that includes even the winners of elections and the majority of voters.
It is particularly crucial to remember that elections are not a method for discovering truth. Rather, they are a method for maintaining order by adjudicating between people’s conflicting desires. Elections establish what the majority of people desire, rather than what the truth is. And people often desire the truth to be other than what it is. Democratic networks therefore maintain some self-correcting mechanisms to protect the truth even from the will of the majority.
For example, during the 2002–3 debate over whether to invade Iraq in the wake of the September 11 attacks, the Bush administration claimed that Saddam Hussein was developing weapons of mass destruction and that the Iraqi people were eager to establish an American-style democracy and would welcome the Americans as liberators. These arguments carried the day. In October 2002 the elected representatives of the American people in Congress voted overwhelmingly to authorize the invasion. The resolution passed with a 296 to 133 majority (69 percent) in the House of Representatives and a 77 to 23 majority (77 percent) in the Senate.[6] In the early days of the war in March 2003, polls found that the elected representatives were indeed in tune with the mass of voters and that 72 percent of American citizens supported the invasion.[7] The will of the American people was clear.
But the truth turned out to be different from what the government said and what the majority believed. As the war progressed, it became evident that Iraq had no weapons of mass destruction and that many Iraqis had no wish to be “liberated” by the Americans or to establish a democracy. By August 2004 another poll found that 67 percent of Americans believed that the invasion was based on incorrect assumptions. As the years went by, most Americans acknowledged that the decision to invade was a catastrophic mistake.[8]
In a democracy the majority has every right to make momentous decisions like starting wars, and that includes the right to make momentous errors. But the majority should at least acknowledge its own fallibility and protect the freedom of minorities to hold and publicize unpopular views, which might turn out to be correct.
As another example, consider the case of a charismatic leader who is accused of corruption. His loyal supporters obviously wish these accusations to be false. But even if most voters support the leader, their desires should not prevent judges from investigating the accusations and getting to the truth. As with the justice system, so also with science. A majority of voters might deny the reality of climate change, but they should not have the power to dictate scientific truth or to prevent scientists from exploring and publishing inconvenient facts. Unlike parliaments, departments of environmental studies should not reflect the will of the majority.
Of course, when it comes to making policy decisions about climate change, in a democracy the will of the voters should reign supreme. Acknowledging the reality of climate change does not tell us what to do about it. We always have options, and choosing between them is a question of desire, not truth. One option might be to immediately cut greenhouse gas emissions, even at the cost of slowing economic growth. This means incurring some difficulties today but saving people in 2050 from more severe hardship, saving the island nation of Kiribati from drowning, and saving the polar bears from extinction. A second option might be to continue with business as usual. This means having an easier life today, but making life harder for the next generation, flooding Kiribati, and driving the polar bears—as well as numerous other species—to extinction. Choosing between these two options is a question of desire, and should therefore be done by all voters rather than by a limited group of experts.
But the one option that should not be on offer in elections is hiding or distorting the truth. If the majority prefers to consume whatever amount of fossil fuels it wishes with no regard to future generations or other environmental considerations, it is entitled to vote for that. But the majority should not be entitled to pass a law stating that climate change is a hoax and that all professors who believe in climate change must be fired from their academic posts. We can choose what we want, but we shouldn’t deny the true meaning of our choice.
Naturally, academic institutions, the media, and the judiciary may themselves be compromised by corruption, bias, or error. But subordinating them to a governmental Ministry of Truth is likely to make things worse. The government is already the most powerful institution in developed societies, and it often has the greatest interest in distorting or hiding inconvenient facts. Allowing the government to supervise the search for truth is like appointing the fox to guard the chicken coop.
To discover the truth, it is better to rely on two other methods. First, academic institutions, the media, and the judiciary have their own internal self-correcting mechanisms for fighting corruption, correcting bias, and exposing error. In academia, peer-reviewed publication is a far better check on error than supervision by government officials, because academic promotion often depends on uncovering past mistakes and discovering unknown facts. In the media, free competition means that if one outlet decides not to break a scandal, perhaps for self-serving reasons, others are likely to jump at the scoop. In the judiciary, a judge who takes bribes may be tried and punished just like any other citizen.
Second, the existence of several independent institutions that seek the truth in different ways allows these institutions to check and correct one another. For example, if powerful corporations manage to break down the peer-review mechanism by bribing a sufficiently large number of scientists, investigative journalists and courts can expose and punish the perpetrators. If the media or the courts are afflicted by systematic racist biases, it is the job of sociologists, historians, and philosophers to expose those biases. None of these mechanisms are completely fail-safe, but no human institution is. Government certainly isn’t.
If all this sounds complicated, it is because democracy should be complicated. Simplicity is a characteristic of dictatorial information networks in which the center dictates everything and everybody silently obeys. It’s easy to follow this dictatorial monologue. In contrast, democracy is a conversation with numerous participants, many of them talking at the same time. It can be hard to follow such a conversation.
Moreover, the most important democratic institutions tend to be bureaucratic behemoths. Whereas citizens avidly follow the biographical dramas of the princely court and the presidential palace, they often find it difficult to understand how parliaments, courts, newspapers, and universities function. This is what helps strongmen mount populist attacks on institutions, dismantle all self-correcting mechanisms, and concentrate power in their own hands. We discussed populism briefly in the prologue, to help explain the populist challenge to the naive view of information. Here we need to revisit populism, get a broader understanding of its worldview, and explain its appeal to antidemocratic strongmen.
The term “populism” derives from the Latin populus, which means “the people.” In democracies, “the people” is considered the sole legitimate source of political authority. Only representatives of the people should have the authority to declare wars, pass laws, and raise taxes. Populists cherish this basic democratic principle, but somehow conclude from it that a single party or a single leader should monopolize all power. In a curious political alchemy, populists manage to base a totalitarian pursuit of unlimited power on a seemingly impeccable democratic principle. How does it happen?
The most novel claim populists make is that they alone truly represent the people. Since in democracies only the people should have political power, and since allegedly only the populists represent the people, it follows that the populist party should have all political power to itself. If some party other than the populists wins elections, it does not mean that this rival party won the people’s trust and is entitled to form a government. Rather, it means that the elections were stolen or that the people were deceived into voting in a way that doesn’t express their true will.
It should be stressed that for many populists, this is a genuinely held belief rather than a propaganda gambit. Even if they win just a small share of votes, populists may still believe they alone represent the people. An analogous case is that of communist parties. In the U.K., for example, the Communist Party of Great Britain (CPGB) never won more than 0.4 percent of votes in a general election,[9] but was nevertheless adamant that it alone truly represented the working class. Millions of British workers, they claimed, were voting for the Labour Party or even for the Conservative Party rather than for the CPGB because of “false consciousness.” Allegedly, through their control of the media, universities, and other institutions, the capitalists managed to deceive the working class into voting against its true interests, and only the CPGB could see through this deception. In like fashion, populists can believe that the enemies of the people have deceived the people into voting against its true will, which the populists alone represent.
A fundamental part of this populist credo is the belief that “the people” is not a collection of flesh-and-blood individuals with various interests and opinions, but rather a unified mystical body that possesses a single will—“the will of the people.” Perhaps the most notorious and extreme manifestation of this semireligious belief was the Nazi motto “Ein Volk, ein Reich, ein Führer,” which means “One People, One Country, One Leader.” Nazi ideology posited that the Volk (people) had a single will, whose sole authentic representative was the Führer (leader). The leader allegedly had an infallible intuition for how the people felt and what the people wanted. If some German citizens disagreed with the leader, it didn’t mean that the leader might be in the wrong. Rather, it meant that the dissenters belonged to some treasonous outsider group—Jews, communists, liberals—instead of to the people.
The Nazi case is of course extreme, and it is grossly unfair to accuse all populists of being crypto-Nazis with genocidal inclinations. However, many populist parties and politicians deny that “the people” might contain a diversity of opinions and interest groups. They insist that the real people has only one will and that they alone represent this will. In contrast, their political rivals—even when the latter enjoy substantial popular support—are depicted as “alien elites.” Thus, Hugo Chávez ran for the presidency in Venezuela with the slogan “Chávez is the people!”[10] President Erdoğan of Turkey once railed against his domestic critics, saying, “We are the people. Who are you?”—as if his critics weren’t Turks, too.[11]
How can you tell, then, whether someone is part of the people or not? Easy. If they support the leader, they are part of the people. This, according to the German political philosopher Jan-Werner Müller, is the defining feature of populism. What turns someone into a populist is claiming that they alone represent the people and that anyone who disagrees with them—whether state bureaucrats, minority groups, or even the majority of voters—either suffers from false consciousness or isn’t really part of the people.[12]
This is why populism poses a deadly threat to democracy. While democracy agrees that the people is the only legitimate source of power, democracy is based on the understanding that the people is never a unitary entity and therefore cannot possess a single will. Every people—whether Germans, Venezuelans, or Turks—is composed of many different groups, with a plurality of opinions, wills, and representatives. No group, including the majority group, is entitled to exclude other groups from membership in the people. This is what makes democracy a conversation. Holding a conversation presupposes the existence of several legitimate voices. If, however, the people has only one legitimate voice, there can be no conversation. Rather, the single voice dictates everything. Populism may therefore claim adherence to the democratic principle of “people’s power,” but it effectively empties democracy of meaning and seeks to establish a dictatorship.
Populism undermines democracy in another, more subtle, but equally dangerous way. Having claimed that they alone represent the people, populists argue that the people is not just the sole legitimate source of political authority but the sole legitimate source of all authority. Any institution that derives its authority from something other than the will of the people is antidemocratic. As the self-proclaimed representatives of the people, populists consequently seek to monopolize not just political authority but all types of authority and to take control of institutions such as media outlets, courts, and universities. By taking the democratic principle of “people’s power” to its extreme, populists turn totalitarian.
In fact, while democracy means that authority in the political sphere comes from the people, it doesn’t deny the validity of alternative sources of authority in other spheres. As discussed above, in a democracy independent media outlets, courts, and universities are essential self-correcting mechanisms that protect the truth even from the will of the majority. Biology professors claim that humans evolved from apes because the evidence supports this, even if the majority wills it to be otherwise. Journalists can reveal that a popular politician took a bribe, and if compelling evidence is presented in court, a judge may send that politician to jail, even if most people don’t want to believe these accusations.
Populists are suspicious of institutions that in the name of objective truths override the supposed will of the people. They tend to see this as a smoke screen for elites grabbing illegitimate power. This drives populists to be skeptical of the pursuit of truth, and to argue—as we saw in the prologue—that “power is the only reality.” They thereby seek to undercut or appropriate the authority of any independent institutions that might oppose them. The result is a dark and cynical view of the world as a jungle and of human beings as creatures obsessed with power alone. All social interactions are seen as power struggles, and all institutions are depicted as cliques promoting the interests of their own members. In the populist imagination, courts don’t really care about justice; they only protect the privileges of the judges. Yes, the judges talk a lot about justice, but this is a ploy to grab power for themselves. Newspapers don’t care about facts; they spread fake news to mislead the people and benefit the journalists and the cabals that finance them. Even scientific institutions aren’t committed to the truth. Biologists, climatologists, epidemiologists, economists, historians, and mathematicians are just another interest group feathering its own nest—at the expense of the people.
In all, it’s a rather sordid view of humanity, but two things nevertheless make it appealing to many. First, since it reduces all interactions to power struggles, it simplifies reality and makes events like wars, economic crises, and natural disasters easy to understand. Anything that happens—even a pandemic—is about elites pursuing power. Second, the populist view is attractive because it is sometimes correct. Every human institution is indeed fallible and suffers from some level of corruption. Some judges do take bribes. Some journalists do intentionally mislead the public. Academic disciplines are occasionally plagued by bias and nepotism. That is why every institution needs self-correcting mechanisms. But since populists are convinced that power is the only reality, they cannot accept that a court, a media outlet, or an academic discipline would ever be inspired by the value of truth or justice to correct itself.
While many people embrace populism because they see it as an honest account of human reality, strongmen are attracted to it for a different reason. Populism offers strongmen an ideological basis for making themselves dictators while pretending to be democrats. It is particularly useful when strongmen seek to neutralize or appropriate the self-correcting mechanisms of democracy. Since judges, journalists, and professors allegedly pursue political interests rather than truth, the people’s champion—the strongman—should control these positions instead of allowing them to fall into the hands of the people’s enemies. Similarly, since even the officials in charge of arranging elections and publicizing their results may be part of a nefarious conspiracy, they too should be replaced by the strongman’s loyalists.
In a well-functioning democracy, citizens trust the results of elections, the decisions of courts, the reports of media outlets, and the findings of scientific disciplines because citizens believe these institutions are committed to the truth. Once people think that power is the only reality, they lose trust in all these institutions, democracy collapses, and the strongmen can seize total power.
Of course, populism could lead to anarchy rather than totalitarianism, if it undermines trust in the strongmen themselves. If no human is interested in truth or justice, doesn’t this apply to Mussolini or Putin too? And if no human institution can have effective self-correcting mechanisms, doesn’t this include Mussolini’s National Fascist Party or Putin’s United Russia party? How can a deep-seated distrust of all elites and institutions be squared with unwavering admiration for one leader and party? This is why populists ultimately depend on the mystical notion that the strongman embodies the people. When trust in bureaucratic institutions like election boards, courts, and newspapers is particularly low, an enhanced reliance on mythology is the only way to preserve order.
Strongmen who claim to represent the people may well rise to power through democratic means, and often rule behind a democratic facade. Rigged elections in which they win overwhelming majorities serve as proof of the mystical bond between the leader and the people. Consequently, to measure how democratic an information network is, we cannot use a simple yardstick like whether elections are being held regularly. In Putin’s Russia, in Iran, and even in North Korea elections are held like clockwork. Rather, we need to ask much more complex questions like “What mechanisms prevent the central government from rigging the elections?” “How safe is it for leading media outlets to criticize the government?” and “How much authority does the center appropriate to itself?” Democracy and dictatorship aren’t binary opposites, but rather are on a continuum. To decide whether a network is closer to the democratic or the dictatorial end of the continuum, we need to understand how information flows in the network and what shapes the political conversation.
If one person dictates all the decisions, and even their closest advisers are terrified to voice a dissenting view, no conversation is taking place. Such a network is situated at the extreme dictatorial end of the spectrum. If nobody can voice unorthodox opinions publicly, but behind closed doors a small circle of party bosses or senior officials are able to freely express their views, then this is still a dictatorship, but it has taken a baby step in the direction of democracy. If 10 percent of the population participate in the political conversation by airing their opinions, voting in fair elections, and running for office, that may be considered a limited democracy, as was the case in many ancient city-states like Athens, or in the early days of the United States, when only wealthy white men had such political rights. As the percentage of people taking part in the conversation rises, so the network becomes more democratic.
The focus on conversations rather than elections raises a host of interesting questions. For example, where does that conversation take place? North Korea has the Mansudae Assembly Hall in Pyongyang, where the 687 members of the Supreme People’s Assembly meet and talk. However, while this Assembly is officially known as North Korea’s legislature, and while elections to the Assembly are held every five years, this body is widely considered a rubber stamp, executing decisions taken elsewhere. The anodyne discussions follow a predetermined script, and they aren’t geared to change anyone’s mind about anything.[13]
Is there perhaps another, more private hall in Pyongyang where the crucial conversations take place? Do Politburo members ever dare criticize Kim Jong Un’s policies during formal meetings? Perhaps it happens at private dinner parties or in unofficial think tanks? Information in North Korea is so concentrated and so tightly controlled that we cannot provide clear answers to these questions.[14]
Similar questions can be asked about the United States. In the United States, unlike in North Korea, people are free to say almost anything they want. Scathing public attacks on the government are a daily occurrence. But where is the room where the crucial conversations happen, and who sits there? The U.S. Congress was designed to fulfill this function, with the people’s representatives meeting to converse and try to convince one another. But when was the last time that an eloquent speech in Congress by a member of one party persuaded members of the other party to change their minds about anything? Wherever the conversations that shape American politics now take place, it is definitely not in Congress. Democracies die not only when people are not free to talk but also when people are not willing or able to listen.
Based on the above definition of democracy, we can now turn to the historical record and examine how changes in information technology and information flows have shaped the history of democracy. To judge by the archaeological and anthropological evidence, democracy was the most typical political system among archaic hunter-gatherers. Stone Age bands obviously didn’t have formal institutions like elections, courts, and media outlets, but their information networks were usually distributed and gave ample opportunities for self-correction. In bands numbering just a few dozen people, information could easily be shared among all group members, and when the band decided where to pitch camp, where to go hunting, or how to handle a conflict with another band, everyone could take part in the conversation and argue with one another. Bands usually belonged to a larger tribe that included hundreds or even thousands of people. But when important choices affecting the whole tribe had to be made, such as whether to go to war, tribes were usually still small enough for a large percentage of their members to gather in one place and converse.[15]
While bands and tribes sometimes had dominant leaders, these tended to exercise only limited authority. Leaders had no standing armies, police forces, or governmental bureaucracies at their disposal, so they couldn’t just impose their will by force.[16] Leaders also found it difficult to control the economic basis of people’s lives. In modern times, dictators like Vladimir Putin and Saddam Hussein have often based their political power on monopolizing economic assets like oil wells.[17] In ancient times, Chinese emperors, Greek tyrants, and Egyptian pharaohs dominated society by controlling granaries, silver mines, and irrigation canals. In contrast, in a hunter-gatherer economy such centralized economic control was possible only under special circumstances. For example, along the northwestern coast of North America some hunter-gatherer economies relied on catching and preserving large numbers of salmon. Since salmon runs peaked for a few weeks in specific creeks and rivers, a powerful chief could monopolize this asset.[18]
But this was exceptional. Most hunter-gatherer economies were far more diversified. One leader, even supported by a few allies, could not corral the savanna and prevent people from gathering plants and hunting animals there. If all else failed, hunter-gatherers could therefore vote with their feet. They had few possessions, and their most important assets were their personal skills and personal friends. If a chief turned dictatorial, people could just walk away.[19]
Even when hunter-gatherers did end up ruled by a domineering chief, as happened among the salmon-fishing people of northwestern America, at least that chief was accessible. He didn’t live in a faraway fortress surrounded by an unfathomable bureaucracy and a cordon of armed guards. If you wanted to voice a complaint or a suggestion, you could usually get within earshot of him. The chief couldn’t control public opinion, nor could he shut himself off from it. In other words, there was no way for a chief to force all information to flow through the center, or to prevent people from talking with one another, criticizing him, or organizing against him.[20]
In the millennia following the agricultural revolution, and especially after writing helped create large bureaucratic polities, it became easier to centralize the flow of information and harder to maintain the democratic conversation. In small city-states like those of ancient Mesopotamia and Greece, autocrats like Lugal-Zagesi of Umma and Pisistratus of Athens relied on bureaucrats, archives, and a standing army to monopolize key economic assets and information about ownership, taxation, diplomacy, and politics. At the same time, it became harder for the mass of citizens to keep in direct touch with one another. There was no mass communication technology like newspapers or radio, and it was not easy to squeeze tens of thousands of citizens into the main city square to hold a communal discussion.
Democracy was still an option for these small city-states, as the history of both early Sumer and classical Greece clearly indicates.[21] However, the democracy of ancient city-states tended to be less inclusive than the democracy of archaic hunter-gatherer bands. Probably the most famous example of ancient city-state democracy is Athens in the fifth and fourth centuries BCE. All adult male citizens could participate in the Athenian assembly, vote on public policy, and be elected to public offices. But women, slaves, and noncitizen residents of the city did not enjoy these privileges. Only about 25–30 percent of the adult population of Athens enjoyed full political rights.[22]
As the size of polities continued to increase, and city-states were superseded by larger kingdoms and empires, even Athenian-style partial democracy disappeared. All the famous examples of ancient democracies are city-states such as Athens and Rome. In contrast, we don’t know of any large-scale kingdom or empire that operated along democratic lines.
For example, when in the fifth century BCE Athens expanded from a city-state into an empire, it did not grant citizenship and political rights to those it conquered. The city of Athens remained a limited democracy, but the much bigger Athenian Empire was ruled autocratically from the center. All the important decisions about taxes, diplomatic alliances, and military expeditions were taken in Athens. Subject lands like the islands of Naxos and Thasos had to obey the orders of the Athenian popular assembly and elected officials, without the Naxians and Thasians being able to vote in that assembly or be elected to office. It was also difficult for Naxos, Thasos, and other subject lands to coordinate a united opposition to the decisions taken in the Athenian center, and if they tried to do so, it would have brought ruthless Athenian reprisals. Information in the Athenian Empire flowed to and from Athens.[23]
When the Roman Republic built its empire, conquering first the Italian Peninsula and eventually the entire Mediterranean basin, the Romans took a somewhat different course. Rome gradually did extend citizenship to the conquered peoples. It began by granting citizenship to the inhabitants of Latium, then to the inhabitants of other Italian regions, and finally to inhabitants of even distant provinces like Gallia and Syria. However, as citizenship was extended to more people, the political rights of citizens were simultaneously restricted.
The ancient Romans had a clear understanding of what democracy means, and they were originally fiercely committed to the democratic ideal. After expelling the last king of Rome in 509 BCE, the Romans developed a deep dislike for monarchy and a fear of giving unlimited power to any single individual or institution. Supreme executive power was therefore shared by two consuls who balanced each other. These consuls were chosen by citizens in free elections, held office for a single year, and were additionally checked by the powers of the popular assembly, of the Senate, and of other elected officials like the tribunes.
But when Rome extended citizenship to Latins, Italians, and finally to Gauls and Syrians, the power of the popular assembly, the tribunes, the Senate, and even the two consuls was gradually reduced, until in the late first century BCE the Caesar family established its autocratic rule. Anticipating present-day strongmen like Putin, Augustus didn’t crown himself king, and pretended that Rome was still a republic. The Senate and the popular assembly continued to convene, and every year citizens continued to choose consuls and tribunes. But these institutions were emptied of real power.[24]
In 212 CE, the emperor Caracalla—the offspring of a Phoenician family from North Africa—took a seemingly momentous step and granted automatic Roman citizenship to all free adult males throughout the vast empire. Rome in the third century CE accordingly had tens of millions of citizens.[25] But by that time, all the important decisions were made by a single unelected emperor. While consuls were still ceremonially chosen every year, Caracalla inherited power from his father, Septimius Severus, who became emperor by winning a civil war. The most important step Caracalla took to cement his rule was murdering his brother and rival, Geta.
When Caracalla ordered the murder of Geta, declared war on the Parthian Empire, or extended Roman citizenship to millions of Britons, Greeks, and Arabs, he had no need to ask permission from the Roman people. All of Rome’s self-correcting mechanisms had been neutralized long before. If Caracalla made some error in foreign or domestic policy, neither the Senate nor any officials could intervene to correct it, except by rising in rebellion or assassinating him. And when Caracalla was indeed assassinated in 217, it only led to a new round of civil wars culminating in the rise of new autocrats. Rome in the third century CE, like Russia in the eighteenth century, was, in the words of Madame de Staël, “autocracy tempered by assassination.”
By the third century CE, not only the Roman Empire but all other major human societies on earth were centralized information networks lacking strong self-correcting mechanisms. This was true of the Parthian and Sassanian Empires in Persia, of the Kushan and Gupta Empires in India, and of China’s Han Empire and its successor Three Kingdoms.[26] Thousands of small-scale societies continued to function democratically in the third century CE and beyond, but it seemed that distributed democratic networks were simply incompatible with large-scale societies.
Were large-scale democracies really unworkable in the ancient world? Or did autocrats like Augustus and Caracalla deliberately sabotage them? This question is important not only for our understanding of ancient history but also for our view of democracy’s future in the age of AI. How do we know whether democracies fail because they are undermined by strongmen or because of much deeper structural and technological reasons?
To answer that question, let’s take a closer look at the Roman Empire. The Romans were clearly familiar with the democratic ideal, and it continued to be important to them even after the Caesar family rose to power. Otherwise, Augustus and his heirs would not have bothered to maintain seemingly democratic institutions like the Senate or annual elections to the consulate and other offices. So why did power end up in the hands of an unelected emperor?
In theory, even after Roman citizenship was expanded to tens of millions of people throughout the Mediterranean basin, wasn’t it possible to hold empire-wide elections for the position of emperor? This would surely have required very complicated logistics, and it would have taken several months to learn the results of the elections. But was that really a deal breaker?
The key misconception here is equating democracy with elections. Tens of millions of Roman citizens could theoretically vote for this or that imperial candidate. But the real question is whether tens of millions of Romans could have held an ongoing empire-wide political conversation. In present-day North Korea no democratic conversation takes place because people aren’t free to talk, yet we could well imagine a situation in which this freedom is guaranteed—as it is in South Korea. In the present-day United States the democratic conversation is endangered by people’s inability to listen to and respect their political rivals, yet this can presumably still be fixed. By contrast, in the Roman Empire there was simply no way to conduct or sustain a democratic conversation, because the technological means to hold such a conversation did not exist.
To hold a conversation, it is not enough to have the freedom to talk and the ability to listen. There are also two technical preconditions. First, people need to be within hearing range of one another. This means that the only way to hold a political conversation in a territory the size of the United States or the Roman Empire is with the help of some kind of information technology that can swiftly convey what people say over long distances.
Second, people need at least a rudimentary understanding of what they are talking about. Otherwise, they are just making noise, not holding a meaningful conversation. People usually have a good understanding of political issues of which they have direct experience. For example, poor people have many insights about poverty that escape economics professors, and ethnic minorities understand racism in a much more profound way than people who have never suffered from it. However, if lived experience were the only way to understand crucial political issues, large-scale political conversations would be impossible. For then every group of people could talk meaningfully only about its own experiences. Even worse, nobody else could understand what they were saying. If lived experience is the sole possible source of knowledge, then merely listening to the insights gained from someone else’s lived experience cannot impart these insights to me.
The only way to have a large-scale political conversation among diverse groups of people is if people can gain some understanding of issues that they have never experienced firsthand. In a large polity, it is a crucial role of the education system and the media to inform people about things they have never faced themselves. If there is no education system or media platform to perform this role, no meaningful large-scale conversations can take place.
In a small Neolithic town of a few thousand inhabitants people might sometimes have been afraid to say what they thought, or might have refused to listen to their rivals, but it was relatively easy to satisfy the more fundamental technical preconditions for meaningful discourse. First, people lived in proximity to one another, so they could easily meet most other community members and hear their voices. Second, everybody had intimate knowledge of the dangers and opportunities that the town faced. If an enemy war party approached, everyone could see it. If the river flooded the fields, everyone witnessed the economic effects. When people talked about war and hunger, they knew what they were saying.
In the fourth century BCE, the city-state of Rome was still small enough to allow a large percentage of its citizens to congregate in the Forum in times of emergency, listen to respected leaders, and voice their personal views on the matter at hand. When in 390 BCE Gallic invaders attacked Rome, almost everyone lost a relative in the defeat at the Battle of the Allia and lost property when the victorious Gauls then sacked Rome. The desperate Romans appointed Marcus Furius Camillus as dictator. In Rome, the dictator was a public official appointed in times of emergency who had unlimited powers but only for a short, predetermined period, following which he was held accountable for his actions. After Camillus led the Romans to victory, everybody could see that the emergency was over, and Camillus stepped down.[27]
In contrast, by the third century CE, the Roman Empire had a population of between sixty and seventy-five million people,[28] spread over five million square kilometers.[29] Rome lacked mass communication technology like radio or daily newspapers. Only 10–20 percent of adults had reading skills,[30] and there was no organized education system that could inform them about the geography, history, and economy of the empire. True, many people across the empire did share some cultural ideas, such as a strong belief in the superiority of Roman civilization over the barbarians. These shared cultural beliefs were crucial in preserving order and holding the empire together. But their political implications were far from clear, and in times of crisis there was no possibility of holding a public conversation about what should be done.
How could Syrian merchants, British shepherds, and Egyptian villagers converse about the ongoing wars in the Middle East or about the immigration crisis brewing along the Danube? The lack of a meaningful public conversation was not the fault of Augustus, Nero, Caracalla, or any of the other emperors. They didn’t sabotage Roman democracy. Given the size of the empire and the available information technology, democracy was simply unworkable. This was already acknowledged by ancient philosophers like Plato and Aristotle, who argued that democracy can work only in small-scale city-states.[31]
If the absence of Roman democracy had merely been the fault of particular autocrats, we should at least have seen large-scale democracies flourishing in other places, such as Sassanian Persia, Gupta India, or Han China. But prior to the development of modern information technology, there are no examples of large-scale democracies anywhere.
It should be stressed that in many large-scale autocracies local affairs were often managed democratically. The Roman emperor didn’t have the information needed to micromanage hundreds of cities across the empire, whereas local citizens in each city could continue to hold a meaningful conversation about municipal politics. Consequently, long after the Roman Empire became an autocracy, many of its cities continued to be governed by local assemblies and elected officials. At a time when elections to the consulship in Rome became ceremonial affairs, elections to municipal offices in small cities like Pompeii were hotly contested.
Pompeii was destroyed in the eruption of Vesuvius in 79 CE, during the reign of the emperor Titus. Archaeologists have uncovered about fifteen hundred graffiti concerned with various local election campaigns. One coveted office was that of the city’s aedile—the magistrate in charge of maintaining the city’s infrastructure and public buildings.[32] Lucretius Fronto’s supporters painted the slogan “If honest living is thought to be any recommendation, then Lucretius Fronto is worthy of being elected.” One of his opponents, Gaius Julius Polybius, ran with the slogan “Elect Gaius Julius Polybius to the office of aedile. He provides good bread.”
There were also endorsements by religious groups and professional associations, such as “The worshippers of Isis demand the election of Gnaeus Helvius Sabinus” and “All the mule drivers request that you elect Gaius Julius Polybius.” There was dirty work, too. Someone who clearly wasn’t Marcus Cerrinius Vatia scrawled “All the drunkards ask you to elect Marcus Cerrinius Vatia” and “The petty thieves ask you to elect Vatia.”[33] Such electioneering indicates that the position of aedile carried real power in Pompeii and that the aedile was chosen in relatively free and fair elections, rather than appointed by the imperial autocrat in Rome.
Even in empires whose rulers never had any democratic pretensions, democracy could still flourish in local settings. In the Tsarist Empire, for example, the daily lives of millions of villagers were managed by rural communes. Such communes went back at least to the eleventh century, and each usually included fewer than a thousand people. They were subject to a landlord and bore many obligations to their lord and to the central tsarist state, but they had considerable autonomy in managing their internal affairs and in deciding how to discharge their external obligations, such as paying taxes and providing military recruits. The commune mediated local disputes, provided emergency relief, enforced social norms, oversaw the distribution of land to individual households, and regulated access to shared resources like forests and pastures. Decisions on important matters were made in communal meetings in which the heads of local households expressed their views and chose the commune’s elder. Resolutions at least tried to reflect the majority’s will.[34]
In tsarist villages and Roman cities a form of democracy was possible because a meaningful public conversation was possible. Pompeii was a city of about eleven thousand people in 79 CE,[35] so everybody could supposedly judge for themselves whether Lucretius Fronto was an honest man and whether Marcus Cerrinius Vatia was a drunken thief. But democracy at a scale of millions became possible only in the modern age, when mass media changed the nature of large-scale information networks.
Mass media are information technologies that can quickly connect millions of people even when they are separated by vast distances. The printing press was a crucial step in that direction. Print made it possible to cheaply and quickly produce large numbers of books and pamphlets, which enabled more people to voice their opinions and be heard over a large territory, even if the process still took time. This sustained some of the first experiments in large-scale democracy, such as the Polish-Lithuanian Commonwealth established in 1569 and the Dutch Republic established in 1579.
Some may contest the characterization of these polities as “democratic,” since only a minority of relatively wealthy citizens enjoyed full political rights. In the Polish-Lithuanian Commonwealth, political rights were reserved for adult male members of the szlachta—the nobility. These numbered up to 300,000 individuals, or about 5 percent of the total adult population.[36] One of the szlachta’s prerogatives was to elect the king, but since voting required traveling long distances to a national convention, few exercised their right. In the sixteenth and seventeenth centuries participation in royal elections usually ranged between 3,000 and 7,000 voters, except for the 1669 elections, in which 11,271 participated.[37] While this hardly sounds democratic in the twenty-first century, it should be remembered that all large-scale democracies until the twentieth century limited political rights to a small circle of relatively wealthy men. Democracy is never a matter of all or nothing. It is a continuum, and late-sixteenth-century Poles and Lithuanians explored previously unknown regions of that continuum.
Aside from electing its king, Poland-Lithuania had an elected parliament (the Sejm) that approved or blocked new legislation and had the power to veto royal decisions on taxation and foreign affairs. Moreover, citizens enjoyed a list of inviolable rights such as freedom of assembly and freedom of religion. In the late sixteenth and early seventeenth centuries, when most of Europe suffered from bitter religious conflicts and persecutions, Poland-Lithuania was a tolerant haven, where Catholics, Greek Orthodox, Lutherans, Calvinists, Jews, and even Muslims coexisted in relative harmony.[38] In 1616, more than a hundred mosques functioned in the commonwealth.[39]
In the end, however, the Polish-Lithuanian experiment in decentralization proved to be impractical. The country was Europe’s second-largest state (after Russia), covering almost a million square kilometers and including most of the territory of today’s Poland, Lithuania, Belarus, and Ukraine. It lacked the information, communication, and education systems necessary to hold a meaningful political conversation between Polish aristocrats, Lithuanian noblemen, Ukrainian Cossacks, and Jewish rabbis spread from the Baltic Sea to the Black Sea. Its self-correcting mechanisms were also so extensive that they paralyzed the central government. In particular, every single Sejm deputy was given the right to veto all parliamentary legislation, which led to political deadlock. The combination of a large and diverse polity with a weak center proved fatal. The commonwealth was torn apart by centrifugal forces, and its pieces were then divided between the centralized autocracies of Russia, Austria, and Prussia.
The Dutch experiment fared better. In some ways the Dutch United Provinces were even less centralized than the Polish-Lithuanian Commonwealth, since they lacked a monarch, and were a union of seven autonomous provinces, which were in turn made up of self-governing towns and cities.[40] This decentralized nature is reflected in the plural form of how the country was known abroad—the Netherlands in English, Les Pays-Bas in French, Los Países Bajos in Spanish, and so on.
However, taken together the United Provinces covered only about a twenty-fifth of the landmass of Poland-Lithuania and possessed a much better information, communication, and education system that tied their constituent parts closely together.[41] The United Provinces also pioneered a new information technology with a big future. In June 1618 a pamphlet titled Courante uyt Italien, Duytslandt &c. appeared in Amsterdam. As its title indicated, it carried news from the Italian Peninsula, the German lands, and other places. There was nothing remarkable about this particular pamphlet, except that new issues were published in the following weeks, too. They appeared regularly until 1670, when the Courante uyt Italien, Duytslandt &c. merged with other serial pamphlets into the Amsterdamsche Courant, which appeared until 1903, when it was merged into De Telegraaf—the Netherlands’ largest newspaper to this day.[42]
The newspaper is essentially a periodic pamphlet, and it differed from earlier one-off pamphlets in having a much stronger self-correcting mechanism. Unlike one-off publications, a weekly or daily newspaper has a chance to correct its mistakes and an incentive to do so in order to win the public’s trust. Shortly after the Courante uyt Italien, Duytslandt &c. appeared, a competing newspaper titled Tijdinghen uyt Verscheyde Quartieren (Tidings from Various Quarters) made its debut. The Courante was generally considered more reliable, because it tried to check its stories before publishing them, and because the Tijdinghen was accused of being overly patriotic and reporting only news favorable to the Netherlands. Nevertheless, both newspapers survived, because, as one reader explained, “one can always find something in one newspaper that is not available in the other.” In the following decades dozens of additional newspapers were published in the Netherlands, which became Europe’s journalistic hub.[43]
Newspapers that succeeded in gaining widespread trust became the architects and mouthpieces of public opinion. They created a far more informed and engaged public, which changed the nature of politics, first in the Netherlands and later around the world.[44] The political influence of newspapers was so crucial that newspaper editors often became political leaders. Jean-Paul Marat rose to power in revolutionary France by founding and editing L’Ami du Peuple; Eduard Bernstein helped create Germany’s Social Democratic Party by editing Der Sozialdemokrat; Vladimir Lenin’s most important position before becoming Soviet dictator was editor of Iskra; and Benito Mussolini rose to fame first as a socialist journalist in Avanti! and later as founder and editor of the firebrand right-wing paper Il Popolo d’Italia.
Newspapers played a crucial role in the formation of early modern democracies like the United Provinces in the Low Countries, the United Kingdom in the British Isles, and the United States in North America. As the names themselves indicate, these were not city-states like ancient Athens and Rome but amalgams of different regions glued together in part by this new information technology. For example, when on December 6, 1825, President John Quincy Adams gave his First Annual Message to the U.S. Congress, the text of the address and summaries of the main points were published over the next weeks by newspapers from Boston to New Orleans. (At the time, hundreds of newspapers and magazines were being published in the United States.[45])
Adams declared his administration’s intention of initiating numerous federal projects, ranging from the construction of roads to the founding of an astronomical observatory, which he poetically named “light-house of the skies.” His speech ignited a fierce public debate, much of it conducted in print, between those who supported such “big government” plans as essential for the development of the United States and those who preferred a “small government” approach and saw Adams’s plans as federal overreach and an encroachment on states’ rights.
Northern supporters of the “small government” camp complained that it was unconstitutional for the federal government to tax the citizens of richer states in order to build roads in poorer states. Southerners feared that a federal government that claimed the power to build a lighthouse of the skies in their backyard might one day claim the power to free their slaves, too. Adams was accused of harboring dictatorial ambitions, while the erudition and sophistication of his speech were criticized as elitist and disconnected from ordinary Americans. The public debates over the 1825 message to Congress dealt a severe blow to the reputation of the Adams administration and helped pave the way for Adams’s subsequent electoral defeat. In the 1828 presidential elections, Adams lost to Andrew Jackson—a rich slaveholding planter from Tennessee who was successfully rebranded in numerous newspaper columns as “the man of the people” and who claimed that the previous elections were in fact stolen by Adams and by the corrupt Washington elites.[46]
Newspapers of the time were of course still slow and limited compared with the mass media of today. Newspapers traveled at the pace of a horse or sailboat, and relatively few people read them regularly. There were no newsstands or street vendors, so people had to buy subscriptions, which were expensive; average annual subscriptions cost around one week’s wages for a skilled journeyman. As a result, the total number of subscribers to all U.S. newspapers in 1830 is estimated at just seventy-eight thousand. Since some subscribers were associations or businesses rather than individuals, and since every copy was probably read by several people, it seems reasonable to assume that regular newspaper readership numbered in the hundreds of thousands. But millions more people rarely, if ever, read newspapers.[47]
No wonder that American democracy in those days was a limited affair—and the domain of wealthy white men. In the 1824 elections that brought Adams to power, 1.3 million Americans were theoretically eligible to vote, out of an adult population of about 5 million (or around 25 percent). Only 352,780 people—7 percent of the total adult population—actually made use of their right. Adams didn’t even win a majority of those who voted. Owing to the quirks of the U.S. electoral system, he became president thanks to the support of just 113,122 voters, or not much more than 2 percent of adults, and 1 percent of the total population.[48] In Britain at the same time, only about 400,000 people were eligible to vote for Parliament, or around 6 percent of the adult population. Moreover, 30 percent of parliamentary seats were not even contested.[49]
You may wonder whether we are talking about democracies at all. At a time when the United States had more slaves than voters (more than 1.5 million Americans were enslaved in the early 1820s),[50] was the United States really a democracy? This is a question of definitions. As with the late-sixteenth-century Polish-Lithuanian Commonwealth, so also with the early-nineteenth-century United States, “democracy” is a relative term. As noted earlier, democracy and autocracy aren’t absolutes; they are part of a continuum. In the early nineteenth century, out of all large-scale human societies, the United States was probably the closest to the democratic end of the continuum. Giving 25 percent of adults the right to vote doesn’t sound like much today, but in 1824 that was a far higher percentage than in the Tsarist, Ottoman, or Chinese Empires, in which nobody had the right to vote.[51]
Besides, as emphasized throughout this chapter, voting is not the only thing that counts. An even more important reason to consider the United States in 1824 a democracy is that compared with most other polities of its day, the new country possessed much stronger self-correcting mechanisms. The Founding Fathers were inspired by ancient Rome—witness the Senate and the Capitol in Washington—and they were well aware that the Roman Republic eventually turned into an autocratic empire. They feared that some American Caesar would do something similar to their republic, and constructed multiple overlapping self-correcting mechanisms, known as the system of checks and balances. One of these was a free press. In ancient Rome, the self-correcting mechanisms stopped functioning as the republic enlarged its territory and population. In the United States, modern information technology combined with freedom of the press helped the self-correcting mechanisms survive even as the country extended from the Atlantic to the Pacific.
It was these self-correcting mechanisms that gradually enabled the United States to expand the franchise, abolish slavery, and turn itself into a more inclusive democracy. As noted in chapter 2, the Founding Fathers committed enormous mistakes—such as endorsing slavery and denying women the vote—but they also provided the tools for their descendants to correct these mistakes. That was their greatest legacy.
Printed newspapers were just the first harbinger of the mass media age. During the nineteenth and twentieth centuries, a long list of new communication and transportation technologies—such as the telegraph, the telephone, television, radio, the train, the steamship, and the airplane—supercharged the power of mass media.
When Demosthenes gave a public speech in Athens around 350 BCE, it was aimed primarily at the limited audience actually present in the Athenian agora. When John Quincy Adams gave his First Annual Message in 1825, his words spread at the pace of a horse. When Abraham Lincoln gave his Gettysburg Address on November 19, 1863, telegraphs, locomotives, and steamships conveyed his words much faster throughout the Union and beyond. The very next day The New York Times had already reprinted the speech in full,[52] as had numerous other newspapers from The Portland Daily Press in Maine to the Ottumwa Courier in Iowa.[53]
As befitted a democracy with strong self-correcting mechanisms, the president’s speech sparked a lively conversation rather than universal applause. Most newspapers lauded it, but some expressed their doubts. The Chicago Times wrote on November 20 that “the cheek of every American must tingle with shame as he reads the silly, flat and dishwatery utterances” of President Lincoln.[54] The Patriot & Union, a local newspaper in Harrisburg, Pennsylvania, also blasted “the silly remarks of the President” and hoped that “the veil of oblivion shall be dropped over them and that they shall be no more repeated or thought of.”[55] Though the country was in the midst of a civil war, journalists were free to publicly criticize—and even ridicule—the president.
Fast-forward a century, and things really picked up speed. For the first time in history, new technologies allowed masses of people, spread over vast swaths of territory, to connect in real time. In 1960, about seventy million Americans (39 percent of the total population), dispersed over the North American continent and beyond, watched the Nixon-Kennedy presidential debates live on television, with millions more listening on the radio.[56] The only effort viewers and listeners had to make was to press a button while sitting in their homes. Large-scale democracy had now become feasible. Millions of people separated by thousands of kilometers could conduct informed and meaningful public debates about the rapidly evolving issues of the day. By 1960, all adult Americans were theoretically eligible to vote, and close to seventy million (about 64 percent of the electorate) actually did so—though millions of Blacks and other disenfranchised groups were prevented from voting through various voter-suppression schemes.[57]
As always, we should beware of technological determinism and of concluding that the rise of mass media led to the rise of large-scale democracy. Mass media made large-scale democracy possible, rather than inevitable. And it also made possible other types of regimes. In particular, the new information technologies of the modern age opened the door for large-scale totalitarian regimes. Like Nixon and Kennedy, Stalin and Khrushchev could say something over the radio and be heard instantaneously by hundreds of millions of people from Vladivostok to Kaliningrad. They could also receive daily reports by phone and telegraph from millions of secret police agents and informers. If a newspaper in Vladivostok or Kaliningrad wrote that the supreme leader’s latest speech was silly (as happened to Lincoln’s Gettysburg Address), then everyone involved—from the editor in chief to the typesetters—would likely have received a visit from the KGB.
Totalitarian systems assume their own infallibility and seek to control the totality of people’s lives. Before the invention of the telegraph, radio, and other modern information technology, large-scale totalitarian regimes were impossible. Roman emperors, Abbasid caliphs, and Mongol khans were often ruthless autocrats who believed they were infallible, but they did not have the apparatus necessary to impose totalitarian control over large societies. To understand this, we should first clarify the difference between totalitarian regimes and less extreme autocratic regimes. In an autocratic network, there are no legal limits on the will of the ruler, but there are nevertheless a lot of technical limits. In a totalitarian network, many of these technical limits are absent.[58]
For example, in autocratic regimes like the Roman Empire, the Abbasid Empire, and the Mongol Empire, rulers could usually execute any person who displeased them, and if some law got in their way, they could ignore or change the law. The emperor Nero arranged the murder of his mother, Agrippina, and his wife, Octavia, and forced his mentor Seneca to commit suicide. Nero also executed or exiled some of the most respected and powerful Roman aristocrats merely for voicing dissent or telling jokes about him.[59]
While autocratic rulers like Nero could execute anyone who did or said something that displeased them, they couldn’t know what most people in their empire were doing or saying. Theoretically, Nero could issue an order that any person in the Roman Empire who criticized or insulted the emperor must be severely punished. Yet there were no technical means for implementing such an order. Roman historians like Tacitus portray Nero as a bloodthirsty tyrant who instigated an unprecedented reign of terror. But this was a very limited type of terror. Although he executed or exiled a number of family members, aristocrats, and senators within his orbit, ordinary Romans in the city’s slums and provincials in distant towns like Jerusalem and Londinium could speak their mind much more freely.[60]
Modern totalitarian regimes like the Stalinist U.S.S.R. instigated terror on an altogether different scale. Totalitarianism is the attempt to control what every person throughout the country is doing and saying every moment of the day, and potentially even what every person is thinking and feeling. Nero might have dreamed about such powers, but he lacked the means to realize them. Given the limited tax base of the agrarian Roman economy, Nero couldn’t employ many people in his service. He could place informers at the dinner parties of Roman senators, but he had only about 10,000 imperial administrators[61] and 350,000 soldiers[62] to control the rest of the empire, and he lacked the technology to communicate with them swiftly.
Nero and his fellow emperors had an even bigger problem ensuring the loyalty of the administrators and soldiers they did have on their payroll. No Roman emperor was ever toppled by a democratic revolution like the ones that deposed Louis XVI, Nicolae Ceauşescu, or Hosni Mubarak. Instead, dozens of emperors were assassinated or deposed by their own generals, officials, bodyguards, or family members.[63] Nero himself was overthrown by a revolt of the governor of Hispania, Galba. Six months later Galba was ousted by Otho, the governor of Lusitania. Within three months, Otho was deposed by Vitellius, commander of the Rhine army. Vitellius lasted about eight months before he was defeated and killed by Vespasian, commander of the army in Judaea. Being killed by a rebellious subordinate was the biggest occupational hazard not just for Roman emperors but for almost all premodern autocrats.
Emperors, caliphs, shahs, and kings found it a huge challenge to keep their subordinates in check. Rulers consequently focused their attention on controlling the military and the taxation system. Roman emperors had the authority to interfere in the local affairs of any province or city, and they sometimes exercised that authority, but this was usually done in response to a specific petition sent by a local community or official,[64] rather than as part of some empire-wide totalitarian Five-Year Plan. If you were a mule driver in Pompeii or a shepherd in Roman Britain, Nero didn’t want to control your daily routines or to police the jokes you told. As long as you paid your taxes and didn’t resist the legions, that was good enough for Nero.
Some scholars claim that despite the technological difficulties there were attempts to establish totalitarian regimes in ancient times. The most common example cited is Sparta. According to this interpretation, Spartans were ruled by a totalitarian regime that micromanaged every aspect of their lives—from whom they married to what they ate. However, while the Spartan regime was certainly draconian, it actually included several self-correcting mechanisms that prevented power from being monopolized by a single person or faction. Political authority was divided between two kings, five ephors (senior magistrates), twenty-eight members of the Gerousia council, and the popular assembly. Important decisions—such as whether to go to war—often involved fierce public debates.
Moreover, irrespective of how we evaluate the nature of Sparta’s regime, it is clear that the same technological limitations that confined ancient Athenian democracy to a single city also limited the scope of the Spartan political experiment. After winning the Peloponnesian War, Sparta installed military garrisons and pro-Spartan governments in numerous Greek cities, requiring them to follow its lead in foreign policy and sometimes also to pay tribute. But unlike the U.S.S.R. after World War II, Sparta did not try to export its domestic system or to remake the societies it dominated in its own image. It couldn’t construct an information network big and dense enough to control the lives of ordinary people in every Greek town and village.[65]
A much more ambitious totalitarian project was launched by the Qin dynasty in ancient China (221–206 BCE). After defeating all the other Warring States, the Qin ruler Qin Shi Huang controlled a huge empire with tens of millions of subjects, who belonged to numerous different ethnic groups, spoke diverse languages, and were loyal to various local traditions and elites. To cement its power, the victorious Qin regime tried to dismantle any regional powers that might challenge its authority. It confiscated the lands and wealth of local aristocrats and forced regional elites to move to the imperial capital of Xianyang, thereby separating them from their power base and making them easier to monitor.
The Qin regime also embarked on a ruthless campaign of centralization and homogenization. It created a new simplified script to be used throughout the empire and standardized coinage, weights, and measures. It built a road network radiating out of Xianyang, with standardized rest houses, relay stations, and military checkpoints. People needed written permits in order to enter or leave the capital region or frontier zones. Even the width of axles was standardized to ensure that carts and chariots could run in the same ruts.
Every action, from tilling fields to getting married, was supposed to serve some military need, and the type of military discipline that Rome reserved for the legions was imposed by the Qin on the entire population. The envisioned reach of this system can be exemplified by one Qin law that specified the punishment an official faced if he neglected a granary under his supervision. The law discusses the number of rat holes in the granary that would warrant fining or berating the official: “For three or more rat holes the fine is [the purchase of] one shield [for the army] and for two or fewer [the responsible official] is berated. Three mouse holes are equal to one rat hole.”[66]
To facilitate this totalitarian system, the Qin attempted to create a militarized social order. Every male subject had to belong to a five-man unit. These units were aggregated into larger formations, from local hamlets (li), through cantons (xiang) and counties (xian), all the way to the large imperial commanderies (jun). People were forbidden to change their residence without permit, to the extent that guests could not even stay overnight at a friend’s house without proper identification and authorization.
Every Qin male subject was also given a rank, just as every soldier in an army has a rank. Obedience to the state resulted in promotion to higher ranks, which brought with it economic and legal privileges, while disobedience could result in demotion or punishment. People in each formation were supposed to supervise one another, and if any individual committed some misdeed, all could be punished for it. Anyone who failed to report a criminal—even their own relatives—would be killed. Those who reported crimes were rewarded with higher ranks and other perks.
It is highly questionable to what extent the regime managed to implement all these totalitarian measures. Bureaucrats writing documents in a government office often invent elaborate rules and regulations, which then turn out to be impractical. Did conscientious government officials really go around the entire Qin Empire counting rat holes in every granary? Were peasants in every remote mountain hamlet really organized into five-man squads? Probably not. Nevertheless, the Qin Empire outdid other ancient empires in its totalitarian ambitions.
The Qin regime even tried to control what its subjects were thinking and feeling. During the Warring States period Chinese thinkers were relatively free to develop myriad ideologies and philosophies, but the Qin adopted the doctrine of Legalism as the official state ideology. Legalism posited that humans were naturally greedy, cruel, and egotistical. It emphasized the need for strict control, argued that punishments and rewards were the most effective means of control, and insisted that state power not be curtailed by any moral consideration. Might was right, and the good of the state was the supreme good.[67] The Qin proscribed other philosophies, such as Confucianism and Daoism, which believed humans were more altruistic and which emphasized the importance of virtue rather than violence.[68] Books espousing such soft views were banned, as well as books that contradicted the official Qin version of history.
When one scholar argued that Qin Shi Huang should emulate the founder of the ancient Zhou dynasty and decentralize state power, the Qin chief minister, Li Si, countered that scholars should stop criticizing present-day institutions by idealizing the past. The regime ordered the confiscation of all books that romanticized antiquity or otherwise criticized the Qin. Such problematic texts were stored in the imperial library and could be studied only by official scholars.[69]
The Qin Empire was probably the most ambitious totalitarian experiment in human history prior to the modern age, and its scale and intensity would prove to be its ruin. The attempt to regiment tens of millions of people along military lines, and to monopolize all resources for military purposes, led to severe economic problems, wastefulness, and popular resentment. The regime’s draconian laws, along with its hostility to regional elites and its voracious appetite for taxes and recruits, fanned the flames of this resentment even further. Meanwhile, the limited resources of an ancient agrarian society couldn’t support all the bureaucrats and soldiers that the Qin needed to contain this resentment, and the low efficiency of its information technology made it impossible to control every town and village from distant Xianyang. Not surprisingly, in 209 BCE a series of revolts broke out, led by regional elites, disgruntled commoners, and even some of the empire’s own newly minted officials.
According to one account, the first serious revolt started when a group of conscripted peasants sent to work in a frontier zone were delayed by rain and flooding. They feared they would be executed for this dereliction of duty, and felt they had nothing to lose. They were quickly joined by numerous other rebels. Just fifteen years after reaching the apogee of power, the Qin Empire collapsed under the weight of its totalitarian ambitions, splintering into eighteen kingdoms.
After several years of war, a new dynasty—the Han—reunited the empire. But the Han then adopted a more realistic, less draconian attitude. Han emperors were certainly autocratic, but they were not totalitarian. They did not recognize any limits on their authority, but they did not try to micromanage everyone’s lives. Instead of following Legalist ideas of surveillance and control, the Han turned to Confucian ideas of encouraging people to act loyally and responsibly out of inner moral convictions. Like their contemporaries in the Roman Empire, Han emperors sought to control only some aspects of society from the center, while leaving considerable autonomy to provincial aristocrats and local communities. Due largely to the limitations imposed by the available information technology, premodern large-scale polities like the Roman and Han Empires gravitated toward nontotalitarian autocracy.[70] Full-blown totalitarianism might have been dreamed about by the likes of the Qin, but its implementation had to wait for the development of modern technology.
Just as modern technology enabled large-scale democracy, it also made large-scale totalitarianism possible. Beginning in the nineteenth century, the rise of industrial economies allowed governments to employ many more administrators, and new information technologies—such as the telegraph and radio—made it possible to quickly connect and supervise all these administrators. This facilitated an unprecedented concentration of information and power, for those who dreamed about such things.
When the Bolsheviks seized control of Russia after the 1917 revolution, they were driven by exactly such a dream. The Bolsheviks craved unlimited power because they believed they had a messianic mission. Marx taught that for millennia, all human societies were dominated by corrupt elites who oppressed the people. The Bolsheviks claimed they knew how to finally end all oppression and create a perfectly just society on earth. But to do so, they had to overcome numerous enemies and obstacles, which, in turn, required all the power they could get. They refused to countenance any self-correcting mechanisms that might question either their vision or their methods. Like the Catholic Church, the Bolshevik party was convinced that though its individual members might err, the party itself was always right. Belief in their own infallibility led the Bolsheviks to destroy Russia’s nascent democratic institutions—like elections, independent courts, the free press, and opposition parties—and to create a one-party totalitarian regime. Bolshevik totalitarianism did not start with Stalin. It was evident from the very first days of the revolution. It stemmed from the doctrine of party infallibility, rather than from the personality of Stalin.
In the 1930s and 1940s, Stalin perfected the totalitarian system he inherited. The Stalinist network was composed of three main branches. First, there was the governmental apparatus of state ministries, regional administrations, and regular Red Army units, which in 1939 comprised 1.6 million civilian officials[71] and 1.9 million soldiers.[72] Second, there was the apparatus of the Communist Party of the Soviet Union and its ubiquitous party cells, which in 1939 included 2.4 million party members.[73] Third, there was the secret police: first known as the Cheka, in Stalin’s days it was called the OGPU, NKVD, and MGB, and after Stalin’s death it morphed into the KGB. Its post-Soviet successor organization has been known since 1995 as the FSB. In 1937, the NKVD had 270,000 agents and millions of informers.[74]
The three branches operated in parallel. Just as democracy is maintained by having overlapping self-correcting mechanisms that keep each other in check, modern totalitarianism created overlapping surveillance mechanisms that keep each other in order. The governor of a Soviet province was constantly watched by the local party commissar, and neither of them knew who among their staff was an NKVD informer. A testimony to the effectiveness of the system is that modern totalitarianism largely solved the perennial problem of premodern autocracies—revolts by provincial subordinates. While the U.S.S.R. had its share of court coups, not once did a provincial governor or a Red Army front commander rebel against the center.[75] Much of the credit for that goes to the secret police, which kept a close eye on the mass of citizens, on provincial administrators, and even more so on the party and the Red Army.
While in most polities throughout history the army wielded enormous political power, in twentieth-century totalitarian regimes the regular army ceded much of its clout to the secret police—the information army. In the U.S.S.R., the Cheka, OGPU, NKVD, and KGB lacked the firepower of the Red Army, but had more influence in the Kremlin and could terrorize and purge even the army brass. The East German Stasi and the Romanian Securitate were similarly stronger than the regular armies of their countries.[76] In Nazi Germany, the SS was more powerful than the Wehrmacht, and the SS chief, Heinrich Himmler, was higher up the pecking order than Wilhelm Keitel, chief of the Wehrmacht high command.
In none of these cases could the secret police defeat the regular army in traditional warfare, of course; what made the secret police powerful was its command of information. It had the information necessary to preempt a military coup and to arrest the commanders of tank brigades or fighter squadrons before they knew what hit them. During the Stalinist Great Terror of the late 1930s, out of 144,000 Red Army officers about 10 percent were shot or imprisoned by the NKVD. This included 154 of 186 divisional commanders (83 percent), eight of nine admirals (89 percent), thirteen of fifteen full generals (87 percent), and three of five marshals (60 percent).[77]
The party leadership fared just as badly. Of the revered Old Bolsheviks, people who joined the party before the 1917 revolution, about a third didn’t survive the Great Terror.[78] Of the thirty-three men who served on the Politburo between 1919 and 1938, fourteen were shot (42 percent). Of the 139 members and candidate members of the party’s Central Committee in 1934, 98 (70 percent) were shot. Only 2 percent of the delegates who took part in the Seventeenth Party Congress in 1934 evaded execution, imprisonment, expulsion, or demotion, and attended the Eighteenth Party Congress in 1939.[79]
The secret police—which did all the purging and killing—was itself divided into several competing branches that closely watched and purged one another. Genrikh Yagoda, the NKVD head who orchestrated the beginning of the Great Terror and supervised the killing of hundreds of thousands of victims, was executed in 1938 and replaced by Nikolai Yezhov. Yezhov lasted for two years, killing and imprisoning millions of people before being executed in 1940.
Perhaps most telling is the fate of the thirty-nine people who in 1935 held the rank of general in the NKVD (called commissars of state security in Soviet nomenclature). Thirty-five of them (90 percent) were arrested and shot by 1941, one was assassinated, and one—the head of the NKVD’s Far East regional office—saved himself by defecting to Japan, but was killed by the Japanese in 1945. Of the original cohort of thirty-nine NKVD generals, only two men were left standing by the end of World War II. The remorseless logic of totalitarianism eventually caught up with them too. During the power struggles that followed Stalin’s death in 1953, one of them was shot, while the other was consigned to a psychiatric hospital, where he died in 1960.[80] Serving as an NKVD general in Stalin’s day was one of the most dangerous jobs in the world. At a time when American democracy was improving its many self-correcting mechanisms, Soviet totalitarianism was refining its triple self-surveilling and self-terrorizing apparatus.
Totalitarian regimes are based on controlling the flow of information and are suspicious of any independent channels of information. When military officers, state officials, or ordinary citizens exchange information, they can build trust. If they come to trust one another, they can organize resistance to the regime. Therefore, a key tenet of totalitarian regimes is that wherever people meet and exchange information, the regime should be there too, to keep an eye on them. In the 1930s, this was one principle that Hitler and Stalin shared.
On March 31, 1933, two months after Hitler became chancellor, the Nazis passed the Coordination Act (Gleichschaltungsgesetz). This stipulated that by April 30, 1933, all political, social, and cultural organizations throughout Germany—from municipalities to football clubs and local choirs—must be run according to Nazi ideology, as organs of the Nazi state. It upended life in every city and hamlet in Germany.
For example, in the small Alpine village of Oberstdorf, the democratically elected municipal council met for the last time on April 21, 1933, and three days later it was replaced by an unelected Nazi council that appointed a Nazi mayor. Since the Nazis alone allegedly knew what the people really wanted, who other than Nazis could implement the people’s will? Oberstdorf also had about fifty associations and clubs, ranging from a beekeeping society to an alpinist club. They all had to conform to the Coordination Act, adjusting their boards, membership, and statutes to Nazi demands, hoisting the swastika flag, and concluding every meeting with the “Horst Wessel Song,” the Nazi Party’s anthem. On April 6, 1933, the Oberstdorf fishing society banned Jews from its ranks. None of the thirty-two members was Jewish, but they felt they had to prove their Aryan credentials to the new regime.[81]
Things were even more extreme in Stalin’s U.S.S.R. Whereas the Nazis still allowed church organizations and private businesses some partial freedom of action, the Soviets made no exceptions. By 1928, when the first Five-Year Plan was launched, there were government officials, party functionaries, and secret police informants in every neighborhood and village, and between them they controlled every aspect of life: all businesses from power plants to cabbage farms; all newspapers and radio stations; all universities, schools, and youth groups; all hospitals and clinics; all voluntary and religious organizations; all sporting and scientific associations; all parks, museums, and cinemas.
If a dozen people came together to play football, hike in the woods, or do some charity work, the party and the secret police had to be there too, represented by the local party cell or NKVD agent. The speed and efficiency of modern information technology meant that all these party cells and NKVD agents were always just a telegram or phone call away from Moscow. Information about suspicious persons and activities was fed into a countrywide, cross-referenced system of card catalogs. Known as kartoteki, these catalogs contained information from work records, police files, residence cards, and other forms of social registration and, by the 1930s, had become the primary mechanism for surveilling and controlling the Soviet population.[82]
This made it feasible for Stalin to seek control over the totality of Soviet life. One crucial example was the campaign to collectivize Soviet farming. For centuries, economic, social, and private life in the thousands of villages of the sprawling Tsarist Empire was managed by several traditional institutions: the local commune, the parish church, the private farm, the local market, and above all the family. In the mid-1920s, the Soviet Union was still an overwhelmingly agrarian economy. About 82 percent of the total population lived in villages, and 83 percent of the workforce was engaged in farming.[83] But if each peasant family made its own decisions about what to grow, what to buy, and how much to charge for its produce, Moscow officials had little ability to plan and control social and economic activity themselves. What if the officials decided on a major agrarian reform, but the peasant families rejected it? So when in 1928 the Soviets came up with their first Five-Year Plan for the development of the Soviet Union, the most important item on the agenda was to collectivize farming.
The idea was that in every village all the families would join a kolkhoz—a collective farm. They would hand over to the kolkhoz all their property—land, houses, horses, cows, shovels, pitchforks. They would work together for the kolkhoz, and in return the kolkhoz would provide for all their needs, from housing and education to food and health care. The kolkhoz would also decide—based on orders from Moscow—whether they should grow cabbages or turnips; whether to invest in a tractor or a school; and who would work in the dairy farm, the tannery, and the clinic. The result, thought the Moscow masterminds, would be the first perfectly just and equal society in human history.
They were similarly convinced of the economic advantages of their proposed system, thinking that the kolkhoz would enjoy economies of scale. For example, when every peasant family had but a small strip of land, it made little sense to buy a tractor to plow it, and in any case most families couldn’t afford a tractor. Once all land was held communally, it could be cultivated far more efficiently using modern machinery. In addition, the kolkhoz was supposed to benefit from the wisdom of modern science. Instead of every peasant deciding on production methods on the basis of old traditions and groundless superstitions, state experts with university degrees from institutions like the Lenin All-Union Academy of Agricultural Sciences would make the crucial decisions.
To the planners in Moscow, it sounded wonderful. They expected a 50 percent increase in agricultural production by 1931.[84] And if in the process the old village hierarchies and inequalities were bulldozed, all the better. To most peasants, however, it sounded terrible. They didn’t trust the Moscow planners or the new kolkhoz system. They did not want to give up their old way of life or their private property. Villagers slaughtered cows and horses instead of handing them to the kolkhoz. Their motivation to work dwindled. People made less effort plowing fields that belonged to everyone than plowing fields that belonged to their own family. Passive resistance was ubiquitous, sometimes flaring into violent clashes. Whereas Soviet planners expected to harvest ninety-eight million tons of grain in 1931, production was only sixty-nine million, according to official data, and might have been as low as fifty-seven million tons in reality. The 1932 harvest was even worse.[85]
The state reacted with fury. Between 1929 and 1936, food confiscation, government neglect, and man-made famines (resulting from government policy rather than a natural disaster) claimed the lives of between 4.5 and 8.5 million people.[86] Millions of additional peasants were declared enemies of the state and deported or imprisoned. The most basic institutions of peasant life—the family, the church, the local community—were terrorized and dismantled. In the name of justice, equality, and the will of the people, the collectivization campaign annihilated anything that stood in its way. In the first two months of 1930 alone, about 60 million peasants in more than 100,000 villages were herded into collective farms.[87] In June 1929, only 4 percent of Soviet peasant households had belonged to collective farms. By March 1930 the figure had risen to 57 percent. By April 1937, 97 percent of households in the countryside had been confined to the 235,000 Soviet collective farms.[88] In just seven years, then, a way of life that had existed for centuries had been replaced by the totalitarian brainchild of a few Moscow bureaucrats.
It is worthwhile to delve a little deeper into the history of Soviet collectivization. For it was a tragedy that bears some resemblance to earlier catastrophes in human history—like the European witch-hunt craze—and at the same time foreshadows some of the biggest dangers posed by twenty-first-century technology and its faith in supposedly scientific data.
When their efforts to collectivize farming encountered resistance and led to economic disaster, Moscow bureaucrats and mythmakers took a page from Kramer’s Hammer of the Witches. I don’t wish to imply that the Soviets actually read the book, but they too invented a global conspiracy and created an entire nonexistent category of enemies. In the 1930s Soviet authorities repeatedly blamed the disasters afflicting the Soviet economy on a counterrevolutionary cabal whose chief agents were the “kulaks,” or capitalist farmers. Just as in Kramer’s imagination witches serving Satan conjured hailstorms that destroyed crops, so in the Stalinist imagination kulaks beholden to global capitalism sabotaged the Soviet economy.
In theory, kulaks were an objective socioeconomic category, defined by analyzing empirical data on things like property, income, capital, and wages. Soviet officials could allegedly identify kulaks by counting things. If most people in a village had only one cow, then the few families who had three cows were considered kulaks. If most people in a village didn’t hire any labor, but one family hired two workers during harvest time, this was a kulak family. Being a kulak meant not only that you possessed a certain amount of property but also that you possessed certain personality traits. According to the supposedly infallible Marxist doctrine, people’s material conditions determined their social and spiritual character. Since kulaks allegedly engaged in capitalist exploitation, it was a scientific fact (according to Marxist thinking) that they were greedy, selfish, and unreliable—and so were their children. Discovering that someone was a kulak ostensibly revealed something profound about their fundamental nature.
On December 27, 1929, Stalin declared that the Soviet state should seek “the liquidation of the kulaks as a class,”[89] and immediately galvanized the party and the secret police to realize that ambitious and murderous aim. Early modern European witch-hunters worked in autocratic societies that lacked modern information technology; therefore, it took them three centuries to kill fifty thousand alleged witches. In contrast, Soviet kulak hunters were working in a totalitarian society that had at its disposal technologies such as telegraphs, trains, telephones, and radios—as well as a sprawling bureaucracy. They decided that two years would suffice to “liquidate” millions of kulaks.[90]
Soviet officials began by assessing how many kulaks there must be in the U.S.S.R. Based on existing data—such as tax records, employment records, and the 1926 Soviet census—they decided that kulaks constituted 3–5 percent of the rural population.[91] On January 30, 1930, just one month after Stalin’s speech, a Politburo decree translated his vague vision into a much more detailed plan of action. The decree included target numbers for the liquidation of kulaks in each major agricultural region.[92] Regional authorities then made their own estimates of the number of kulaks in each county under their jurisdiction. Eventually, specific quotas were assigned to rural soviets (local administrative units, typically comprising a handful of villages). Often, local officials inflated the numbers along the way, to prove their zeal. Each rural soviet then had to identify the stated number of kulak households in the villages under its purview. These people were expelled from their homes, and—according to the administrative category to which they belonged—resettled elsewhere, incarcerated in concentration camps, or condemned to death.[93]
How exactly did Soviet officials tell who was a kulak? In some villages, local party members made a conscientious effort to identify kulaks by objective measures, such as the amount of property they owned. It was often the most hardworking and efficient farmers who were stigmatized and expelled. In some villages local communists used the opportunity to get rid of their personal enemies. Some villages simply drew lots to decide who would be considered a kulak. Other villages held communal meetings to vote on the matter and often chose isolated farmers, widows, old people, and other “expendables” (exactly the sorts of people who in early modern Europe were most likely to be branded witches).[94]
The absurdity of the entire operation is manifested in the case of the Streletsky family from the Kurgan region of Siberia. Dmitry Streletsky, who was then a teenager, recalled years later how his family was branded kulaks and selected for liquidation. “Serkov, the chairman of the village Soviet who deported us, explained: ‘I have received an order [from the district party committee] to find 17 kulak families for deportation. I formed a Committee of the Poor and we sat through the night to choose the families. There is no one in the village who is rich enough to qualify, and not many old people, so we simply chose the 17 families. You were chosen. Please don’t take it personally. What else could I do?’ ”[95] If anyone dared object to the madness of the system, they were promptly denounced as kulaks and counterrevolutionaries and would themselves be liquidated.
Altogether, some five million kulaks would be expelled from their homes by 1933. As many as thirty thousand heads of households were shot. The more fortunate victims were resettled in their district of origin or became vagrant workers in the big cities, while about two million were either exiled to remote inhospitable regions or incarcerated as state slaves in labor camps.[96] Numerous important and notorious state projects—such as the construction of the White Sea Canal and the development of mines in the Arctic regions—were accomplished with the labor of millions of prisoners, many of them kulaks. It was one of the fastest and largest enslavement campaigns in human history.[97] Once branded a kulak, a person could not get rid of the stigma. Government agencies, party organs, and secret police documents recorded who was a kulak in a labyrinthine system of kartoteki catalogs, archives, and internal passports.
Kulak status even passed to the next generation, with devastating consequences. Kulak children were refused entrance to communist youth groups, the Red Army, universities, and prestigious areas of employment.[98] In her 1997 memoirs, Antonina Golovina recalled how her family was deported from its ancestral village as kulaks and sent to live in the town of Pestovo. The boys in her new school regularly taunted her. On one occasion, a senior teacher told the eleven-year-old Antonina to stand up in front of all the other children, and began abusing her mercilessly, shouting that “her sort” were “enemies of the people, wretched kulaks! You certainly deserved to be deported, I hope you’re all exterminated!” Antonina wrote that this was the defining moment of her life. “I had this feeling in my gut that we [kulaks] were different from the rest, that we were criminals.” She never got over it.[99]
Like the ten-year-old “witch” Hansel Pappenheimer, the eleven-year-old “kulak” Antonina Golovina found herself cast into an intersubjective category invented by human mythmakers and imposed by ubiquitous bureaucrats. The mountain of information collected by Soviet bureaucrats about the kulaks wasn’t the objective truth about them, but it imposed a new intersubjective Soviet truth. That someone was labeled a kulak was a very important thing to know about a Soviet person, even though the label was entirely bogus.
The Stalinist regime would go on to attempt something even more ambitious than the mass dismantling of private family farms. It set out to dismantle the family itself. Unlike Roman emperors or Russian tsars, Stalin tried to insert himself even into the most intimate human relationships, coming between parents and children. Family ties were considered the bedrock of corruption, inequality, and antiparty activities. Soviet children were therefore taught to worship Stalin as their real father and to inform on their biological parents if they criticized Stalin or the Communist Party.
Starting in 1932, the Soviet propaganda machine created a veritable cult around the figure of Pavlik Morozov—a thirteen-year-old boy from the Siberian village of Gerasimovka. In autumn 1931, Pavlik informed the secret police that his father, Trofim—the chairman of the village soviet—was selling false papers to kulak exiles. During the subsequent trial, when Trofim shouted to Pavlik, “It’s me, your father,” the boy retorted, “Yes, he used to be my father, but I no longer consider him my father.” Trofim was sent to a labor camp and later shot. In September 1932, Pavlik was found murdered, and Soviet authorities arrested and executed five of his family members, who allegedly killed him in revenge for the denunciation. The real story was far more complicated, but it didn’t matter to the Soviet press. Pavlik became a martyr, and millions of Soviet children were taught to emulate him.[100] Many did.
For example, in 1934 a thirteen-year-old boy called Pronia Kolibin told the authorities that his hungry mother stole grain from the kolkhoz fields. His mother was arrested and presumably shot. Pronia was rewarded with a cash prize and a lot of positive media attention. The party organ Pravda published a poem Pronia wrote. Two of its lines read, “You are a wrecker, Mother / I can live with you no more.”[101]
The Soviet attempt to control the family was reflected in a dark joke told in Stalin’s day. Stalin visits a factory undercover, and conversing with a worker, he asks the man, “Who is your father?”
“Stalin,” replies the worker.
“Who is your mother?”
“The Soviet Union,” the man responds.
“And what do you want to be?”
“An orphan.”[102]
At the time you could easily lose your liberty or your life for telling this joke, even if you told it in your own home to your closest family members. The most important lesson Soviet parents taught their children wasn’t loyalty to the party or to Stalin. It was “keep your mouth shut.”[103] Few things in the Soviet Union were as dangerous as holding an open conversation.
You may wonder whether modern totalitarian institutions like the Nazi Party or the Soviet Communist Party were really all that different from earlier institutions like the Christian churches. After all, churches too believed in their infallibility, had priestly agents everywhere, and sought to control people’s daily lives down to their diet and sexual habits. Shouldn’t we see the Catholic Church or the Eastern Orthodox Church as totalitarian institutions? And doesn’t this undermine the thesis that totalitarianism was made possible only by modern information technology?
There are, however, several major differences between modern totalitarianism and premodern churches. First, as noted earlier, modern totalitarianism has worked by deploying several overlapping surveillance mechanisms that keep one another in line. The party is never alone; it works alongside state organs, on the one side, and the secret police, on the other. In contrast, in most medieval European kingdoms the Catholic Church was an independent institution that often clashed with state institutions instead of reinforcing them. Consequently, the church was perhaps the most important check on the power of European autocrats.
For example, when in the “Investiture Controversy” of the 1070s King Henry IV of Germany and Italy asserted that he had the final say on the appointment of bishops, abbots, and other church officials, Pope Gregory VII mobilized resistance and eventually forced the king to surrender. On January 25, 1077, Henry reached Canossa castle, where the pope was lodging, to offer his submission and apology. The pope refused to open the gates, and Henry waited in the snow outside, barefoot and hungry. After three days, the pope finally opened the gates to the king, who begged forgiveness.[104]
An analogous clash in a modern totalitarian country is unthinkable. The whole idea of totalitarianism is to prevent any separation of powers. In the Soviet Union, state and party reinforced each other, and Stalin was the de facto head of both. There could be no Soviet “Investiture Controversy,” because Stalin had the final say on all appointments to both party and state positions. He decided both who would be first secretary of the Communist Party of Georgia and who would be foreign minister of the Soviet Union.
Another important difference is that medieval churches tended to be traditionalist organizations that resisted change, while modern totalitarian parties have tended to be revolutionary organizations demanding change. A premodern church built its power gradually by developing its structure and traditions over centuries. A king or a pope who wanted to swiftly revolutionize society was therefore likely to encounter stiff resistance from church members and ordinary believers.
For example, in the eighth and ninth centuries a series of Byzantine emperors sought to forbid the veneration of icons, which seemed to them idolatrous. They pointed to many passages in the Bible, most notably the Second Commandment, which forbade the making of graven images. While Christian churches traditionally interpreted the Second Commandment in a way that allowed the veneration of icons, emperors like Constantine V argued that this was a mistake and that disasters like Christian defeats by the armies of Islam were due to God’s wrath over the worship of icons. In 754 more than three hundred bishops assembled at the Council of Hieria to support Constantine’s iconoclastic position.
Compared with Stalin’s collectivization campaign, this was a minor reform. Families and villages were required to give up their icons, but not their private property or their children. Yet Byzantine iconoclasm met with widespread resistance. Unlike the participants in the Council of Hieria, many ordinary priests, monks, and believers were deeply attached to their icons. The resulting struggle ripped apart Byzantine society until the emperors conceded defeat and reversed course.[105] Constantine V was later vilified by Byzantine historians as “Constantine the Shitty” (Koprónimos), and a story was spread about him that he defecated during his baptism.[106]
Unlike premodern churches, which developed slowly over many centuries and therefore tended to be conservative and suspicious of rapid changes, modern totalitarian parties like the Nazi Party and the Soviet Communist Party were organized within a single generation around the promise to quickly revolutionize society. They didn’t have centuries-old traditions and structures to defend. When their leaders conceived some ambitious plan to smash existing traditions and structures, party members typically fell in line.
Perhaps most important of all, premodern churches could not become tools of totalitarian control because they themselves suffered from the same limitations as all other premodern organizations. While they had local agents everywhere, in the shape of parish priests, monks, and itinerant preachers, the difficulty of transmitting and processing information meant that church leaders knew little about what was going on in remote communities, and local priests had a large degree of autonomy. Consequently, churches tended to be local affairs. People in every province and village often venerated local saints, upheld local traditions, performed local rites, and might even have had local doctrinal ideas that differed from the official line.[107] If the pope in Rome wanted to do something about an independent-minded priest in a remote Polish parish, he had to send a letter to the archbishop of Gniezno, who had to instruct the relevant bishop, who had to send someone to intervene in the parish. That might take months, and there was ample opportunity for the archbishop, bishop, and other intermediaries to reinterpret or even “mislay” the pope’s orders.[108]
Churches became more totalitarian institutions only in the late modern era, when modern information technologies became available. We tend to think of popes as medieval relics, but actually they are masters of modern technology. In the eighteenth century, the pope had little control over the worldwide Catholic Church and was reduced to the status of a local Italian princeling, fighting other Italian powers for control of Bologna or Ferrara. With the advent of radio, the pope became one of the most powerful people on the planet. Pope John Paul II could sit in the Vatican and speak directly to millions of Catholics from Poland to the Philippines, without any archbishop, bishop, or parish priest able to twist or hide his words.[109]
We see then that the new information technology of the late modern era gave rise to both large-scale democracy and large-scale totalitarianism. But there were crucial differences between how the two systems used information technology. As noted earlier, democracy encourages information to flow through many independent channels rather than only through the center, and it allows many independent nodes to process the information and make decisions by themselves. Information freely circulates between private businesses, private media organizations, municipalities, sports associations, charities, families, and individuals—without ever passing through the office of a government minister.
In contrast, totalitarianism wants all information to pass through the central hub and doesn’t want any independent institutions making decisions on their own. True, totalitarianism does have its tripartite apparatus of government, party, and secret police. But the whole point of this parallel apparatus is to prevent the emergence of any independent power that might challenge the center. When government officials, party members, and secret police agents constantly keep tabs on one another, opposing the center is extremely dangerous.
As contrasting types of information networks, democracy and totalitarianism both have their advantages and disadvantages. The biggest advantage of the centralized totalitarian network is that it is extremely orderly, which means it can make decisions quickly and enforce them ruthlessly. Especially during emergencies like wars and epidemics, centralized networks can move much faster and farther than distributed networks.
But hyper-centralized information networks also suffer from several big disadvantages. Since they don’t allow information to flow anywhere except through the official channels, if the official channels are blocked, the information cannot find an alternative means of transmission. And official channels are often blocked.
One common reason why official channels might be blocked is that fearful subordinates hide bad news from their superiors. In The Good Soldier Švejk—a satirical novel about the Austro-Hungarian Empire during World War I—Jaroslav Hašek describes how the Austrian authorities were worried about waning morale among the civilian population. They therefore bombarded local police stations with orders to hire informers, collect data, and report to headquarters on the population’s loyalty. To be as scientific as possible, headquarters invented an ingenious loyalty scale: I.a, I.b, I.c; II.a, II.b, II.c; III.a, III.b, III.c; IV.a, IV.b, IV.c. They sent the local police stations detailed explanations of each grade, along with an official form that had to be filled out daily. Police sergeants across the country dutifully filled out the forms and sent them back to headquarters. Without exception, they always reported a I.a morale level; to do otherwise was to invite rebuke, demotion, or worse.[110]
Another common reason why official channels fail to pass on information is the desire to preserve order. Because the chief aim of totalitarian information networks is to produce order rather than discover truth, totalitarian regimes often suppress alarming information that threatens to undermine the social order. It is relatively easy for them to do so, because they control all the information channels.
For example, when the Chernobyl nuclear reactor exploded on April 26, 1986, Soviet authorities suppressed all news of the disaster. Both Soviet citizens and foreign countries were kept oblivious to the danger, and so took no steps to protect themselves from radiation. When some Soviet officials at Chernobyl and in the nearby town of Pripyat requested permission to evacuate nearby population centers immediately, their superiors’ chief concern was to prevent the spread of alarming news, so they not only forbade evacuation but also cut the phone lines and warned employees at the nuclear facility not to talk about the disaster.
Two days after the explosion, Swedish scientists noticed that radiation levels in Sweden, more than twelve hundred kilometers from Chernobyl, were abnormally high. Only after Western governments and the Western press broke the news did the Soviets acknowledge that anything was amiss. Even then they continued to hide from their own citizens the full magnitude of the catastrophe and hesitated to request advice and assistance from abroad. Millions of people in Ukraine, Belarus, and Russia paid with their health. When the Soviet authorities later investigated the disaster, their priority was to deflect blame rather than understand the causes and prevent future accidents.[111]
In 2019, I went on a tour of Chernobyl. The Ukrainian guide who explained what led to the nuclear accident said something that stuck in my mind. “Americans grow up with the idea that questions lead to answers,” he said. “But Soviet citizens grew up with the idea that questions lead to trouble.”
Naturally, leaders of democratic countries also don’t relish bad news. But in a distributed democratic network, when official lines of communication are blocked, information flows through alternative channels. For example, if an American official decides against telling the president about an unfolding disaster, that news might nevertheless be published by The Washington Post, and if The Washington Post too deliberately withholds the information, The Wall Street Journal or The New York Times will break the story. The business model of independent media—forever chasing the next scoop—all but guarantees publication.
When a severe accident occurred at the Three Mile Island nuclear reactor in Pennsylvania on March 28, 1979, the news spread quickly without any need for international intervention. The accident began around 4:00 a.m. and was noticed by 6:30 a.m. An emergency was declared at the facility at 6:56 a.m., and at 7:02 a.m. the accident was reported to the Pennsylvania Emergency Management Agency. During the following hour the governor of Pennsylvania, the lieutenant governor, and the civil defense authorities were informed. An official press conference was scheduled for 10:00 a.m. However, a traffic reporter at a local Harrisburg radio station picked up a police notice about the events, and the station aired a brief report at 8:25 a.m. In the U.S.S.R. such an initiative by an independent radio station was unthinkable, but in the United States it was unremarkable. By 9:00 a.m. the Associated Press had issued a bulletin. Though it took days for the full details to emerge, American citizens learned about the accident two hours after it was first noticed. Subsequent investigations by government agencies, NGOs, academics, and the press uncovered not just the immediate causes of the accident but also its deeper structural causes, which helped improve the safety of nuclear technology worldwide. Indeed, some of the lessons of Three Mile Island, which were openly shared even with the Soviets, contributed to mitigating the Chernobyl disaster.[112]
Totalitarian and authoritarian networks face other problems besides blocked arteries. First and foremost, as we have already established, their self-correcting mechanisms tend to be very weak. Since they believe they are infallible, they see little need for such mechanisms, and since they are afraid of any independent institution that might challenge them, they lack independent courts, free media outlets, and autonomous research centers. Consequently, there is nobody to expose and correct the daily abuses of power that characterize all governments. The leader may occasionally proclaim an anticorruption campaign, but in nondemocratic systems such campaigns often turn out to be little more than a smoke screen for one regime faction to purge another.[113]
And what happens if the leader himself embezzles public funds or makes some disastrous policy mistake? Nobody can challenge the leader, and the leader—being a human being—is unlikely to admit any mistakes of his own accord. Instead, he is likely to blame all problems on “foreign enemies,” “internal traitors,” or “corrupt subordinates” and demand even more power in order to deal with the alleged malefactors.
For example, we mentioned in the previous chapter that Stalin adopted the bogus theory of Lysenkoism as the state doctrine on evolution. The results were catastrophic. The neglect of Darwinian models and the attempts by Lysenkoist agronomists to create super-crops set back Soviet genetic research by decades and undermined Soviet agriculture. Soviet experts who suggested abandoning Lysenkoism and accepting Darwinism risked the gulag or a bullet to the head. Lysenkoism’s legacy haunted Soviet science and agronomy long afterward and was one reason why, by the early 1970s, the U.S.S.R. ceased to be a major exporter of grain and became a net importer, despite its vast fertile lands.[114]
The same dynamic characterized many other fields of activity. For instance, during the 1930s Soviet industry suffered from numerous accidents. This was largely the fault of the Soviet bosses in Moscow, who set almost impossible industrialization goals and viewed any failure to achieve them as treason. In the effort to fulfill these goals, safety measures and quality-control checks were abandoned, and experts who advised prudence were often reprimanded or shot. The result was a wave of industrial accidents, dysfunctional products, and wasted effort. Instead of taking responsibility, Moscow concluded that this must be the handiwork of a global Trotskyite-imperialist conspiracy of saboteurs and terrorists bent on derailing the Soviet enterprise. Rather than slow down and adopt safety regulations, the bosses redoubled the terror and shot more people.
A famous case in point was Pavel Rychagov. He was one of the best and bravest Soviet pilots, leading missions to help the Republicans in the Spanish Civil War and the Chinese against the Japanese invasion. He quickly rose through the ranks, becoming commander of the Soviet air force in August 1940, at age twenty-nine. But the courage that helped Rychagov shoot down Nazi airplanes in Spain landed him in deep trouble in Moscow. The Soviet air force suffered from numerous accidents, which the Politburo blamed on lack of discipline and deliberate sabotage by anti-Soviet conspiracies. Rychagov, however, wouldn’t buy this official line. As a frontline pilot, he knew the truth. He flatly told Stalin that pilots were being forced to operate hastily designed and badly produced airplanes, which he compared to flying “in coffins.” Two days after Hitler invaded the Soviet Union, as the Red Army was collapsing and Stalin was desperately hunting for scapegoats, Rychagov was arrested for “being a member of an anti-Soviet conspiratorial organization and carrying out enemy work aimed at weakening the power of the Red Army.” His wife was also arrested, because she allegedly knew about his “Trotskyist ties with the military conspirators.” They were executed on October 28, 1941.[115]
The real saboteur who wrecked Soviet military efforts wasn’t Rychagov, of course, but Stalin himself. For years, Stalin feared that a clash to the death with Nazi Germany was likely and built the world’s biggest war machine to prepare for it. But he hamstrung this machine both diplomatically and psychologically.
On the diplomatic level, in 1939–41, Stalin gambled that he could goad the “capitalists” to fight and exhaust one another while the U.S.S.R. nurtured and even increased its power. He therefore made a pact with Hitler in 1939 and allowed the Germans to conquer much of Poland and western Europe, while the U.S.S.R. attacked or alienated almost all its neighbors. In 1939–40 the Soviets invaded and occupied eastern Poland; annexed Estonia, Latvia, and Lithuania; and conquered parts of Finland and Romania. Finland and Romania, which could have acted as neutral buffers on the U.S.S.R.’s flanks, consequently became implacable enemies. Even in the spring of 1941, Stalin still refused to make a preemptive alliance with Britain and made no move to hinder the Nazi conquest of Yugoslavia and Greece, thereby losing his last potential allies on the European continent. When Hitler struck on June 22, 1941, the U.S.S.R. was isolated.
In theory, the war machine Stalin built could have handled the Nazi onslaught even in isolation. The territories conquered since 1939 provided depth to Soviet defenses, and the Soviet military advantage seemed overwhelming. On the first day of the invasion the Soviets had 15,000 tanks, 15,000 warplanes, and 37,000 artillery pieces on the European front, facing 3,300 German tanks, 2,250 warplanes, and 7,146 guns.[116] But in one of history’s greatest military catastrophes, within a month the Soviets lost 11,700 tanks (78 percent), 10,000 warplanes (67 percent), and 19,000 artillery pieces (51 percent).[117] Stalin also lost all the territories he had conquered in 1939–40 and much of the Soviet heartland. By July 16 the Germans were in Smolensk, 370 kilometers from Moscow.
The causes of the debacle have been debated ever since 1941, but most scholars agree that a significant factor was the psychological toll of Stalinism. For years the regime terrorized its people, punished initiative and individuality, and encouraged submissiveness and conformity. This undermined the soldiers’ motivation. Especially in the first months of the war, before the horrors of Nazi rule became fully apparent, Red Army soldiers surrendered in huge numbers; between three and four million were taken captive by the end of 1941.[118] Even when they fought tenaciously, Red Army units suffered from a lack of initiative. Officers who had survived the purges were afraid to take independent action, while younger officers often lacked adequate training. Frequently starved of information and scapegoated for failures, commanders also had to cope with political commissars who could dispute their decisions. The safest course was to wait for orders from on high and then slavishly follow them, even when they made little military sense.[119]
Despite the disasters of 1941 and of the spring and summer of 1942, the Soviet state did not collapse, as Hitler hoped. As the Red Army and the Soviet leadership assimilated the lessons learned from the first year of struggle, the political center in Moscow loosened its hold. The power of political commissars was restricted, while professional officers were encouraged to assume greater responsibility and take more initiative.[120] Stalin also reversed his geopolitical mistakes of 1939–41 and allied the U.S.S.R. with Britain and the United States. Red Army initiative, Western assistance, and the realization of what Nazi rule would mean for the people of the U.S.S.R. turned the tide of the war.
Once victory was secured in 1945, however, Stalin initiated new waves of terror, purging more independent-minded officers and officials and again encouraging blind obedience.[121] Ironically, Stalin’s own death eight years later was partly the result of an information network that prioritized order and disregarded truth. In 1951–53 the U.S.S.R. experienced yet another witch hunt. Soviet mythmakers fabricated a conspiracy theory that Jewish doctors were systematically murdering leading regime members, under the guise of giving them medical care. The theory alleged that the doctors were the agents of a global American-Zionist plot, working in collaboration with traitors in the secret police. By early 1953 hundreds of doctors and secret police officials, including the head of the secret police himself, were arrested, tortured, and forced to name accomplices. The conspiracy theory—a Soviet twist on the Protocols of the Elders of Zion—merged with age-old blood-libel accusations, and rumors began circulating that Jewish doctors were not just murdering Soviet leaders but also killing babies in hospitals. Since a large proportion of Soviet doctors were Jews, people began fearing doctors in general.[122]
Just as the hysteria about “the doctors’ plot” was reaching its climax, Stalin had a stroke on March 1, 1953. He collapsed in his dacha, wet himself, and lay for hours in his soiled pajamas, unable to call for help. At around 10:30 p.m. a guard found the courage to enter the inner sanctum of world communism, where he discovered the leader on the floor. By 3:00 a.m. on March 2, Politburo members arrived at the dacha and debated what to do. For several hours more, nobody dared call a doctor. What if Stalin were to regain consciousness, and open his eyes only to see a doctor—a doctor!—hovering over his bed? He would surely think this was a plot to murder him and would have those responsible shot. Stalin’s personal physician wasn’t present, because he was at the time in a basement cell of the Lubyanka prison—undergoing torture for suggesting that Stalin needed more rest. By the time the Politburo members decided to bring in medical experts, the danger had passed. Stalin never woke up.[123]
You may conclude from this litany of disasters that the Stalinist system was totally dysfunctional. Its ruthless disregard for truth caused it not only to inflict terrible suffering on hundreds of millions of people but also to make colossal diplomatic, military, and economic errors and to devour its own leaders. However, such a conclusion would be misleading.
Any discussion of the abysmal failure of Stalinism in the early phase of World War II must take two complicating points into account. First, democratic countries like France, Norway, and the Netherlands made diplomatic errors at the time that were as great as those of the U.S.S.R., and their armies performed even worse. Second, the military machine that crushed the Red Army, the French army, the Dutch army, and numerous other armies was itself built by a totalitarian regime. So whatever conclusion we draw from the years 1939–41, it cannot be that totalitarian networks necessarily function worse than democratic ones. The history of Stalinism reveals many potential drawbacks of totalitarian information networks, but that should not blind us to their potential advantages.
When one considers the broader history of World War II and its outcome, it becomes evident that Stalinism was in fact one of the most successful political systems ever devised—if we define “success” purely in terms of order and power while disregarding all considerations of ethics and human well-being. Despite—or perhaps because of—its utter lack of compassion and its callous attitude to truth, Stalinism was singularly efficient at maintaining order on a gigantic scale. The relentless barrage of fake news and conspiracy theories helped to keep hundreds of millions of people in line. The collectivization of Soviet agriculture led to mass enslavement and starvation but also laid the foundations for the country’s rapid industrialization. Soviet disregard for quality control might have produced flying coffins, but it produced them in the tens of thousands, making up in quantity for what they lacked in quality. The decimation of Red Army officers during the Great Terror was a major reason for the army’s abysmal performance in 1941, but it was also a key reason why, despite the terrible defeats, nobody rebelled against Stalin. The Soviet military machine tended to crush its own soldiers alongside the enemy, but it eventually rumbled on to victory.
In the 1940s and early 1950s, many people throughout the world believed Stalinism was the wave of the future. It had won World War II, after all, raised the red flag over the Reichstag, ruled an empire that stretched from central Europe to the Pacific, fueled anticolonial struggles across the globe, and inspired numerous copycat regimes. It won admirers even among leading artists and thinkers in Western democracies, who believed that, notwithstanding the vague rumors about gulags and purges, Stalinism was humanity’s best shot at ending capitalist exploitation and creating a perfectly just society. Stalinism thus came close to world domination. It would be naive to assume that its disregard for truth doomed it to failure or that its ultimate collapse guarantees that such a system can never again arise. Information systems can reach far with just a little truth and a lot of order. Anyone who abhors the moral costs of systems like Stalinism cannot rely on their supposed inefficiency to derail them.
Once we learn to see democracy and totalitarianism as different types of information networks, we can understand why they flourish in certain eras and are absent in others. It is not just because people gain or lose faith in certain political ideals; it is also because of revolutions in information technologies. Of course, just as the printing press didn’t cause the witch hunts or the scientific revolution, so radio didn’t cause either Stalinist totalitarianism or American democracy. Technology only creates new opportunities; it is up to us to decide which ones to pursue.
Totalitarian regimes choose to use modern information technology to centralize the flow of information and to stifle truth in order to maintain order. As a consequence, they have to struggle with the danger of ossification. When more and more information flows to only one place, will it result in efficient control or in blocked arteries and, finally, a heart attack? Democratic regimes choose to use modern information technology to distribute the flow of information between more institutions and individuals and encourage the free pursuit of truth. They consequently have to struggle with the danger of fracturing. Like a solar system with more and more planets circling faster and faster, can the center still hold, or will things fall apart and anarchy prevail?
An archetypal example of the different strategies can be found in the contrasting histories of Western democracies and the Soviet bloc in the 1960s. This was an era when Western democracies relaxed censorship and various discriminatory policies that hampered the free spread of information. This made it easier for previously marginalized groups to organize, join the public conversation, and make political demands. The resulting wave of activism destabilized the social order. Hitherto, when a limited number of rich white men did almost all the talking, it was relatively easy to reach agreements. Once poor people, women, LGBTQ people, ethnic minorities, disabled people, and members of other historically oppressed groups gained a voice, they brought with them new ideas, opinions, and interests. Many of the old gentlemanly agreements consequently became untenable. For example, the Jim Crow segregation regime, upheld or at least tolerated by generations of both Democratic and Republican administrations in the United States, fell apart. Things that were considered sacrosanct, self-evident, and universally accepted—such as gender roles—became deeply controversial, and it was difficult to reach new agreements because there were many more groups, viewpoints, and interests to take into account. Just holding an orderly conversation was a challenge, because people couldn’t even agree on the rules of debate.
This caused much frustration both among the old guard and among the freshly empowered, who suspected that their newfound freedom of expression was hollow and that their political demands were not being fulfilled. Disappointed with words, some switched to guns. In many Western democracies, the 1960s were characterized not just by unprecedented disagreements but also by a surge of violence. Political assassinations, kidnappings, riots, and terror attacks multiplied. The murders of John F. Kennedy and Martin Luther King Jr., the riots following King’s assassination, and the wave of demonstrations, revolts, and armed clashes that swept the Western world in 1968 were just some of the more famous examples.[124] The images from Chicago or Paris in 1968 could easily have given the impression that things were falling apart. The pressure to live up to democratic ideals and to include more people and groups in the public conversation seemed to undermine the social order and make democracy unworkable.
Meanwhile, the regimes behind the Iron Curtain, which never promised inclusivity, continued stifling the public conversation and centralizing information and power. And it seemed to work. Though they did face some peripheral challenges, most notably the Hungarian revolt of 1956 and the Prague Spring of 1968, the communists dealt with these threats swiftly and decisively. In the Soviet heartland itself, everything was orderly.
Fast-forward twenty years, and it was the Soviet system that had become unworkable. The sclerotic gerontocrats on the podium in Red Square were a perfect emblem of a dysfunctional information network, lacking any meaningful self-correcting mechanisms. Decolonization, globalization, technological development, and changing gender roles led to rapid economic, social, and geopolitical changes. But the gerontocrats could not handle all the information streaming to Moscow, and since no subordinate was allowed much initiative, the entire system ossified and collapsed.
The failure was most obvious in the economic sphere. The overcentralized Soviet economy was slow to react to rapid technological developments and changing consumer wishes. Obeying commands from the top, the Soviet economy was churning out intercontinental missiles, fighter jets, and prestige infrastructure projects. But it was not producing what most people actually wanted to buy—from efficient refrigerators to pop music—and lagged behind in cutting-edge military technology.
Nowhere were its shortcomings more glaring than in the semiconductor sector, where technology developed at a particularly rapid pace. In the West, semiconductors were developed through open competition between numerous private companies like Intel and Toshiba, whose main customers were other private companies like Apple and Sony. The latter used microchips to produce civilian goods such as the Macintosh personal computer and the Walkman. The Soviets could never catch up with American and Japanese microchip production, because—as the American economic historian Chris Miller explained—the Soviet semiconductor sector was “secretive, top-down, oriented toward military systems, fulfilling orders with little scope for creativity.” The Soviets tried to close the gap by stealing and copying Western technology—which only guaranteed that they always remained several years behind.[125] Thus the first Soviet personal computer appeared only in 1984, at a time when Americans already owned eleven million PCs.[126]
Western democracies not only surged ahead technologically and economically but also succeeded in holding the social order together despite—or perhaps because of—widening the circle of participants in the political conversation. There were many hiccups, but the United States, Japan, and other democracies created a far more dynamic and inclusive information system, which made room for many more viewpoints without breaking down. It was such a remarkable achievement that many felt that the victory of democracy over totalitarianism was final. This victory has often been explained in terms of a fundamental advantage in information processing: totalitarianism didn’t work because trying to concentrate and process all the data in one central hub was extremely inefficient. At the beginning of the twenty-first century, it accordingly seemed that the future belonged to distributed information networks and to democracy.
This turned out to be wrong. In fact, the next information revolution was already gathering momentum, setting the stage for a new round in the competition between democracy and totalitarianism. Computers, the internet, smartphones, social media, and AI posed new challenges to democracy, giving a voice not only to more disenfranchised groups but to any human with an internet connection, and even to nonhuman agents. Democracies in the 2020s face the task, once again, of integrating a flood of new voices into the public conversation without destroying the social order. Things look as dire as they did in the 1960s, and there is no guarantee that democracies will pass the new test as successfully as they passed the previous one. Simultaneously, the new technologies also give fresh hope to totalitarian regimes that still dream of concentrating all the information in one hub. Yes, the old men on the podium in Red Square were not up to the task of orchestrating millions of lives from a single center. But perhaps AI can do it?
As humankind enters the second quarter of the twenty-first century, a central question is how well democracies and totalitarian regimes will handle both the threats and the opportunities resulting from the current information revolution. Will the new technologies favor one type of regime over the other, or will we see the world divided once again, this time by a Silicon Curtain rather than an iron one?
As in previous eras, information networks will struggle to find the right balance between truth and order. Some will opt to prioritize truth and maintain strong self-correcting mechanisms. Others will make the opposite choice. Many of the lessons learned from the canonization of the Bible, the early modern witch hunts, and the Stalinist collectivization campaign will remain relevant, and perhaps have to be relearned. However, the current information revolution also has some unique features, different from—and potentially far more dangerous than—anything we have seen before.
Hitherto, every information network in history relied on human mythmakers and human bureaucrats to function. Clay tablets, papyrus rolls, printing presses, and radio sets have had a far-reaching impact on history, but it always remained the job of humans to compose all the texts, interpret the texts, and decide who would be burned as a witch or enslaved as a kulak. Now, however, humans will have to contend with digital mythmakers and digital bureaucrats. The main split in twenty-first-century politics might be not between democracies and totalitarian regimes but rather between human beings and nonhuman agents. Instead of dividing democracies from totalitarian regimes, a new Silicon Curtain may separate all humans from our unfathomable algorithmic overlords. People in all countries and walks of life—including even dictators—might find themselves subservient to an alien intelligence that can monitor everything we do while we have little idea what it is doing. The rest of this book, then, is dedicated to exploring whether such a Silicon Curtain is indeed descending on the world, and what life might look like when computers run our bureaucracies and algorithms invent new mythologies.