Now that we have developed in the previous chapters a more detailed analysis of what nihilism is, we can begin to develop in this chapter an analysis of where nihilism is. Nihilism is not merely the denial that life is inherently meaningful, as nihilism can instead be seen as a particular way of responding to the anxiety caused by the discovery of life’s inherent meaninglessness. The nihilist does not despair like the pessimist, detest like the cynic, nor detach like the apathetic individual. Nihilists can be optimistic, idealistic, and sympathetic, as their aim in life is to be happy, to be as happy and carefree as they were when they were children, as happy and carefree as they were before they discovered that life lacked the meaning they thought they’d find in it when they grew up.
As we have seen, the nihilist’s way of responding to life’s meaninglessness cannot be properly understood if reduced to an individual affair. On the one hand, nihilism is like a disease, a contagious attitude that can quickly spread from individual to individual. On the other hand, nihilism is contagious because the nihilistic way of life is an outgrowth of a way of life that nihilists are born into and share with others.
Considering the danger of being surrounded by people who do not care about the consequences of their actions, we might expect that society would be actively engaged in combating nihilism. Yet while the label of “nihilist” is used in everyday life as a term of criticism, the logic of nihilism can nevertheless be found to be championed by various elements of society. Hence, the spread of nihilism might not only be due to the contagiousness of a nihilistic attitude among individuals but might also be due to cultural influences that encourage a nihilistic attitude and help to make it so contagious. And it is just such cultural influences that this chapter will explore, as nihilism can be found on TV, in the classroom, on the job, and in politics.
Given that nihilism results from the desire to become free from the anxiety of freedom, it should perhaps not surprise us that contemporary pop culture would embrace nihilism—for if we turn to pop culture in order to be entertained, to be comforted, to be distracted, then pop culture already shares with nihilism at the very least the aim of stress reduction. Yet what should concern us about pop culture is not whether it attracts nihilists, but whether it helps to motivate people to become nihilistic.
Parents have long worried about the corrupting influences of pop culture, such as whether watching television can make children dumber or whether playing video games can make children more violent. Such concerns tend to focus on the content of pop culture rather than on the devices through which we consume pop culture. One reason for this is that watching a screen has become so normal that we no longer question this way of spending our time. In other words, we ask people what they’re watching, but we don’t ask people why they’re watching.
Watching as a leisure activity has of course existed since long before the existence of screens. However, with screens, we no longer need to go somewhere to watch something. The philosopher Günther Anders argued in his 1956 essay “The World as Phantom and as Matrix” that radio and television were helping to create what he called “the mass man.”1 Radio and television programs fill the home with conversation, with the conversation of others, making conversation among those consuming these programs not only difficult but a nuisance. Moreover, television sets structure the layout of furniture so that everyone can watch, requiring that people don’t sit facing each other, but sit facing the screen. As the events consumed are recorded and replayed for consumption, radio and television not only make it unnecessary to leave the house in order to witness events but also lead to events being staged such that they can be recorded and replayed for consumption. So rather than providing us experiences of real life, radio and television provide us with a pseudo-reality (events staged for mass consumption) that we can pseudo-experience (consumption from our couches) in our pseudo-lives (consuming near others rather than being with others).
What is at issue for Anders is the way radio and television reshape what we think of as “experience,” what we think of as “communication,” and even what we think of as “intimacy.” We form relationships with characters in programs in ways that we do not form relationships with characters on a stage, as radio and television put characters in intimate proximity to us, making it seem as though they are talking to us, as though they are letting us into their homes just as we are letting them into ours. As the philosopher Theodor Adorno similarly argued in his 1954 essay “How to Look at Television,”2 such intimacy makes it easy for us to identify with television characters, particularly as they are portrayed in familiar settings, in familiar situations, in familiar conflicts. And yet though these characters may have jobs, families, and problems that resemble our own, their lives have a stability and security that in no way resembles our lives.
The sitcom family may get into trouble, but within 30 minutes (22 once you subtract the commercial breaks) the trouble will be resolved. Whatever disorder may have erupted to create both comic and dramatic tension will disappear, likely never to be brought up again, and order will be restored. Formulaic programming is thus deeply comforting, as it helps us to feel that situations that seem worth worrying about will work out for the best in the end, that the consequences of our actions don’t really matter. And it is of course precisely such need for comfort that brings us to screens to watch formulaic programming.
But that we know such programming is comforting does not mean that we know what else such programming may be doing to us. Adorno’s concern was that, along with our being comforted, such programs were also helping to induce in us a feeling of complacency, as what is comforting is specifically the return by the end of each episode to the status quo. Television therefore not only entertains us but also teaches us. Preservation of the status quo is good. Disruption of the status quo is bad.
Of course, in the age of prestige TV, it may seem that such analyses are outdated. Perhaps the programs that Anders and Adorno were watching in the 1950s were formulaic and reinforced a sense of complacency, but today’s TV shows are supposed to be complicated works of art made to resemble literature, not made to keep people staring at a screen between soap commercials. Yet while prestige programs like Breaking Bad, Mad Men, and Game of Thrones certainly do not use formulas that people who grew up watching TV would recognize, that doesn’t mean that these shows are not formulaic. Walter White kept breaking bad. Don Draper kept breaking promises. Lannisters kept breaking Starks. Prestige programs create their own formulas, formulas that the audience expects the show to adhere to, formulas that subsequent shows try to copy in order to piggyback off the original show’s success.
While these shows might not be selling soap, they are still in the business of keeping people staring at screens. Whereas traditional programming often tried to present a wholesome and idyllic dreamworld that was a better version of reality, contemporary programming often presents nightmare versions of reality meant to scare us from ever wanting to go outside again. In either version of programming what is important is the idea that screens offer an escape from reality, and reality is implicitly or explicitly presented as that from which we must escape.
In the era of binge-watching and handheld screens, the business of escapism—the business of keeping people staring at screens—is only getting more and more successful. Evading reality through staring at screens has become the status quo, the status quo that screens train us to conform to and to be complacent about, the status quo that we have come to accept as good to preserve and as bad to disrupt. Whereas previously staring at screens was treated as doing nothing, as being a “couch potato” in front of a “boob tube,” now, thanks to the aura of “prestige” and “critical acclaim,” staring at screens not only is considered to be doing something but is increasingly becoming the only way we know how to do anything.
Staring at screens has become so commonplace that when someone asks us to look up from our screens, we often get confused and angry. This may occur, for example, when a teacher asks students to put their phones away. Though, of course, when the teacher tells students to stop staring at screens and to pay attention in class, that often means the students just need to stop looking at their screens and instead look at the teacher’s screen, at the giant screen in front of the class that everyone can stare at together.
The screen may be a chalkboard or a PowerPoint presentation, but even in the classroom there is still the expectation that all eyes need to be on a screen. The teacher is thus like a television executive fighting against competing content on competing screens for the attention of the audience. And like a television executive, the teacher often uses well-worn formulas (e.g., the “Socratic method”) in order to present content in a way that will be most appealing to the audience and in a way that will be most easy for the audience to absorb.
The Brazilian philosopher, activist, and educator Paulo Freire was very concerned about the expectation that teachers should be content deliverers and students should be content absorbers. In his 1968 work Pedagogy of the Oppressed, Freire argues, “Education is suffering from narration sickness.”3 When teachers are trained to educate by talking at students, and students are trained to be educated by passively listening to teachers, learning is reduced to repetition. In order to be successful, students need only repeat back to the teacher what they have heard from the teacher and what they have read from the reading assigned by the teacher. It would thus not be surprising to Freire that we now frequently use phrases like “artificial intelligence,” “smart devices,” and “machine learning” to talk about technology, as students have been treated like machines for so long it would only be natural that we would start treating machines like students.
Freire uses the metaphor of a bank to describe what narration sickness does to learning. Students are expected to listen passively to teachers because it is taken for granted that teachers have information and students do not. Information is a form of currency, a currency that teachers deposit into the minds of students. Students are thus thought of as empty receptacles—or banks—waiting to be filled. Such a view gives rise to a teacher/student relationship that is necessarily hierarchical, as teachers are expected to be informed experts, and students are expected to be uninformed novices. This relationship gives rise to a dynamic where teachers are all-powerful and students are all powerless, as teachers have the power to instruct as they please and to punish as they please while students are left with no option but to obey or leave.
The issue here for Freire is not that it is wrong to think teachers have more information than students, but rather that it is wrong to think that education is merely a process of information exchange. So long as information is what is valued by society, so long as learning is only seen by society as a form of economic transaction, then it makes sense for schools to use a top-down model of education. What information means becomes increasingly irrelevant; all that matters is that information is possessed. And because the possession of information is all that matters, not only are students motivated to plagiarize but students even argue that plagiarism should be seen as legitimate since it’s the best way to guarantee that the information they possess is correct.
Of course, it is hard to refute student arguments that plagiarism is not wrong when the model of education in which they find themselves encourages an instrumental attitude toward learning. As we have seen, this instrumentality can be found not only in treating learning as a means to an end rather than as an end in itself but also in treating students and teachers as instruments of information storage and distribution. Learning is still spoken of in schools using the same hallowed language that has always surrounded it, giving it the aura of something intrinsically good. But because the language of learning does not match the practice of learning, students simply come to see education as hollow and school as a chore. And if students don’t reach such conclusions on their own, then schools provide standardized testing in order to guarantee that students see schooling as nothing but a process of standardizing human beings.
When students all have to learn the same information in the same way, creativity and diversity are discouraged while obedience and conformity are not only encouraged but required. As Freire writes,
It is not surprising that the banking concept of education regards men as adaptable, manageable beings. The more students work at storing the deposits entrusted to them, the less they develop the critical consciousness which would result from their intervention in the world as transformers of that world. The more completely they accept the passive role imposed on them, the more they tend simply to adapt to the world as it is and to the fragmented view of reality deposited in them. […] Indeed, the interests of the oppressors lie in “changing the consciousness of the oppressed, not the situation which oppresses them”; for the more the oppressed can be led to adapt to that situation, the more easily they can be dominated.4
As Freire argues, treating the goal of education as information regurgitation prevents students from developing the ability to think critically. While such a lack of critical development would seem like a failing of the banking model of education, Freire suggests that it should instead be seen as proof that this education system is working precisely as designed.
According to Freire, students are not learning how to criticize society but instead only how to conform to society. For this reason Freire argues that the banking model of education is an oppressive system designed by oppressors to teach the oppressed to accept their oppression. Freire’s argument against education is thus parallel to Nietzsche’s argument against morality. In both cases the problem is not that society is failing to create good citizens, but that what is meant by “good” is what is good for society rather than what is good for humans, for the people who have to live in the society sustained by these educational and moral systems.
The “narration sickness” identified by Freire can now be seen as the sickness Nietzsche identified as nihilism. The status quo is protected by valuing conformity as what is good, a value that is reinforced by receiving “good grades” for “good work.” Such moralistic language further reinforces conformity by entangling educational values with moral values, making clear to students that they should feel pride when they are obedient (“good student”) and guilt when they are disobedient (“bad student”). Students who plagiarize are thus often told that what they are doing is wrong because it runs counter to moral values, because plagiarizing is cheating, and cheating is something done by people who have a bad character.
The education system may be designed in such a way that it leads students to see plagiarism as perfectly reasonable given what is expected of them. But morality is designed in such a way that it protects the education system by making students feel personally responsible for plagiarism. Because the banking model of education prevents students from developing critical thinking, students are less likely to recognize the degree to which the education system itself is responsible for making plagiarism seem reasonable. And if they are able to question the education system, they are labeled as “troublemakers” and punished for having been “disrespectful” of their teachers. In other words, because what is currently valued as education has devalued education, morality is required to step in and fill the void, reinforcing hollowed out educational values (e.g., learning for the sake of learning) with moral values (e.g., learning for the sake of duty).
As Nietzsche warned, a society that only values its own survival, that only values the protection of the status quo, is a sick society, a society that creates good citizens but bad humans. Likewise Freire argues that the oppressiveness of the banking model of education is dehumanizing both for the oppressors and for the oppressed. For Freire, learning requires being able to have a dialogue, but to have a dialogue the participants must regard each other as equals in order to speak with each other rather than at each other. Reducing learning to the depositing of information prevents both teachers and students from being able to communicate with each other as equals rather than as merely the informed and the uninformed. Top-down information-centric education thus prevents human beings from being able to see each other as human beings, which in turn prevents both teachers and students from being able to genuinely learn from each other by being able to enter into dialogue with each other.
This lack of genuine learning is detrimental not only to students and to teachers but to society as a whole. While the banking model of education may be useful for preserving the status quo of society, it is detrimental for the future of society. As Nietzsche warned, social preservation merely creates social stagnation and, ultimately, social destruction. Education, like entertainment, can be used to make life easier and more stable, but if challenge and uncertainty are required for growth, then an easy and stable life is really just a slow and steady death. In other words, such a life is nihilistic.
A likely counterargument to Freire’s criticisms of the banking model of education is that he simply did not appreciate that the true purpose of education is to prepare students for the real world. The real world is top-down. The real world is information-centric. So if students were taught in the way that Freire advocated, taught to think of authority figures not as their superiors but as their equals, taught to be critical of conformity and to question rather than follow rules, they would simply be forced to suffer a rude awakening when they finished school and tried to get a job. In other words, the banking model of education isn’t nihilistic, it’s realistic.
Such a view of the relationship between education and employment is hard to argue against, as surely we need a particular kind of schooling to prepare students for the particular kind of working that they are most likely to enter into. But this argument of course raises the question of why we have come to accept this particular kind of working if it would require this particular kind of schooling. For if Freire is right that the way we teach students is dehumanizing and the counterargument is not that Freire is wrong but that we teach students in this manner because it prepares them for the real world, then this counterargument is really just another way of saying that the real world is dehumanizing and that we should just learn to accept it.
The idea that what we have come to accept as the real world is a world of dehumanizing work is an idea that was argued for most influentially by Karl Marx. In his essay “Alienated Labor,” an essay that Marx neither finished nor published, we can find the philosophical underpinnings of Marx’s criticisms of capitalism. In the essay Marx analyzes the various ways in which trying to make money can end up making us less human, and the various ways in which we become blinded by money so that we also end up caring more about the dream of being rich than the reality of not being human.
According to Marx, “labor” is a process by which we make objects, objects that we need in order to find out who we are. When a child builds a sandcastle and desperately tries to make her dad look up from his screen in order to see what she built, she’s trying to get her dad not just to appreciate the sandcastle but also to appreciate her. Or to be more specific, the dad’s appreciation of his daughter’s sandcastle is his appreciation of her. In building the sandcastle, she has put herself into the sandcastle, and so her dad’s judgment of what she made is his judgment of her. We make sandcastles to find out if we’re creative, we make jokes to find out if we’re funny, and we make conversation to find out if we’re interesting. We make things to find out who we are because we identify ourselves in and through what we make.
In a feudal society where goods were exchanged through barter, people would have come to know each other through each other’s labor. Making things is thus not only how we find out who we ourselves are but also how we find out who others are. Labor is then necessary not only for discovering one’s own identity but also for building a community. The relationship between labor, identity, and community can be seen perhaps most clearly in the prevalence of the name Smith. Blacksmiths, goldsmiths, silversmiths, and so forth were known to the rest of their communities for their smithing and so came to be known as Smith (or its etymological cousin in other languages, such as Schmidt, Kowalski, Kovac, Ferraro, Herrera, and Faber). Likewise we still have today other such common occupationally suggestive names as Abbott, Archer, Baker, Barber, Carpenter, Cook, Farmer, Fisher, Glazer, Glover, Hunter, Judge, Knight, Mason, Painter, Shepherd, Tanner, and Taylor, to name a few. In other words, labor is so fundamental to identity that the labor of one’s ancestors can continue to define one’s family for generations.
It is due to this essential relationship between identity and labor that Marx focuses his attention on the rise of industrialization through the division of labor. The move from labor being performed by individual craftsmen to labor being performed by multiple people working on an assembly line certainly helped to make production faster and more efficient. However, to simply call this “progress” is to ignore what the division of labor does to the laborers. While we often focus our concerns about industrialization on the horrible working conditions in factories, Marx instead makes clear how the division of labor is itself detrimental to laborers. Working conditions can be improved, but by cutting labor up into tasks, tasks that can be performed mindlessly for hours on end, laborers became divided from their labor, and so became divided from their identity.
A crisis of identity arises therefore when we lose control over what we make. As Charlie Chaplin illustrated in the movie Modern Times (1936), workers on an assembly line make parts of parts of parts, working without knowing what they’re working on or why, thus becoming just another cog in a machine. And as Mike Judge illustrated in the movie Office Space (1999), moving from an assembly line to a cubicle has not helped alleviate the cog-like feeling of work and has perhaps even made it worse. For now that we have improved working conditions, now that we have health care, 40-hour work weeks, sick leave, vacation time, photocopiers, and coffee machines, there is increasingly little left for us to hate about work other than working itself.
Industrialization and subsequent revolutions in production have turned labor from a source of identity to a source of misery. The sense of community produced by labor no longer comes from sharing our creations with each other, but only comes from sharing our hatred of work with each other. This is why Marx described contemporary labor in terms of “alienation,”5 for as the products of our labor become alien to us, so we become alien to ourselves, to each other, and to what it means to be human. We cannot help but define ourselves through what we make, but ever since the Industrial Revolution “what we make” has come to mean only “how much money we make.” We are defined then not by showing others who we are, but by showing others our paychecks.
To become defined by a paycheck is to become defined by what one can consume rather than by what one can create, replacing pride in what we make with pride in what we own. We work in order to make money, renting out our minds and our bodies to the highest bidder. Minds and bodies are then no longer who we are, they are merely means at our disposal for making money and thus need have no more meaning for us than employees have for a CEO. Mind/body dualism is then not merely a metaphysical theory, it is also an effective management strategy.
A great way to succeed at making money is to internalize the division of labor. By trying to divide ourselves into physical selves and mental selves, we can maximize the output of the former and minimize the input of the latter. However, as Marx explains, being able to increase our work output comes at high cost:
What, then, constitutes the alienation of labor? First, in the fact that labor is external to the worker, that is, that it does not belong to his essential being; that in his work, therefore, he does not affirm himself but denies himself, does not feel well but unhappy, does not freely develop his physical and mental energy but mortifies his body and ruins his mind. The worker, therefore, feels himself only outside his work, and feels beside himself in his work. He is at home when he is not working, and when he is working he is not at home. His work therefore is not voluntary, but coerced; it is forced labor. It is therefore not the satisfaction of a need, but only a means for satisfying needs external to it. Its alien character emerges clearly in the fact that labor is shunned like the plague as soon as there is no physical or other compulsion.6
In other words, it is much easier to get through each workday if we can do our jobs without having to be aware of what we’re doing. Whether working on a factory floor or on an Excel spreadsheet, thought, reflection, and consciousness are often detrimental to our ability to survive work. The more automatic work can be, the more zombielike we can be at work. And the more zombielike we can be at work, the less work feels like work because zombies don’t feel at all.
If we spend a third of our lives sleeping, and we spend our time at work trying to put ourselves to sleep, then for most of our lives we aren’t just acting like zombies, we are zombies. To be a worker, to have to work for a living, is to be—as George A. Romero made clear in multiple subtext-rich movies—a member of the living dead. We are happiest when we can deaden our minds at work and deaden our bodies after work, for which reason we call the hour after work, the hour we spend “eating, drinking, procreating”7 ourselves into a stupor, our happy hour.
Of course, as Marx makes clear, even if labor no longer serves the purpose it had as “the satisfaction of a need,” labor is not purposeless, as it is now “a means for satisfying needs external to it.” We do not zombify ourselves for nothing, but in order to be able to afford to eat, to drink, and to procreate. The problem that Marx is pointing to here then is not that labor has become meaningless, but rather that, even though labor has become alienating and dehumanizing, we still find it meaningful enough to keep doing it. The meaning of labor has been replaced with the meaning of money. However, money is a piece of metal, or a piece of paper, or (more recently) a piece of code, something that is itself meaningless beyond the ability to exchange it for goods and services. The goods and services we can buy with the money we get from our labor must then be what drive us to keep laboring after our labor has become meaningless.
But because these goods and services are produced by laborers like us, laborers who are themselves working not for the sake of work but for the sake of money, then these goods and services have also lost the meaning they once had. Goods are mass produced and thus no longer provide any way to identify the individuality of their maker. Services may still be provided with a smile, but not because they still represent a genuine human interaction, but because service industry workers are trained by bosses to smile to avoid being fired, and trained by customers to smile to earn a tip.
Goods and services are then not meaningful because through them we learn about the people providing them, but are meaningful because of how they make us feel to have them. We might no longer care about who makes our food or who fixes our plumbing, but we still need to eat and to shower. And as we become able to afford better food and better showers, we become able to move from merely satisfying needs to instead fulfilling desires. It is the promise of being able to fulfill our desires that motivates us to keep working long after working has itself ceased to be either fulfilling or desirable. Labor is not in itself meaningful. Money we get from labor is not in itself meaningful. The goods and services we get from money are not in themselves meaningful. But the feeling of fulfillment we get from goods and services is meaningful.
This new meaning of labor that Marx is describing can be seen, however, to operate on the premise that to be human is to be empty. For if we are fulfilled not through what we do, but through what we can buy, through the goods and services that we can acquire, then without those goods and services, we are nothing. We are no longer able to fulfill ourselves; we no longer find our own creations to be desirable. Instead we seek fulfillment from acquisition and consumption because what we can acquire and consume is desirable. But if, as Marx suggests, we were previously able to fulfill ourselves through what we could do rather than through what we could own, then the desire for acquisition and consumption is not natural, but is itself something we have acquired and consumed.
That these desires are not natural makes sense since often the desire we have for goods and services has nothing to do with the goods and services themselves. What we desire is the status those goods and services have in society and the status that we thereby attain through association with them. If certain goods and services are seen by society as luxurious, then to have such luxuries is to be seen by society as living a life of luxury. And if we can have a life that others desire, then we can feel ourselves to be desirable. But of course, as Eddie Murphy illustrated in Coming to America (1988), such desirability by association can be deceiving, as we can never know whether people who claim to desire us would continue to desire us if misfortune fell and we no longer had any association with such desirable goods and services. In other words, such fulfillment still leaves us feeling empty.
Yet, as Socrates warns us at the end of Plato’s Republic, such a feeling of emptiness does not lead us to realize that acquisition and consumption are actually unfulfilling pursuits. Instead this feeling leads us to endlessly pursue more and more acquisition and consumption. Plato writes,
Therefore, those who have no experience of reason or virtue, but are always occupied with feasts and the like, are brought down and then back up to the middle, as it seems, and wander in this way throughout their lives, never reaching beyond this to what is truly higher up, never looking up at it or being brought up to it, and so they aren’t filled with that which really is and never taste any stable or pure pleasure. They always look down at the table, they feed, fatten, and fornicate. To outdo others in these things, they kick and butt them with iron horns and hooves, killing each other, because their desires are insatiable. For the part that they’re trying to fill is like a vessel full of holes, and neither it nor the things they are trying to fill it with are among the things that are.8
If acquisition and consumption make us feel fulfilled—and if in the current world of work they are often the only things that make us feel fulfilled—then the fleetingness of that fulfillment will only make us more determined to acquire and consume as often as possible. As Plato makes clear, the people who would pursue such unfulfilling fulfillment are those “who have no experience of reason or virtue.” In other words, if a life of acquisition and consumption is the only life we know, then we would not see a life of meaningless labor for meaningless money for meaningless goods and services for meaningless fulfillment as a meaningless life; instead we would see it as the real world.
As Marx concludes, if we have come to see a meaningless reality as reality, this is not because reality is meaningless, but because having workers accept meaninglessness as reality must be a benefit to someone else. As Marx writes,
If the product of labor does not belong to the worker, if it is an alien power that confronts him, then this is possible only because it belongs to a man other than the worker. If the worker’s activity is torment for him, it must be pleasure and a joy of life for another. Neither the gods, nor nature, but only man himself can be this alien power over man.9
The belief that workers have no choice but to work for a living clearly benefits people who do not work for a living, people who do not need to work for a living, at least so long as there are workers whom they can live off of instead. The question that needs to be answered then is this: How and why did people accept this belief?
For Marx, the answer to this question is that workers were willing to work for a living because they believed the benefits (e.g., money) would outweigh the costs (e.g., dehumanization). For Plato, the answer to this question is that workers were willing to work for a living because they believed no other way of life was possible. So, to form such beliefs, the workers needed, on the one hand, to be able to alienate themselves from their humanity without worrying about the consequences, and on the other hand, to learn to conform to reality rather than question it. In other words, if you want people to accept the belief that working for a living is the only way to live, then you want people to accept nihilism.
If we can distract ourselves from what we are doing to ourselves—such as by staring at screens for hours on end every day—then we can work for a living without having to feel alive enough to care about the consequences. If we can be taught at school how to be compliant rather than be critical—such as by learning to accept that we are empty vessels and that authority figures have all the answers—then we can learn to accept that working for a living is not nihilistic, it is simply normal. Nihilism at home, nihilism at school, and nihilism at work are thus not different examples of the same nihilistic attitude; they are different parts of the same nihilistic system.
If we live in a nihilistic world, it is not because the world is nihilistic, but because the world we live in is perpetuated by our acceptance of nihilism. The more accepting of nihilism we become, the more susceptible to exploitation we become. If we believe that to be human is to be empty, that our lives are essentially meaningless, and that we need to acquire and consume to feel fulfilled, then trading our humanity for the ability to acquire and consume doesn’t seem dehumanizing; it seems like a bargain. Nihilism is therefore not best understood as an individualistic experience to be found wherever nihilistic individuals go, but as an experience generated by a system that feeds off nihilism, by a system that extends into every facet of human life.
To claim that nihilism is simply true is not merely to make a claim about the nature of reality; it is to make a claim that helps to shape reality. Nihilism cannot be seen as solely a moral or a metaphysical position without ignoring its political dimensions. If helping people to accept the normalcy of nihilism serves to help people become exploited and dehumanized, then arguments that treat nihilism as either an individual failing or as a cosmic truth are arguments that serve exploitative and dehumanizing ends.
When we think of nihilism as a way to describe an individual’s moral beliefs (or the lack thereof), we reduce nihilism to a matter that only individuals could resolve on their own. When we think of nihilism as a way to describe the universe’s meaningfulness (or the lack thereof), we elevate nihilism to a matter that only gods could resolve on their own. Either way of thinking about nihilism thus prevents us from recognizing the need to confront nihilism as a matter that could only be resolved collectively, at the level between that of individuals and of gods, at the level of the political.
The political relevance of nihilism was most clearly articulated by Hannah Arendt. In “Introduction into Politics” (1955)—a book that Arendt did not complete but that has since been published as a long essay—Arendt traces the history of the meaning of politics back to the Ancient Greek origins of the word. Arendt carries out this historical analysis in order to show us that what we think of as “politics” today is not what would have been meant by “politics” to Plato and Aristotle.
Today we tend to think of politics as governments, elections, borders, laws, militaries, taxes, domestic policies, foreign policies, and, of course, corruption. Or to put it another way, politics today is basically everything we elect representatives to think about so we don’t have to. But in the time of Plato and Aristotle politics was instead seen as an activity, an activity that we engage in not as a means to an end but as an end in itself, an activity that defined what it means to be human.
The question that Arendt is trying to answer in this essay is this: “Does politics still have any meaning at all?”10 For the Ancient Greeks, politics meant freedom. We might think that we today share this meaning of politics with the Ancient Greeks, as we think that politics is how to protect freedom. If we see ourselves as having been born free but also born vulnerable, then we see politics as a necessary evil, as an evil that is necessary only insofar as it helps us to live so that we can enjoy our freedom. Politics is for many a limit on our freedom, and so we imagine utopia would be a world that had no need for politics. As Arendt points out, such utopian thinking became especially prevalent after World War II, as the rise of totalitarianism and the development of the atomic bomb meant that politics became seen as not a way to protect freedom, but as a threat to freedom, as a threat to life itself.
For the Ancient Greeks, however, it would have made no sense to conceive of politics as either a way to protect freedom or as a threat to freedom, as for them politics was freedom. Politics was originally understood to be an activity engaged in by those who were capable of freeing themselves from what was seen as inhuman, for which reason to engage in politics was to be human. What was inhuman was coercion. To be forced to do something—whether by another person or by nature—was to live like an animal, like something less than human. Animals act out of necessity, but humans are capable of acting spontaneously without external influence. Or to put it another way, humans can act, but animals can only react. As Arendt points out, when Aristotle defines humans as political beings, he does not mean that humans have always been and always will be involved in politics, but rather that the beings who can engage in politics are the beings who are human. Ancient Greeks saw themselves—and only themselves—as human because they had achieved what others had not: the creation of the polis, of a space for politics.
Because our animal necessities—necessities like eating, drinking, and procreating—were seen as needs that were managed at home, the home was associated with coercion rather than with freedom. To be free thus required the ability to leave home, to leave behind the necessities of life that nature forces us to satisfy. In other words, to be free required that someone else be enslaved. As Arendt writes,
Unlike all forms of capitalist exploitation, which pursue primarily economic ends aimed at increasing wealth, the point of the exploitation of slaves in classical Greece was to liberate their masters entirely from labor so that they then might enjoy the freedom of the political arena. This liberation was accomplished by force and compulsion, and was based on the absolute rule that every head of household exercised over his house.11
Rather than see slaves as people who were dehumanized by their servitude, the Ancient Greeks saw anyone who was not like them, who was not the head of a household, as born to serve. And so, as Aristotle argued, slaves were those who could and should take care of the inhuman necessities of the home in order to allow those capable of being human the “leisure” to be human.
Leisure—or liberation from the chore of having to take care of one’s needs and of one’s home—was not then conceived of as freedom, but as a precondition for freedom. Having secured leisure for himself, the head of a household was able to leave the private realm of the home and enter the public realm of the polis. The polis enabled a space for politics, and thus for freedom, because it created a place outside the home where citizens (i.e., adult Greek men who were the head of a household) could meet and could speak.
To be a citizen was to regard other citizens, and to be regarded by other citizens, as an equal, as having the same standing in the polis. As such, a political space was a space where citizens could speak with equals and as equals. Equality therefore was nothing more than a result of the requirement that the polis be a space for freedom. For the Ancient Greeks, freedom meant freedom from coercion not only in the sense of freedom from natural necessities but also in the sense of freedom from hierarchical inequalities.
The inequalities that created the polis (Greek > non-Greek, husband > wife, father > child) were therefore what allowed for the creation of the equality (citizen = citizen) of the agora. The agora, or the public square, was the specific place in the polis where citizens could meet and speak to each other freely, without having to worry about satisfying needs or having to worry about issuing commands. Domestic affairs as well as foreign affairs were then seen by the Ancient Greeks as unpolitical.
Managing the economy or managing the military involves giving commands. Because commands require a relationship of speaker who commands and of listener who obeys, the speaking involved in commands would not have been equivalent to what the Ancient Greeks considered to be free speech. The agora was a political space because it was a space where citizens could be human, a space where citizens could be among equals, a space where citizens could be free from coercion and free to speak.
Freedom of speech was not for the Ancient Greeks a way to voice opinions, but a way to share a world. As no individual citizen could possibly have a view of the world that was complete, to try to form a complete view of the world required individual citizens to come together and discuss their limited perspectives of the world with each other. As Arendt writes,
Only in the freedom of our speaking with one another does the world, as that about which we speak, emerge in its objectivity and visibility from all sides. Living in a real world and speaking with one another about it are basically one and the same, and to the Greeks, private life seemed “idiotic” because it lacked the diversity that comes with speaking about something and thus the experience of how things really function in the world.12
If the home was the private domain for satisfying animalistic needs, then the agora was the public domain for satisfying human curiosities.
Though we have not retained this view of politics in our actions, we have retained it in our language, such as when we describe politics as “consensus building.” To reach a “consensus” is to have come to share a view (-sensus) of the world with (con-) others. Similarly, we take for granted today that political agreements should be based on “common sense.” Though we might not realize it, this idea refers back to the concept of the sensus communis, which can be understood epistemologically as a judgment based on one’s own senses being in agreement, or politically as a judgment based on the sense experience of the members of a community being in agreement.
This is why the Ancient Greeks saw privacy and solitude as idiotic, as remaining closed off from the world, as staying silently within one’s own limited worldview. Citizens were seen as needing to join together to collectively expand their views in order to avoid having views of reality that were incomplete and incoherent, to avoid having views that were merely idiosyncratic. In other words, if politics means freedom, and freedom means becoming human, then an individual cannot become human alone. Becoming human is a political project, a project that is essentially public and collaborative. Or as Aristotle put it, “a human being is by nature a social being.”13
We can now see why for the Ancient Greeks politics would have been an end in itself rather than a means to an end. Politics was not originally about protecting life so that we could individually enjoy our freedom, it was about creating freedom so that collectively life could be made meaningful. This is what I take to be the core of Arendt’s analysis in this essay. In the modern era we have greatly increased the number of people who can participate in politics, but because we have greatly reduced the scope of politics, we have made such participation meaningless.
Arendt traces the reduction of the meaning of politics back to Plato. For Plato, truth was distinct from, and superior to, consensus. Plato argued that the political debate of the many could never reach truth as human experience did not belong to the realm of truth. Plato sought to create a space outside of the political realm where the few could engage in a more truthful form of debate, what Plato called the “dialectic” of philosophy as opposed to the “rhetoric” of politics. Whereas the agora was a place in the polis where any head of household could participate in discussions about the nature of experience, Plato’s Academy was a place within the polis meant to make the agora obsolete, a place where only his students would be able to participate in discussions about the nature of reality.
Though it was not political as it was not public, the Academy was still intended to be a space where participants could be free to speak by being free from coercion. To achieve such freedom, Plato and his students required the polis to provide them with the leisure necessary to leave the agora, just as the heads of households required slaves to provide them with the leisure necessary to leave the home. Hence, just as slavery was seen by the Ancient Greeks as a means to the end of political freedom, so politics was seen by Plato as a means to the end of academic freedom.
According to Arendt, Plato’s creation of “academic freedom” was an idea that had a much greater impact on politics than did his advocacy for a “philosopher-king,” as the former, unlike the latter, has actually been put into practice.14 What is important about the concept of academic freedom is that it requires freedom from coercion not only in the sense of freedom from necessities and freedom from inequalities but also freedom from coercion in the sense of freedom from politics.
Plato elevated the truth-seeking activity of philosophy above the consensus-building activity of politics, an elevation that he argued for quite literally by likening participation in the public realm of politics to being held prisoner in an underground cave. By arguing that academic freedom was necessary to reach truth, and that academic freedom meant freedom from politics, Plato established a separation between truth and politics. Truth could only be reached by philosophers if politics was not allowed to interfere with philosophy, just as previously it was believed that consensus could be reached by citizens only if domesticity was not allowed to interfere with politics.
After Plato, both political activities and domestic activities came to be seen as activities that were not in themselves meaningful, but that were means to the end of meaningful activity. The Ancient Greek creation of political space as a space for meaningful activity entailed the treatment of domestic space as a space for meaningless activity. Plato’s creation of academic space served, however, as a challenge to the meaningfulness of political space, bringing political activities down to the level of domestic activities.
Domestic activities and political activities both had in common that they were activities based entirely on human experience. Plato argued that, since human experience was particular rather than universal, and was always changing rather than permanent, human experience lacked the essential qualities of universality and of permanence that defined truth. Thus, if truth determined what was meaningful, then meaningful activity was not to be found in the home nor in the agora; it could be found only in the Academy.
In Plato’s Apology we find the claim that “the unexamined life is not worth living.” We can now see that this claim was not meant to be understood as a call for everyone to join the Academy, since of course there would then have been no one left in the home or in the agora to take care of the domestic and political activities that made the Academy possible. Rather, the unexamined life makes the examined life possible, so its worth is not intrinsic, but instrumental. According to Plato, unexamined lives are in themselves worthless, but they are nevertheless necessary since they still contribute to that which is worthwhile: the “harmony”15 of the polis.
Such harmony was to be achieved when everyone in the polis was doing what they were born to do. According to Plato’s “Myth of the Metals”16—which today might be better known as J. K. Rowling’s “Myth of the Sorting Hat”—one’s place in the polis was supposed to be capable of being determined by the nature of one’s soul. Thus, whether one led an examined or an unexamined life was not a matter of choice, but of birth. Rather, the only choice that did matter, according to Plato, was whether individuals would choose to carry out their fated tasks in life or if they would instead rebel by trying to take on a role in the polis other than the one that they were told they were born to perform.
Whereas the Ancient Greeks kept people in their place by using the threat of death, Plato instead used the threat of disharmony. With the “Myth of Gyges”17 Plato argued that someone who abandoned his role in the polis to become richer and more powerful would appear to be happy satisfying all of his desires. But because his soul would be in disharmony from having let his desires rather than his reason rule him, in reality his soul would be suffering. Plato thus likened disharmony to a disease, a disease that was invisible, for which reason we are blind to its corrupting influence on the soul and on the polis and so require philosophers to diagnose it. We are consequently led by Plato to trust philosophers rather than our own experience, since to trust our senses is to run the risk of mistakenly believing that the visible appearance of happiness is all that matters in life.
By establishing the idea of truth as belonging to a realm of reality outside of experience, Plato set the stage for political activity to be replaced by the activity of experts. According to Plato, we cannot trust our senses to accurately judge reality, and so if we do not want to risk being tricked by what merely appears to be true, we must instead trust the judgment of experts. Today we do this all the time. We’re not sure how we feel, so we go to the doctor. We’re not sure how something will taste, so we ask the waiter. We’re not sure if we should watch a movie, so we read reviews. And if we’re not sure if we can trust experts, then we ask Google. But if debating our judgments about reality with each other is how we become human, then replacing our judgment with the judgment of experts is to replace the political project of becoming human with the scientific project of becoming certain.
Yet against such a view it may be argued that the elevation of truth and certainty over experience and consensus has unquestionably led to advances in every scientific field, allowing us to gain innumerable insights into the nature of reality. Excluding citizens from scientific debate has without a doubt allowed scientists to work much more efficiently than would have been the case if scientists had needed the polis to reach a consensus every time they wanted to conduct an experiment. Furthermore, excluding citizens from scientific debate is not meant to be seen as having benefits only for scientists, as the ability to better understand reality allows scientists to help citizens lead better lives.
For this reason in politics too consensus has been replaced with certainty as democracy has been replaced with bureaucracy. Citizens no longer participate in political debates but instead participate in the act of voting to choose representatives. Representatives no longer participate in political debates but instead participate in the act of voting to choose policies. Such policies are created and managed by bureaucrats, by political experts who apply the methodologies of science to the problems of life. So scientists help citizens lead better lives because bureaucrats use the scientific understanding of reality to create for citizens the best life possible. From this perspective then it would be wrong to distinguish the political project of becoming human from the scientific project of becoming certain since certainty should be seen as necessary for the betterment of humanity. In other words, scientific progress is human progress.
Such an argument is certainly hard to argue against, for which reason we can see why the value of scientific progress was able to defeat competing values and become the highest value in just about every society on Earth today. Consequently, we now take for granted that humanity has advanced alongside science, which we can see, for example, when we trace human history from the perspective of scientific history as a steady progression from the Dark Ages to the Renaissance to the Enlightenment to Modernity. And yet Arendt argues that our faith in scientific progress has led us not to truth and certainty, but to nihilism and disaster.
According to Arendt, it is not an accident that the scientific politics of bureaucracies would have led to world wars and to the creation of the atomic bomb. Political progress has been marked by the end of slavery, by the victory of suffrage movements, and by the passage of civil rights legislation. Arendt points out that such progress required that the “brute force” that kept people in servitude be moved more and more out of the domestic realm and placed instead in the political realm as bureaucrats became increasingly confident that they would be able to keep such force under control. The consolidation of brute force in the hands of the government allowed for the liberation of women and minorities from the coercive violence that historically reigned in households. But such liberation from the household did not mean that the liberated were free to become human as it did for the Ancient Greeks.
The liberated were indeed free to leave the household and enter the public realm. However, as political activities had become the activities of experts rather than of citizens, the only activity left for modern-day citizens in the modern-day agora was that of work. In other words, the freedom of the liberated was not political freedom, it was only the freedom to join the labor market. As Arendt writes,
… the overall development of society—at least until it reaches the point where automation truly does away with labor—is moving uniformly toward making all its members “laborers,” human beings whose activity, whatever it may be, primarily serves to provide life’s necessities. In this sense, too, the exclusion of brute force from the life of society has for now resulted only in leaving an incomparably larger space than ever before to the necessity life imposes on everyone. Necessity, not freedom, rules the life of society; and it is not by chance that the concept of necessity has come to dominate all modern philosophies of history, where modern thought has sought to find its philosophical orientation and self-understanding.18
The end of slavery and the expansion of citizenship that came with it led to a massive increase in the number of people who needed to be able to work for a living, and a massive increase in the number of products needed to be able to keep the workers alive. This in turn led to a massive increase in the productive power of the state at the same time that brute force was becoming centralized in the hands of the state. And it was this combination of the productive power that states could marshal and of the brute force that states could wield that led politics to turn from being seen as a means to freedom to being seen as a threat to freedom.
Trusting experts rather than trusting experience has enabled advances in countless scientific fields. But war is also a field capable of being advanced scientifically, as is epitomized by the creation of the atomic bomb. It is for this reason that Arendt not only questions the equating of scientific progress with human progress but questions whether there is anything truly human about such progress. The distrust of experience that has stretched from Platonic metaphysics to Christian theology to capitalist bureaucracy has left us incapable of judging experience for ourselves, leading us to become much less willing to try to reach consensus with each other, and much more willing (and much more able) to try to destroy each other instead.
As Arendt points out, though humans have always needed to rely on prejudices so that we could make quick judgments when we did not have time to judge experience directly, the loss of trust in judgments from experience has left us with only our prejudices to rely on. It is in the nature of prejudices as a survival instinct to make us fearful, but when prejudices become completely detached from experience, they can make us paranoid, leading us to turn our prejudices into ideologies and self-fulfilling prophecies. Consequently, our fear of the threat of politics has turned not into a quest to reinvigorate politics, but into a quest to live in a world without politics. Marx saw this quest as utopian, but Arendt warns that such a world would be “simply appalling”19 since a world without politics would be a world without freedom.
Arendt concludes her analysis by comparing life in the modern world of bureaucratic politics—a world where politics has become something we try to evade rather than something we hope to pursue—to life in a desert. Arendt writes,
The modern growth of worldlessness, the withering away of everything between us, can also be described as the spread of the desert. That we live and move in a desert-world was first recognized by Nietzsche, and it was also Nietzsche who made the first decisive mistake in diagnosing it. Like almost all who came after him, he believed that the desert is in ourselves, thereby revealing himself not only as one of the earliest conscious inhabitants of the desert but also, by the same token, as the victim of its most terrible illusion. Modern psychology is desert psychology: when we lose the faculty to judge—to suffer and condemn—we begin to think that there is something wrong with us if we cannot live under the conditions of desert life. Insofar as psychology tries to “help” us, it helps us “adjust” to these conditions, taking away our only hope, namely that we, who are not of the desert though we live in it, are able to transform it into a human world.20
The purpose of this metaphor is to awaken us to precisely how lifeless our lives have become. To be in a desert is to be forced to be concerned with nothing other than bare survival, to be concerned with nothing other than the animal necessities that prevent us from experiencing human freedom.
Such is the state that Arendt believes we find ourselves in today, as our distrust of politics has led us not to distrust the scientific mind-set that has taken over politics, but instead to distrust each other. Our faith in scientific progress has culminated in our having lost faith in humanity, and precisely for this reason our faith in scientific progress has grown only stronger as it is scientific progress that is supposed to fix all that is flawed in humanity. Consequently, the more we suffer from scientific progress, the more we turn to scientific progress to cure our suffering. Like someone lost in a desert, we cling desperately to any guide who claims to know the way out, even if that guide was the one who led us into the desert in the first place.
It was the lifeless life of our desert world that Franz Kafka explored in his work—both in his work as an insurance claims investigator and in his work as a writer. Kafka’s writing is filled with horrifically realistic depictions that reveal how trying to live in a bureaucratic system—a system that no one can explain or defend but that everyone follows without question—can result in our suddenly waking up to discover that unknown judges have found us guilty of an unknown crime. Or, what is much more likely, it can result in our waking up to discover that we have somehow turned into a monstrous life-form that yearns to become human, a yearning that is experienced not as hope, but as torture.
It will perhaps come as no surprise that Arendt was a fan of Kafka’s. Arendt wrote several essays on the importance of Kafka’s work. For example, Arendt saw Kafka’s The Trial as a story of a man who falls victim to the “bureaucratic machine.” As Arendt explains, it is a machine that “is kept in motion by the lies told for the sake of necessity, with the accepted implication that a man unwilling to submit to the ‘world order’ of this machine is thereby considered guilty of a crime against some kind of divine order.”21
Arendt’s praise of Kafka’s descriptions of desert life can help us to better understand her criticism of Nietzsche’s diagnoses. According to Arendt, Nietzsche took too psychological a view of nihilism, identifying nihilism as a disease, a disease which, even if it could infect an entire culture, could still be traced back to human biology. Much like Fyodor Dostoyevsky—who Nietzsche said was “the only psychologist, incidentally, from whom I had anything to learn”22—Nietzsche studied nihilism by looking inward. For Nietzsche, the inability to come to terms with the limitations of having been born weak, of having been born mortal, of having been born human, can lead individuals to attempt to escape such limitations, an attempt that has resulted in the creation of philosophies, of religions, and of cultures that help individuals to escape from life itself.
Dostoyevsky’s writings are filled with analyses of the rich and complex inner lives of his characters. Kafka’s writings are instead filled with characters who are ciphers, with characters who do not have complete names, who do not have backstories, and who often feel like they were born yesterday. For Kafka, just as for Arendt, what matters most for understanding the modern world, the world of bureaucracy, is not the relationship between mind and body, but the relationship between people and place. What is particularly important for Arendt is how one type of place—such as the agora—can motivate people to form relationships with each other and to share a world together, and how another type of place—such as a desert—can motivate people to become incapable of forming relationships with others and to become concerned only with individual survival.
It is for this reason that Arendt is so opposed to psychology, to the view that if we suffer from trying to live in a desert, then our suffering is the result of who we are rather than of what the desert is doing to us. To learn to adapt to the desert, to be “resilient,”23 can reduce our suffering, but Arendt warns against such adaptation, for it is good that we are still capable of suffering: our suffering is the canary in the coal mine, the alarm that tells us that we do not belong in the world in which we find ourselves. When this feeling of not belonging leads us to look inward, to blame ourselves, to try to fix ourselves, we become so focused on ourselves, on trying to figure out what is wrong with ourselves, that we only make the desert between ourselves and others worse. If being driven away from each other and being driven into ourselves is what creates nihilism, then individualistic responses to nihilism will never overcome nihilism but will instead only help to perpetuate nihilism.
The loss of politics, of consensus building, of coming together to share a world with each other, is for Arendt the result of the creation of nihilistic political systems. Such systems have not removed public spaces from the world, but have instead dehumanized those spaces, taking the possibility of political activity out of the public realm and leaving instead only the possibility of commercial activity, of working and of consuming. Working and consuming make us feel better because these activities help us to feel more at home in the world, in a world that is perpetuated by our working and consuming, by our caring more about being happy than about being human. But we cannot risk being satisfied with lifeless lives of inhuman happiness for, as Arendt concludes, such satisfaction is suicidal, as we find ourselves today in the “objective situation of nihilism where no-thingness and no-bodyness threaten to destroy the world.”24
It is this situation, according to Arendt, that has led certain philosophers to ask questions like “Why is there something rather than nothing?” and “Why is there anybody rather than nobody?” Arendt argues that these questions may appear nihilistic, but that they must instead be seen as antinihilistic. To ask these questions is to contest the view of the world that might seem most logical—“There should be nothing and nobody”—by forcing us to question what it is about the world that makes such views seem logical. In other words, what is logical, what is rational, what is normal, cannot be taken for granted, but must be questioned and challenged. And it is precisely such questioning and challenging of what is taken for granted that used to be known as politics. It is thus only by returning political activity to the public realm, by reclaiming public spaces as spaces for freedom, by seeking consensus rather than seeking votes, by acting as humans rather than surviving as animals, that we can begin to overcome nihilism together rather than continue to suicidally adapt to it alone.