6

What Is the Future of Nihilism?

The future is a prominent theme in Nietzsche’s work. One of the earliest lectures he gave was subtitled “On the Future of Our Educational Institutions.” The subtitle of Beyond Good and Evil announces that the book is a “prelude to a philosophy of the future.” The final chapter of Ecce Homo is titled “Why I Am a Destiny.” Nietzsche frequently declared that he was writing for the future and that his readers had not yet been born.

We may be tempted to think such declarations were due to the simple fact that, during his lifetime, almost no one bought any of his books. However, in Beyond Good and Evil, Nietzsche writes,

True philosophers reach for the future with a creative hand and everything that is and was becomes a means, a tool, a hammer for them. … It has become increasingly clear to me that the philosopher, being necessarily a man of tomorrow and the day after tomorrow, has, in every age, been and has needed to be at odds with his today: his enemy has always been the ideals of today.1

To conceive of a philosophy of the future, to write for the future, is thus, according to Nietzsche, to be in opposition to the present and, in particular, to be in opposition to the “ideals” of the present. Such opposition arises because the future is viewed not only in terms of what will be the case but also in terms of what should be the case. In other words, to put forth a vision of the future is to engage in what Nietzsche calls “active nihilism.”2 Rather than sit back and let the present destroy the future (“passive nihilism”), to engage in active nihilism is to destroy the present to create the future, to destroy the destructive ideals of the present in order to create new ideals and bring about the future that we want.

The Land of the Free and the Home of the Nihilist

In the previous chapters of this book we have seen that nihilism existed in the past and that nihilism exists in the present, and so there is every reason to believe that nihilism will exist in the future. We have also seen a variety of arguments as to why nihilism should not exist in the future, arguments that all point to the danger that the more nihilistic we become, the more likely it is that we will have no future. From the Nietzschean perspective, then, the question that we need to ask is this: What are the ideals in the present that we must oppose in order to create a future without nihilism?

Nietzsche’s answer to this question would appear to be that we must oppose the ideal of asceticism, an ideal that manifests itself morally in the value of self-sacrifice and scientifically in the value of truth. The “no” of asceticism, the “no” to life, has become the moral “no” that demands we restrain our instincts and the scientific “no” that invalidates opposing perspectives. Nietzsche, however, wanted us to say “yes” to instincts, to multiplicity, to life. It is because morality and science are two sides of the same “no” that Nietzsche argues that the ascetic ideal has no opposition. Rather than accept the commonly held view that science is the enemy of religion and is therefore the enemy of the ascetic ideal, Nietzsche instead contends that science is the latest incarnation of the ascetic ideal and is just as dependent upon faith as is religion.3

Arendt agreed with Nietzsche that we should challenge the ideal of truth, and the faith in science that it inspired, as such faith culminated in the death of politics and the birth of the atomic bomb. And yet, due to his opposition to the ideal of self-sacrifice, Nietzsche’s active nihilism led him to promote counterideals like the Emersonian ideal of self-reliance and the Darwinian ideal of self-overcoming. As Arendt argued, such individualistic ideals can result in the very passive nihilism they are meant to counteract. Nietzsche’s fear of what he saw as the “herd instinct” may have prevented him from considering that such herds arise not because of nihilistic instincts, but because of nihilistic systems, systems that can only be fought collectively and that are perpetuated when we instead try to act on our own.

For both de Beauvoir and Arendt, individualistic ideals such as self-reliance and self-overcoming are precisely what we must oppose if we are to create a future free from nihilism. For example, in her account of her time spent traveling across America in 1947, de Beauvoir writes,

What is most striking to me, and most discouraging, is that they are so apathetic while being neither blind nor unconscious. They know and deplore the oppression of thirteen million blacks, the terrible poverty of the South, the almost equally desperate poverty that pollutes the big cities. They witness the rise, more ominous every day, of racism and reactionary attitudes—the birth of a kind of fascism. They know that their country is responsible for the world’s future. But they themselves don’t feel responsible for anything, because they don’t think they can do anything in this world. At the age of twenty, they are convinced that their thought is futile, their good intentions ineffective: “America is too vast and heavy a body for one individual to move it.” And this evening I formulate what I’ve been thinking for days. In America, the individual is nothing. He is made into an abstract object of worship; by persuading him of his individual value, one stifles the awakening of a collective spirit in him. But reduced to himself in this way, he is robbed of any concrete power.4

De Beauvoir here makes clear that self-exaltation leads to self-destruction. America is an individualistic nation, a ruggedly individualistic nation, a nation where citizens are supposed to have the life and liberty necessary to pursue happiness. However, rather than happiness, what de Beauvoir found in America was a nation of individuals who had become “so apathetic while being neither blind nor unconscious.” The Americans whom de Beauvoir met were able to recognize that poverty, racism, and oppression in America made political change desperately necessary in order to avoid “the birth of a kind of fascism.” But they felt so powerless as individuals to effect political change that they responded instead by effecting the only change they were individually powerful enough to achieve: they became unfeeling.

That de Beauvoir identified American apathy as a result of individuals having been “robbed of any concrete power” suggests a connection here to Arendt’s argument that we must think of nihilism as political rather than as psychological. What de Beauvoir described as the powerless power of America is what Arendt described as the lifeless life of a desert. To respond to the feeling of individual powerlessness by making oneself unfeeling is to respond to the suffering of finding oneself in a desert by adapting to the desert.

We seek to adapt because a system that idealizes individualism and champions autonomy as the key to happiness leads individuals living in that system to feel that they should be happy. Any unhappiness is consequently seen as a sign that something is wrong, though not wrong with the system, but with the individual. Individualism and autonomy are thus destructive insofar as they lead us to become obsessed with personal happiness and to view our unhappiness as something that divides us from others, as something that makes us abnormal, and as something that must be cured.

Rather than discover whether others are similarly unhappy, the ideal of personal happiness motivates us to fear revealing our own unhappiness to others and to instead pretend to be happy to avoid the risk of being seen as abnormal. Such fear entails that we cannot know whether the others who seem so happy and so normal are actually pretending too. And so we cannot know whether the seemingly happy herd that Nietzsche warned us to avoid may in fact be made up of individuals who are just as unhappy living in a desert as we are, and who are thus individuals we need to engage with if we are to have any hope of escaping the desert. So a system built on life, liberty, and the pursuit of happiness can induce nihilism by treating lifelessness, oppression, and unhappiness as personal feelings, as feelings that reveal a person’s pathological inability to be happy, the result of which is that we respond to our suffering with the nihilistic desire to change ourselves rather than with the political demand to change the system.

Technology and Nihilism

If we are to create a future without nihilism by opposing individualistic ideals and the nihilism-inducing systems that champion them, then we must realize that such systems are not only political but also technological. Or to be more precise, we must realize that our politics is technological and that our technologies are political.

To say that politics is technological is not the same as merely saying that we engage in politics through technologies. It is of course obvious that political activities require technologies, such as when we use a pencil and paper to cast a vote or when we use Twitter to organize a protest. But what is less obvious is how technologies can influence and even shape our politics, such as when we take for granted that the only options for political action available to us are the options that show up on our computer screens.

We take such things for granted because we believe that technologies do not have the power to act independently. To suggest that technologies do have the power to effect change on their own is to sound like you are suggesting that technologies are alive. After all, even robots only act based on their programming, on programming written by humans. Consequently, we tend to think it absurd to worry about what technologies can do to people, as what we should worry about instead is what people can do with technologies.

Such a view is what has come to be known in philosophy of technology as the instrumental or neutral view of technology. It is this view that Heidegger warned was the most dangerous possible view to take of technology precisely because it is the view that is most common. After being banned from teaching due to his collaboration with the Nazis, Heidegger tried to resurrect his career by delivering a series of public lectures in Germany during the 1950s, the most famous of which was titled “The Question Concerning Technology.”

In the lecture Heidegger argues that we do not realize what technologies are doing to us because we have become too obsessed with what technologies allow us to do. We can turn rivers into electric plants, we can turn forests into newspapers, and we can even turn the Sun into a battery. Yet it is this mindset, this framing of nature as a means to our ends, that Heidegger sees as what is tricking us into thinking we are in control of technologies—what Heidegger refers to, echoing Nietzsche, as the “will to mastery”5—rather than realizing that we are being controlled by technologies. What Heidegger endeavors to show is that using technologies leads us to see the world through technologies, to think in accordance with the logic of technology.

Heidegger argues that the “essence” of technology is not its instrumentality, nor even its being technological, but rather its “way of revealing.”6 Technologies reveal the world to us in a particular way, in a way that, according to Heidegger, has changed in the modern era. Traditional technologies revealed the world to us as powerful, such as when a windmill shows the force of the wind or when a bridge shows the danger of trying to enter the water rather than cross over it. Modern technologies, however, reveal the world to us as powerful in a different sense, in the sense of the world being full of power, of power that we can store, that we can stockpile, and that we can use to do what we want when we want.

The logic of modern technology is characterized by Heidegger as the logic of “setting-in-order,”7 a logic that reduces reality—all reality—to the logic of means and ends, the logic where everything has meaning only insofar as we can use it in order to get something we want. We are of course aware of the pervasiveness of this logic, but we mistakenly assume that the ends of our activities are determined solely by us, solely by humans. So we assume that if we find ourselves in a technological world thinking instrumentally, it is only because we are using technologies as instruments, as our means to our ends.

However, what Heidegger points out is that if these ends are ours, it is not because we chose them, but because we have come to identify with them. When in an airport we instinctively look for a seat near an electrical outlet, we think we’re just looking for the best seat. But really we’re sitting in accordance with what is “best” for our phone, and for our laptop, and for whatever other devices we now unquestioningly think it is “best” for us to have when we travel. In other words, not only do technologies shape how we see the world and how we act in the world, but they also shape how we evaluate the world, how we determine what is “best” and what is “worst.”

Technologies have enabled us to feel more and more powerful, but only because we do not realize that in reducing reality to the logic of means and ends, we are ourselves becoming more and more reduced by this logic. When we seek out an outlet in order to plug in a device, we do not realize that we have ourselves become a means to an end, a means to the end of the device. In such a situation the device has turned us into an instrument of its ends. What is best for the device becomes what is best for us.

Devices frequently require that we organize our activities in accordance with what is necessary to keep them functioning properly. Our activities are even increasingly interrupted so that we can help the device carry out its functions, functions that we often did not choose for the device to perform and that we do not even understand. The device informs us that we need to download an update, and we click download. The device informs us that we need to click accept, and we click accept. The device informs us that we need to restart the device, and we click restart. The device informs us that we need to create a new password, so we enter a password, only for the device to inform us that it does not accept our password and to advise us on how to enter a better one, so we keep trying until we win the device’s approval.

In other words, technologies are powerful; we are not. We only feel powerful to the extent that we align our ends with the ends of our technologies so that when we act in order to serve our technologies, we feel like we are acting in order to serve ourselves. But what is important for Heidegger is that whether we are acting in order to serve technologies or in order to serve ourselves, we are still acting only in accordance with the logic of setting-in-order. To live in accordance with such a logic is to have become dehumanized.

According to Heidegger, in reducing nature to a power source that can be called upon on demand, technologies have at the same time reduced humanity to a power source that can be called upon on demand. Making clear the connection between technology and nihilism, Heidegger warns, “Nihilism is the world-historical movement of the peoples of the earth who have been drawn into the power realm of the modern age,” for which reason “those who fancy themselves free of nihilism perhaps push forward its development most fundamentally.”8 We say that technologies are empowering us, but that is because we have elevated technologies under the guise that in so doing we have elevated ourselves. We take for granted that we live in a “technological world,” and in order to maintain the illusion of empowerment, the illusion that this world is for us, we have redefined ourselves as “technological beings,”9 as “technomoral creatures,”10 and as “informationally embodied organisms (inforgs).”11 Technologies therefore not only shape how we think, how we act, and how we value, but also redefine what it means to be human.

The French sociologist and theologian Jacques Ellul similarly warned that the illusion that we are in control of our technologies blinds us to how much technologies have come to control us. In his 1977 book The Technological System, Ellul warns that technologies not only influence us individually but also influence us politically. As Ellul argues, against the “simple view” that “the state decides, technology obeys,” we instead “have to ask who in the state intervenes, and how the state intervenes, i.e., how a decision is reached and by whom in reality not in the idealist vision.”12

Ellul points out that we cannot make decisions about technologies if we do not understand how they work. Lawmakers therefore are increasingly forced to turn to technology experts in order to make laws about technologies. But as laws about technologies would necessarily impact those very same technology experts, Ellul calls into question the possibility that technology experts could provide their expertise objectively, as any risk to technologies would be a risk to themselves.

It should come as no surprise, therefore, that political decisions rarely, if ever, come into conflict with technological progress. Consequently, companies like Facebook and Google can act in ways that endanger not only individual users but entire societies. And yet tech companies fear the wrath of neither those users nor those societies, for users now depend on tech companies much more than tech companies depend on users. Facebook and Google can abuse us—such as by violating our privacy—because, like abusers, they know we have nowhere else to go. Facebook and Google respond to allegations of abuse by essentially daring us to leave them, as they know that we’ll always end up coming back for more.

Yet, as Heidegger helps us to see, the true danger posed by companies like Facebook and Google is not that they violate our privacy, but that they redefine what we think “privacy” means. Facebook and Google—not to mention tech companies like Apple, Amazon, Tinder, and Twitter—all defend their privacy-endangering practices by simply pointing out that they are only giving users what users want. If users want to be social, then apps and devices need to be able to help them find others to be social with, and need to be able to find more and more ways for users to share more and more of their lives with others. And if users want to meet the right others, and learn about the right events, and find the right products, then apps and devices need to know as much as possible about the activities and interests of users.

Apps and devices study users, treating users not only as sources of power but as sources of information. Technologies have always required power in order to function, but apps and devices increasingly require information in order to function. When we are motivated by our technologies to share more and more of our private lives with technologies and through technologies, we do not feel reduced by technologies to means to their ends, as instead we come to view the end of sharing as having always been our own end. Consequently, users feel outraged only at dramatic abuses of trust, only when tech companies are found to have been selling users’ information without permission or when a data breach is found to have made users’ information public. For users are motivated to see the abuses of trust that tech companies perform every day not as violations but as simply the price that must be paid for wanting to be social.

Traditional definitions of privacy have thus come to be seen as outdated, and people who still want to live in accordance with a more traditional sense of privacy have come to be seen as antisocial. To be seen as antisocial, in a world where being social has become the norm, is to be seen as abnormal. Tech companies justify their practices by claiming that they are giving users what users want, but in a technological world, in a world where people can be ostracized for not being sufficiently technological, it is increasingly difficult to distinguish whether what users want is determined more by desire or by fear.

Of course, from the perspective of Heidegger and Ellul, technologies shape both our desires and our fears, and so tech companies cannot justify their practices by referring to what users want, since what users want is shaped by those practices. The ubiquity of technologies has made it impossible for us to take a perspective on technologies that is free from the influence of technologies. But if we cannot be free from the influence of technologies, then we cannot make independent judgments about technologies. And if we live in a technological world, then this means that we cannot make independent judgments about the world in which we live. So technological progress makes us feel more and more powerful, and yet this progress is making us politically more and more powerless.

From a Nietzschean perspective we find ourselves in a technological world because our nihilistic need to avoid feeling alone, to avoid feeling powerless, and to avoid feeling feelings leads us to seek out new ways to avoid feeling human. We have then moved from escaping reality by using the imagination to enter worlds like Heaven and Hell to instead escaping reality by using movies and consoles to enter worlds like Hogwarts and Hyrule. From an Arendtian perspective, however, we find ourselves in a technological world not because humans are nihilistic and are constantly seeking new forms of escape, but because political systems that promise individual happiness and produce individual suffering lead us to feel that we are individually to blame for our suffering. And so we have moved from seeking psychological cures like pills and therapy to help us to adapt to the lifeless life of a desert to seeking technological cures like Netflix and Fitbit to help us to adapt to the lifeless life of a technological world.

In other words, for both Nietzsche and Arendt there would be no question as to whether technological progress has furthered or hindered the progress of nihilism. Instead the only question would be how we came to equate technological progress with human progress, and how we can prevent this equation from continuing to blind us to the nihilistic nature of technological progress.

Fighting Nihilism with Nihilism

If we are seeking a way to be able to destroy the life-denying values of the past and present so that we can create a future based on new life-affirming values, then technologies would seem to have the destructive and creative potential necessary to achieve such aims. Yet rather than produce active nihilism, in the previous section we saw that technologies are primarily producing passive nihilism. Ironically, the problem seems to be that technologies are not as “disruptive” as tech companies claim them to be.

As Heidegger and Ellul showed, technologies are certainly dangerous, but their danger has come not in the form of destroying values, but in the form of redefining our values. Technological progress has created a technological world, but this new world still has old values, and so if we want to create a future based on new values, and if we cannot imagine a future that is not technological, then either we need to change what we imagine to be possible, or we need to change the nature of technological progress.

From the perspective of ethics and politics, technological progress has not been revolutionary; it has been overwhelmingly conservative. Technologies are increasingly interfering with our ability to lead autonomous lives, to think rationally, and to make decisions based on what we know to be true rather than merely on what is presented to us as true. And yet the tech companies that create these technologies are not justifying their interference by promoting new values, by trying to replace the humanistic values of autonomy, of rationality, and of truth that go back at least as far as the Enlightenment. Instead tech companies deny that any interference is intentional and argue that what is seen by critics as intentional interference is merely their attempt to develop tools that could be used to uphold humanistic values by helping us to become more autonomous, more rational, and more aware of the truth.

On April 10, 2018, Facebook CEO Mark Zuckerberg was called to testify before Congress about the role Facebook played in scandals surrounding interference in the 2016 U.S. presidential election. Zuckerberg did not defend Facebook’s role in the scandals by questioning the value of democracy, but instead defended Facebook as a tool in the service of democracy. Zuckerberg admitted that, as a tool, Facebook could be misused, and so he pledged to develop new tools—such as algorithms that could detect and delete “fake news”—to ensure that Facebook’s “tools are used for good.”13

Zuckerberg did not feel the need to define what he meant by “good,” however. This omission points toward a desire to take traditional values for granted rather than create new values. Facebook may have helped to destroy our traditional means for engaging in democracy—as candidates now hold virtual town halls and citizens now debate each other with memes rather than with meetings—but traditional values like the value of engaging in democracy have nevertheless remained intact. Indeed Facebook has gotten in so much trouble politically largely because it provides so many new means for engaging in democracy.

As Ellul foresaw, democratic states are being transformed into technological states. But Ellul was careful to point out that such a transformation does not mean that the tech experts, or “technicians,” creating these transformations have any interest in creating new values, in creating a technocracy. Ellul writes,

Does that imply the emergence of a technocracy? Absolutely not in the sense of a political power directly exercised by technicians, and not in the sense of the technicians’ desire to exercise power. The latter aspect is practically without interest. There are very few technicians who wish to have political power. As for the former aspect it is still part of a traditional analysis of the state: people see a technician sitting in the government minister’s chair. But under the influence of technology, it is the entire state that is modified. One can say that there will soon be no more (and indeed less and less) political power (with all its contents: ideology, authority, the power of man over man, etc.). We are watching the birth of a technological state, which is anything but a technocracy; this new state has chiefly technological functions, a technological organization, and a rationalized system of decision-making.14

As Ellul points out, we must not assume that, just because technicians like Zuckerberg wield the power necessary to create technological states, they want to use that power to achieve any purposes that are ethical or political as opposed to merely economic. Technological states are seen by Ellul as merely technologically enhanced bureaucracies rather than as technocracies because the technicians who could rule have no interest in ruling. So technicians help to create states that are essentially apolitical, states that are focused solely on maintaining the status quo, on keeping the ends of the state constant while changing only the means available to achieve those ends.

Arendt described the rise of bureaucracy as the rise of “no-man rule.”15 In a bureaucracy decisions are made by determining scientifically what is best for society, what is best not for anyone in particular, but for the “everyman,” for the statistically average human who is meant to represent everyone because he is no one. Similarly, a theme that runs through the work of the French philosopher Michel Foucault is that, in a society governed by statistical modeling and scientific reasoning, humans are reduced to the behaviors and characteristics that can be identified as distributively common or “normal.” Consequently, what is found to be uncommon and uncharacteristic is viewed as “abnormal,” as statistical anomalies to be removed, whether by educational, legal, or medical means.

This method of decision-making is powerful because it creates an almost impenetrable aura of objectivity. Scientific reasoning allows bureaucrats to wave away accusations of bias and prejudice by simply arguing that numbers have no biases and math has no prejudices. Statistical argumentation even allows racism and sexism to seem like the product of natural superiority rather than the product of political superiority. However, so long as the statistical modeling and scientific reasoning are performed by humans, critics will always be able to counter the claims of bureaucrats by pointing out that the measurements might be unbiased, but the humans determining what is to be measured, and how, are not.

The technological state described by Ellul is thus not the destruction of the bureaucratic state, but its perfection. If citizens want to live in a state free from the corrupting influence of self-interested bureaucrats, then a state governed by machines will be seen as far more trustworthy than a state governed by humans. For this reason “smart city”16 projects have become increasingly popular. Governance by nonhuman bureaucrats—otherwise known as algorithms—can make citizens feel safe from bias and prejudice and can make political leaders feel safe from being accused of being biased and prejudiced.

And so, in a technological state, no-man rule can become quite literal. Statistical modeling performed by algorithms can reduce humans to data sets, and the scientific reasoning that leads us to trust algorithms can reduce politics to cost-benefit analysis. But again, what is important to realize here is that this reductive treatment of humans and of politics is not a new technological project; it is merely the culmination of the project of the Enlightenment, of the project to create a science of humanity. Hence, even the value we ascribe to something as futuristic as artificial intelligence should be seen not as a new value, but as a very old value in shiny new packaging.

Yet it is precisely because technological progress is not as truly revolutionary as tech entrepreneurs claim that technologies can nevertheless help us to combat nihilism. As Heidegger argued, the essence of technology is revealing. If technologies are not helping to create new values but are instead operating in accordance with old values, then the nihilism created by technologies can help to reveal to us the nihilistic nature of these values. We live in a technological world, in a world that is the realization of the dreams of the Enlightenment. To find this world becoming more and more nihilistic is to see revealed that these dreams are actually nightmares, nightmares from which we need to wake up before it’s too late.

Technologies may not be creating new values, but they are creating new forms of nihilism.17 As Nietzsche suggested, it is possible that we could become so nihilistic, that we could become so destructive, that we could destroy even our nihilistic values and the nihilistic systems that sustain them. So to end on a hopeful note, if the nihilism generated by technological progress doesn’t make us too self-destructive, then perhaps instead it will make us just destructive enough to force us to finally become creative. In other words, if nihilism doesn’t kill us, it might make us stronger.