How Do We Live Now? In the Aftermath of Ourselves

Benjamin Peters

Your computer—no, the world itself—is on fire. Or so says a news cycle fueled by more high-octane outrage than insight. Take a good look around. Modernity and its merciless waves of cutting-edge science and technology have brought forth many fruits—some of them sweet and many bitter: these pages survey crowds of imperial, eugenic, racialized, sexist, industrialist, dangerously distributed, and centralized cruelties of computing and new media, and the world burns not just with metaphorical rage. It is also literally burning. If the global emissions curve is not altered, climate experts predict that the wet blue ball we call home will soon be uninhabitable by human civilization as we know it. The COVID-19 pandemic has only increased the global need to collectively care for the lives of others.

Still, bleak realities do not necessarily call for more smash-the-machine thinking (or at least not all the time). Human life as we know it may soon be forfeit, and still, not all is wrong with global computing and new media: to retreat into either crude Luddite self-righteousness or burn-it-all rage would solve nothing except to dispense aspirin to the pained consciences of those of us privileged few who can choose to live with less technology in our lives. I can opt out of social networks; many others cannot. A generation ago the hip rushed online; today the self-proclaimed cool minorities are logging off because, unlike most, they can.

Still, by many standards, modern media and computing technology have ushered in a host of net positives for the world’s population. Access to information and the proliferation of knowledge have never been so high or so widespread in world history: according to a recent United Nations report, far more people have access to a cellphone (six billion) than to a flushing toilet (four and a half billion), with the majority of new mobile users hailing from the Middle East, Asia, and Africa. And since at least the sociologists Malcolm Willey and Stuart Rice’s 1933 Communication Agencies and Social Life, it has been clear that most of the connections in a globally networked world are local. Even as real risks and new dangers attend the uneven distribution of knowledge, power, and connection worldwide, it is hard to deny that computing and new media, whatever else their costs, have kindled and rekindled real-life relations.

The puzzle remains: How then do readers singed by the fires described in these pages live in what the essayist Pankaj Mishra calls the age of anger?1 And perhaps more vexingly, what in the world should one live for? A satisfactory response must fall, of course, well outside of the scope of an afterword, and perhaps even a single lifetime. Nevertheless, a few remarks might be offered in the spirit of the ethical reflection that drives the project of critical scholarship.

The fact is that the human world has never so clearly appeared on the brink of ruin: the human race has suffered devastating problems—epidemics, famines, slave trades, world wars, and the colonization of indigenous peoples—at almost every turn in world history, but perhaps only recently has the recognition of what sociologist Luc Boltanski calls “distant suffering” become so potent and widespread.2 Never before, in other words, has the modern media user had such unprecedented access to witness secondhand, from afar, the finitude and fragility of our human condition. This mediated condition of being—both privileged to be removed from and yet intimately privy to unaccountable human suffering—should shake us to our core and, with it, shed the comfort of the privilege to which so many (myself included) cling.

Media technologies and computing power tend to coincide with straightforward beliefs about progress—sometimes with an Enlightenment-era capital P. That is a mistake, or at least “progress” presents a mistakenly incomplete picture. Instead of looking for enlightenment in the distorting mirror of modern media technologies and always coming up disappointed, let us contemplate the more stoic, Zen, or, perhaps with the philosopher of technology Amanda Lagerkvist, existential possibility that, properly understood, new media technologies should help prepare us to die—or, better put, they should help prepare us to live now, that we might die well.3 By this I mean that new media and computing technologies help accelerate the global spread of what the sociologists Paul Lazarsfeld and Robert Merton in 1948 called the “narcotizing dysfunction” of mass media:4 modern media acquaints the most privileged parts of the world with the least, while at the same time insulating the privileged. Digital connectivity bruises the sensitive souls among us while limiting our capacity, or perhaps worse, our motivation, to make a difference.

This feeling of double-bind despondency—aware but helpless, sensitive but hopeless—ensures, whether by the shock of front-page pessimism (pandemics, earthquakes, mass murders, despots) or by the exhaustion of reflecting on the enormous problems and biases that new media amplifies (racism, sexism, environmental degradation), that no one can lead a life among the menaces of modern media without receiving a kind of accidental exposure to our own and even our species’ mortality. Yet instead of hiding from or leaping to embrace media as memento mori, we must learn from their call to remember in advance our own death: then we may find in computing and new media a rigorous, even healthful training for our own death and the demise of all those we care about.

A call to face and embrace one’s own death may seem prima facie an over-harsh conclusion to a book about technology and media: in fact, there may be little more inimical to the modern mind that seeks prosperity, peace, and beneficent politics than such a call to reconcile ourselves with the brevity of human life and even to release that desperate desire to stave off our own annihilation.

But it is too late to wish otherwise. Encounters with death are not “for hire”; they are a must. Our globe already demands them at every scale of life: climate change and pandemics, two heirs to the nuclear age, are perhaps the defining species-wide issues of our moment, whose profound ethical register combines the apocalyptic (the world as we know it is ending) with the arithmetic (the recent trend of corporate carbon footprint scorecards and epidemiologists’ models). But we cannot not act: our media compel us to it. Watch the TV news. Fight a Twitter firestorm. Organize a movement. Eat meat sparingly. Wear a mask. Measure your most recent travel in its carbon footprint and its weight in the ashes of burned dinosaur bones. On our planet, there is nowhere left to run, even online: the frontier is closed (and should have been foreclosed against long ago). The search to recombine technological solutions and efficiencies will, of course, continue apace, but it is impossible not to face the fact that, on a global scale, the needs of the many continue to far outweigh the efforts of the privileged few. The case for any individual, weighed in the balance of the globe and its problems, is a thoroughly losing proposition. The self is lost. You and I do not matter (not that we ever did).

And yet—and this is the point—far from all is lost: in fact, with the self out of the way, there is much to live for. Indeed, perhaps life begins anew with the loss of the self. Our species will no doubt insist on living, as it long has, in what we might call the aftermath of ourselves. In this view, accepting self-loss does not mandate disengagement, indifference, and ennui. In fact, it may speed the opposite: it may lead to a joining of hands, a sloughing off of the fantasies of infantile omnipotence and puffed-up power. In their place comes a clarion call, with Kimberlé Crenshaw, to collective action across both the illuminating intersections of identity and the underlying unions of our species’ mutual interests.5 This is crucial, for accepting the loss of the self—and with it the concomitant bankruptcy of selfishness—also calls us to recognize and then carefully reclaim and reshape our common lot as mediated creatures. We cannot return the unasked-for gifts of new media and computing; modernity has saddled our species with gifts we neither sought nor can send back (data means “that which is given” in Latin, Russian, and French). Nor can we escape the ever longer shadows of our individual futility in the face of the world’s towering problems. But at least—and this is no small step—we can begin to recognize in the loss of a self a partial antidote to the same despair and despondency that tries to fill the void of the mediated self in the first place: from it sounds out the fire alarm, a call to arms, collective action, and even care.

So the question of life becomes: How do not I but we live now? Let us learn from our media technologies how to be something other than only ourselves. For example, the couplet “people are tech, tech is people!” resonates across this book as alternately a stirring call for collective action (the people of tech, unite and throw off your chains!) and a despairing cry (read in the voice of Detective Thorn in the final line of the 1973 B-movie Soylent Green: “Soylent Green is people!”). Both of these lines offer variations on that hardy perennial of principles that, despite the temptation to alternately idolize or curse tech as an independent agent of change, behind every piece of tech lie, in fact, the messy interdependencies of humans. Whether or not AI ever becomes sentient, there is already a human behind every machine. Artificial intelligence, in particular, requires, integrates with, and obscures human labor: for example, behind most self-checkout registers at the grocery store stands a hidden grocer ready to error-correct the smart scanner. Behind every automated search algorithm toils a content moderator. Capital, run amok, produces social relations that are opaque to those who inhabit them, and, as every tech executive knows, AI is another word for investor capital.

This volume harmonizes with the chorus of critics crying out “tech is human too,” but not in the sense of anthropomorphic machines. Indeed, AI is often billed as an automated big “brain,” when, if anything, the better organ projection for complex filtering in machine learning might be the liver—a distinctly less sexy comparison! Instead, contributors examine the stubborn humanity that persists in the private eyes that must be trained to train automated image-recognition systems on the most abhorrent images, in the dual pleasures of apprehension in facial recognition systems, and in the pleasures of workplace refusal and resistance. Other contributors also demonstrate that, under the guise of meritocracy, computing industry hiring practices have long baked, and likely will continue to bake, into the field disturbing cultural traditions of sexism (on multiple ends of the British empire), racism (on both ends of Siri), Western backwardness in relation to Islamic and Chinese civilizations, and faux beliefs in the generalizability of code into representations of the world.

In the debates and conferences supporting this book, authors probed whether the category of the machine helps us rethink the human: How do we protect the content moderators behind every filter algorithm? Might soldier robots help deconstruct the gender divide? Might bullets that feel press for more reasonable Second Amendment limitations among the morally mechanical—those who feel it when machines, not their fellow flesh and blood, suffer? Might the largely male teams of engineers designing smart tech internalize more than the obvious gender lesson that the assistant robots that serve and sense others’ needs while hiding out of sight should no longer be gendered female by default (Siri, Alexa, Cortana)?

But the moral dilemmas of tech almost always eclipse the solitary self: the social consequences of how we train, educate, and hire coders, content moderators, and human trainers of algorithms are on trial in this volume. So too does the field of computer programming education stand accused of practicing a student bait-and-switch, in which a student programmer transforms from an aspiring universal analyst in the intro courses into an insider mage and specialist in more advanced topics. Behind every tech design lie vast fields of ethical, provincial, and untested baseline civilizational assumptions about what code, writing, and even language mean. Embarrassingly, many of these assumptions, as the case studies on Chinese writing interfaces, Arabic keyboards, and English-inflected Siri show, are simply wrong. It seems that no matter how we try to code or paper over what we would rather keep hidden, our best representations of ourselves still betray a deeper, more disturbing reality that is also beautiful in its irreproducibility: the relations between humans are more diverse, rich, and full of problems than any palette of skins, colors, sounds, genders, or other categories could emulate.

Engineering science faces an analytic paradox: it has the tools to better know but not to solve the world’s problems, even though it often acts as if the opposite were true. Its basic tools for making more efficient and more accurate representations of the world are heralded as the great problem solvers in the current age of data, and yet no representation, no matter how complete or comprehensive, can model the world in ways that suffice to address its problems. The slippage between media subject and object, between tech “user” and “used,” will continuously incentivize humans to push back against the distorting mirror of their own profiles online. Moreover, reality itself, as the philosopher Immanuel Kant recognized, cannot be limited to only that which can be represented. There is more to reality than meets even our boldest innovation. (Consider how many wires make tech “wireless” and how much coal is burned for “cloud” computing.) Yet, to these objections, the standard engineering response is to strive to represent more accurately and, especially after a public scandal, more ethically. What happens when building systems that better represent the diversity of the world only exacerbates the original problem? An emoji that matches every iPhone user’s skin color solves nothing of the struggle for the promotion and reparation of those who suffer social-structural hardships, disadvantages, and biases. Universal representation is the wrong answer to the question of unequal infrastructure. (No one gets off easy: Is, say, the critic who interprets content as representative of larger cultural problems not guilty of the same?) In a world driven by universal representation logics, tech reveals its blind spot to a key condition of human existence: there has never been justice, and at once there must be less injustice now.

This volume calls to redirect attention away from the universal and global, except in the case of urgent collaboration to abate a looming environmental catastrophe, and instead to attend to the immediate and local contexts of lived experience: perhaps we can observe that the globe, as such, does not exist, whereas, again and again, the uneven warp and weft of institutions and local politics do. IBM, like other multinational corporations, is transnational but also distinctive, even filial and personal, in its discourse and discontents. Sound technologies all too often mute our many bodily differences. Script technologies like keyboards miss all but the most basic on-off-hold-swipe touch of digital typing. QWERTY is anything but text-predictive, and keyboards are “mute” to all but the most limited speech strategies. Siri cannot hear your anger. Your keyboard sublimates your oral voice based on assumptions about the Latin alphabet that the majority of the world does not share. Between these two endpoints, the continuum of embodied human experience still walks the earth unsounded and scriptless.

Tech cannot be universal for other reasons too. The varieties of local labor, political economies, and the illiberalities of free markets continue to pile onto the tendency for military research to fund your favorite tech. A roboticist recently quit their field because they could not serve an innovation free of military application—from DARPA research to sex dolls for soldiers. It is also a disturbing, if understandable, fact that the largest server storing child pornography belongs to the US government and its central police force. What appears to be a stable tech infrastructure can evaporate overnight, depending on the crumbling of an empire and whether the Federal Communications Commission labels the tech “platform media.” Still other contributors complicate tech universalist claims by showing that the past, present, and future of tech corporate culture belongs to local workplace resistance. While tech may stretch global, it knows even less how to be local: a world of tech is not the world of lived experience.

Tech will deliver on neither its promises nor its curses, and tech observers should avoid both utopian dreamers and dystopian catastrophists. The world truly is on fire, but that is no reason to believe it will be either cleansed or ravaged at the precise day and hour that self-proclaimed prophets of profit and doom predict. The flow of history will continue to surprise. As described in this volume, many attempts to diversify tech workforces, no matter how well intentioned, may end up exacerbating the very hiring biases driving the lack of diversity; as in the British case, chauvinist biases may pour in to fill the gaps of aggressive, and ultimately regressive, centralized hiring. Among surprising phrases, the once-mocked “series of tubes” turns out to be a suitable metaphor for the global, material, and mucky internet. Or one can play countless games online while playing out no more than the lives of third-string monopolists, second-world designers, and first-person soldiers. Meanwhile, in real life, the world never fails to surprise: even Thomas Hobbes, infamous for declaring life “solitary, poor, nasty, brutish, and short,” died in bed, surrounded by loved ones, at the age of ninety-one. Tech criticism, like all criticism, must not be mistaken for moral panic talk, whose audience often mistakes the carrier of the unfortunate news for the cause of the crisis. Uncertain times call for uncertain media.

In the ongoing search for better heresies and causal explanations, let us pause, with historian of technology Ksenia Tatarchenko, before ceding tomorrow’s future to yesterday’s science fiction.6 May tech observers stop off-loading and outsourcing the imagination of better worlds without first attending to the earth. In the same spirit in which Hegel noted that Minerva’s owl takes flight only at dusk, so too does ethics stretch its wings only after algorithms have already been, in the military jargon, “deployed.” The robots will never take over—that has never been the crisis. Rather, robotic analysis of the future took over our minds and language many decades ago. As Lewis Mumford observed in opening his 1934 masterpiece Technics and Civilization, our species became mechanical before machines changed the world. Not only is tech human; people are the original machines.

The globe is ablaze, and few have the collective language to call to put it out. This book sounds out a call for that language. The challenge for anyone who lives in our broken world is not to defer to some future date the recognition that the needs of the many outweigh the privileges of the few here and now. (Pandemics may make no finer point.) The difficulty of learning to love, live with, and care for others is perhaps the problem of all those who live. Those who overdraw the accounts of the self live unknowingly on the credit of others. The resolution, if not solution, may come to all those who learn to live together now so as to pass on a better world, and soon enough from this world.

All editors harbor conceits about their books. The four of us hope this book proves a “no crap” book in at least two tenses: first, that present-day readers will acknowledge its message by acting on its seriousness and urgency; second, as a consequence of the first, that readers in the hopefully not too distant future will be able to look back and see its message as all too obvious. As I conclude this afterword, the internet is turning fifty years old to the day, surely a fitting moment to stage an intervention and to declare its midlife crisis in full swing. The success of this book’s no-nonsense attitude will be measured by its self-evident irrelevance fifty years from now—by how we learn to live in the aftermath of ourselves. Until then, the question remains: Will its message—your computer is on fire—or the world as we know it burn up first?

Notes

1. Pankaj Mishra, Age of Anger: A History of the Present (New York: Farrar, Straus and Giroux, 2017).

2. Luc Boltanski, Distant Suffering: Morality, Media and Politics, trans. Graham D. Burchell (Cambridge: Cambridge University Press, 1999).

3. Amanda Lagerkvist, “Existential Media: Toward a Theorization of Digital Thrownness,” New Media & Society 19, no. 1 (June 2017): 96–110.

4. Paul F. Lazarsfeld and Robert K. Merton, “Mass Communication, Popular Taste, and Organized Social Action,” in The Communication of Ideas (1948); reprinted in Mass Communication and American Social Thought: Key Texts, 1919–1968, ed. John Durham Peters and Peter Simonson (Lanham, MD: Rowman & Littlefield, 2004), 235.

5. Kimberlé Crenshaw, On Intersectionality: Essential Writings (New York: The New Press, 2020).

6. Ksenia Tatarchenko and Benjamin Peters, “Tomorrow Begins Yesterday: Data Imaginaries in Russian and Soviet Science Fiction,” Russian Journal of Communication 9, no. 3 (2017): 241–251.