5
The Power of Internalization

Everything that has been said in the preceding pages about the origins of digital culture's techniques for self-representation and self-perception could be distilled into a single word: unlike the criminals and patients who were profiled, located, and measured in the nineteenth and twentieth centuries, today's users of social media, location-based services, or wearables profile, locate, and measure themselves voluntarily. Yet what, exactly, does the category of “voluntariness” mean for the present status of subjectivity? Should the decisions made and the measures taken of one's own volition be understood as emancipatory acts against prescribed and enforced manners of behavior? Or does “voluntariness” in these cases imply instead an ongoing process of problematizing the self?

Without a doubt, one of the more striking features of the present is that many processes of normalizing and regulating people, which just a few decades ago had been the mandate of state, scientific, or police authorities, have been handed over to individuals. This development is evident not only with respect to the circulation of personal information but in other contexts as well. In the field of reproductive medicine, for instance, methods of prenatal and preimplantation screening, which make it possible to test young embryos for irregularities or genetic defects, have meant that hardly any children are born any more with conditions such as Down syndrome or cystic fibrosis. In this manner, the free and individual decision of the couple to follow the recommendations of doctors and geneticists fulfills a eugenically motivated agenda that had existed in authoritarian states during the first half of the twentieth century. In 1895, Alfred Ploetz, one of the founders of “racial hygiene” in Germany, recommended the “eradication of newborns” in order to elevate the collective genetic stock to a eugenically desirable level. During the “Third Reich,” as is well known, these ideas were used to legitimate the government's health policy. Today, this same sort of eradication is carried out in a discreet and efficient manner during the earliest stages of a future person's existence, either in the mother's womb or even before an embryo is implanted in the birth mother. The motivations behind these two practices may be fundamentally different – one is concerned with the genetic material of an entire population, the other with a single family's personal happiness – but their consequences remain the same. Both Ploetz's proposal to “prepare, with a small dose of morphine, a gentle death for weak or malformed children”1 and the aims of prenatal or preimplantation screening assert that life should be denied to human beings with flawed genetic constitutions.2

Over the last few decades, interventions undertaken at the end of human existence have shifted much like those employed at its beginning. Around the same time that “racial hygiene” became established in Germany, a debate was under way about “allowing the destruction of life unworthy of life,” which was sparked by Karl Binding and Alfred Hoche's 1920 book with that title. There the lawyer and the psychiatrist recommended, for economic and demographic reasons, the systematic killing of so-called “ballast lives” – a label that the authors applied to terminally ill, comatose, and mentally disabled people.3 This was exactly the sort of “euthanasia,” in fact, that would be implemented without restraint by the Nazi regime. Since the Second World War, of course, the connection between caring for a nation's population and killing the sick with medical assistance has been discredited. Over the last 25 years, however, the readiness in Germany to assist in the deaths of people suffering from certain conditions has been revived, even though this issue was absolutely taboo from the 1950s to the 1980s. Carried out in the name of the “will of the patient,” this practice has been restricted to terminally ill individuals, just as the methods of prenatal and preimplantation screening have been used to benefit individual families. The binding nature of “living wills” or “advance healthcare directives” has been legally established since 2009; since then, passive euthanasia – the removal of life-support systems in the case of unresponsive patients – has been permitted. For more than a decade, active forms of assisted suicide have been discussed by those interested in reformulating Germany's euthanasia laws. Those in favor of liberalizing the law invoke the individual right to a “dignified” death – self-determined, and without having to suffer senility or live in a nursing home. The previous sovereignty of the state to eradicate economically worthless life has thus given way to the sovereignty of the subject to end his or her own life with medical assistance. The question remains, however, whether this development should be understood as a categorical change or as part of a continuum. Both models, after all, allow for the elimination of lives that are supposedly no longer worth living.

What the biological sciences have presented as a transition from demographic and political interventions to the individual's freedom of choice – a tendency shared during the second half of the twentieth century by the field of genetics, which reoriented itself from a science concerned with populations to one concerned with individuals – applies just as well to the technologies of the self that I have described in this book. Up until the late twentieth century, technical limitations and problems of legitimation ensured that the generation of data about individual people – about their biographies, locations, and states of health – was restricted to the police and to scientific authorities dealing with exceptional circumstances. An individual would not become the object of registration unless a manhunt, an investigation, or his or her medical case history justified the costs and legal efforts involved. And the resistance to the first collective registration measures of the computer age, such as the censuses of 1983 and 1987, arose from the concern that the electronic storage of personal data would be equivalent to fabricating charges of wrongdoing.

Under the conditions of present-day media technology, the acquisition and dissemination of personal data do not need to be justified by a crisis scenario. On the one hand, people leave behind traces of their identity and location whenever they use a credit card or search for things online; on the other hand, a general appetite for self-registration, self-location, and self-measuring has developed that resembles the bio-political tendency toward self-eugenics. But what, exactly, do these inclinations say about the disposition of the present? What functions and effects are associated with voluntarily taking over such regulatory processes? According to their most influential theorists, two major epochs can be identified in the history of modern power techniques. Michel Foucault spoke about the “disciplinary power” that, from the eighteenth century onward, enclosed and organized individuals within the spaces of newly established institutions such as schools, barracks, factories, hospitals, and prisons. As Gilles Deleuze remarked in his famous essay “Postscript on the Societies of Control,” the epoch of disciplinary power gradually dissipated over the course of the twentieth century. “We are in a generalized crisis in relation to all the environments of enclosure,” he writes, noting that “ultrarapid forms of free-floating control” have replaced former disciplinary spaces – flexible and open “corporations” have taken the place of factories, while “perpetual training” seems to be replacing schools.4 Yet this perceptive essay, which is now more than 25 years old, was written before a new human image was shaped by digital culture and the biological and neurological sciences. Deleuze's imagined scenario is defined by hegemonic gestures and top-down hierarchies, as is clear from the following remarks: “The conception of a control mechanism, giving the position of any element within an open environment at any given instant (whether animal in a reserve or human in a corporation, as with an electronic collar), is not necessarily one of science fiction.”5 Perhaps it is true that, in the late twentieth century, prisons and factories with punch cards were no longer necessary to regulate deviant and subordinate people, but even the mobile control mechanisms of the time – as the example of the electronic ankle bracelet makes clear – were instruments of coercion and authority.

The electronic collar of the present, in contrast, is a product of the wearables industry, a wristband or a smart watch made by a company like Fitbit, which has adopted the following creed: “We believe you're more likely to reach your goals if you're encouraged to have fun, smile, and feel empowered along the way.”6 Moreover, the insurance company Generali describes the “philosophy” behind its new “Vitality” program, which is organized around collecting data from fitness wristbands, with these words: “This is what's important to us: working together to inspire you to lead a more active life and to eat more health-consciously – without any pressure at all and simply because it's what you want to do.”7 Past the disciplinary power of the eighteenth and nineteenth centuries, and beyond even the control power of the twentieth, the trajectory since the turn of the twenty-first century seems to have led to a third form of authority that could be called preventative power or the power of internalization. It ensures that data registries and conceptions of normal life no longer have to be established by external authorities but are, rather, collectively internalized. Create a profile! Share your own location! Become transparent! Dispose of disabled children! Commands such as these, which were once issued by authorities, are now desires that people satisfy without a second thought.

Competitive individuality

Our general readiness to be visible and quantifiable at all times has spawned forms of self-presentation that resemble product promotions. That the principles of criminological registration have been adopted by the field of marketing was already made clear in my discussion of the profile concept: methods developed by the FBI at the end of the 1970s to identify criminals are now used to identify consumers. It is characteristic of digital culture, however, that these methods are not only being applied by companies or advertising agencies in order to entice potential customers. They are also being implemented by individuals to market themselves. Users of social media, as Zygmunt Bauman has observed, “are simultaneously promoters of commodities and the commodities they promote.”8 Up until 25 years ago, it was hardly even possible to play both of these roles at once. For the great majority of people, as I mentioned at the beginning of this book, there was simply no platform for representing oneself to the public, and the creators of cultural goods who wanted to be in the public eye typically outsourced the advertising of themselves and their products to others. The new media system has caused this division of labor to disappear. Almost every author, filmmaker, and musician today who puts out something new will wage a personal marketing campaign on his or her profile and inundate friends and followers with product references during the weeks before the release. To refuse to do so and to insist on the separation of artistic creation and PR work has become an exotic position.

This marketing-driven approach to one's own self no longer creates a rift in the contemporary human image. Georg Lukács's diagnosis of the “reification” of social relations, which was central to the political and economic critique during the second half of the twentieth century, has become an enigmatic category under today's conditions.9 Even in the census rulings issued by the Federal Constitutional Court, traces of Lukács's concept are recognizable in the prohibition against “forcefully registering people in their entire personality … and thus treating them like an object,” and in the constitutional directive stating that “an ‘inner space’ must be preserved for the sake of the free and self-determined development of an individual's personality.”10 In digital culture, this connection between “free development” and a protected “inner space” no longer exists; the development of the self depends instead on permanent media representations. One's own person is understood as a publicly circulating simulacrum whose attractiveness and value have to be confirmed and reinforced in a continuous process.

This activity has become so ingrained that forms of evaluation and rating are now second nature in current modes of communication. On YouTube, young gamers regularly conclude videos of themselves with comments such as “If you like the content, hit the like button.” Few people are unfamiliar with the hopeful impulse to check in on their own social media postings every few minutes to see whether the number next to the thumbs-up and heart icons has increased. Yet this general imperative to evaluate things characterizes more than just our manners of speech and behavior; it has also been integrated into the mechanical operations of programs and services as a necessary component of daily transactions. In the case of Uber, for instance, no transaction is complete until the driver has been given a rating between one and five stars – thus, evaluating another person is no longer a personal decision but, rather, a technical default setting.

In a compelling book about downward economic mobility in Germany, the sociologist Oliver Nachtwey has recently discussed the “competitive individuality” that pervades the present: “It is a signature of our time,” he observes, “that market and competitive mechanisms are implemented in nearly all spheres of society.”11 The incessant opportunities for evaluation during social interactions are a clear example of this. The proliferation of this phenomenon is also evident in the talent shows on television today (much like the playful reinterpretation of the formerly genuine anxieties surrounding registration). Ever since Big Brother and Popstars debuted in 2000, dramatic and artificially protracted decisions about which contestants should be kicked off a given show have become a familiar ritual. Heidi Klum, as is well known, relishes drawing out this selection process in her program Germany's Next Topmodel by tormenting the candidates with misleading insinuations about which of them has failed to make the cut; the obligatory close-up shots focusing on the tears of the rejected girls are the climax of every episode. Today, such decision processes are a common feature of countless reality shows, and this surplus of repetitive drama has numbed us to the fact that staged competitive situations of this sort are still a relatively recent phenomenon. Competitions like this were not a part of television entertainment until the late 1990s, and it is telling that their appearance coincided perfectly with the emergence of surveillance shows. Expedition Robinson and Big Brother were the first programs in which groups of contestants were both relentlessly observed by the camera and gradually whittled down by an internal voting system. Total surveillance and internal competition – two categories that had once been fundamentally distinct – were united in these tense moments of television history as though they had always belonged together. This casual synthesis, however, makes it doubly apparent that the source of collective anxiety shifted around the turn of the twenty-first century. Unchecked surveillance by external authorities is no longer a cause of dread. The contestants on these programs have no qualms at all about the hidden cameras and microphones surrounding them – in fact, they welcome their presence. What causes panic, in contrast, is the idea of losing a competition, earning negative evaluations, and being “voted off.” In this respect, these television shows revolve around the same screening procedures as those used in every one of today's employment assessment centers.

The willingness to market oneself is therefore not a consequence of new media technology alone. Although this readiness has been enhanced by new technological formats, over the past 25 years the seemingly incessant need to be assessed has had just as much to do with economic conditions – with the immense structural changes that have been taking place since the 1980s in corporations and the job market in Western Europe and North America. With Germany as his example, Oliver Nachtwey has analyzed in detail the constant pressure faced by people whose professional careers consist of a fragile chain of temporary contracts. In his study, he defines the figure of the “self-employed employee” or “entreployee” – someone working intermittently within one project-based corporate structure or another – as the “model of modern subjectivity,”12 much as Nikolas Rose and Ulrich Bröckling had used the concept of the “entrepreneurial self” to describe similar circumstances a few years earlier. Evaluation and self-representation are unavoidable facts of life for project collaborators whose employment status is always probationary: “Modern capitalism,” according to Nachtwey, “does not function without collaboration, without the voluntary participation of individuals.”13 The fact that, since the 1990s, career advisers have been focusing so heavily on the “profiles” of job candidates should be understood in this very context (I discussed this phenomenon in the first chapter). The profile, as I see it, is the undeniable nodal point of “competitive individuality.” To the extent that permanent jobs have been replaced by short-term and project-based contract positions, it has become all the more necessary to maintain a carefully cultivated profile in order to tip the balance in one's favor during the next round of applications – that is, during the next competition. (In the world of academia, for instance, this mentality is now par for the course. Ever since tenured professorships or long-term research positions became the exception instead of the norm, part-time or limited-term academics have had to dedicate more and more energy to perfecting their “research profiles” for future applications. Recording one's own accomplishments, which previous generations of scholars regarded as a burdensome administrative chore, has become a primary activity for those under constant evaluation and in constant need of recommendations. Maintaining a research profile now requires nearly as much attention as the research itself.)

The governability of the self in digital culture

Mechanisms of internalization are at work whenever people form ideas about “normal” life by means of self-tracking, and whenever they take over the role of external registering authorities by cultivating their own “profiles.” Although these methods can strengthen the autonomy of the subject by emancipating him or her from intermediary authorities, they also contribute to the growing need to fulfill regulatory requirements. At the heart of this necessity are two sets of ideas that operate on the threshold between the present and the future: “risk” and “precaution.” The reason why so many people wear a fitness wristband, check their heart rate on smartphones, purchase “behavior-based” life-insurance policies, or have their own genome tested for potential genetic diseases is to minimize their future health risks by exercising precaution. A social consensus has formed about the urgent importance of preventative measures for our safety and well-being. Oddly enough, this trend is in stark contrast to the growing acceptance of deviance that now defines the aesthetic realm. No one will bat an eye if a colleague shows up at work with a red mohawk and an arm covered in tattoos, but heads will shake and criticism will be voiced if someone rides a bike without a helmet, smokes cigarettes throughout the day, or neglects to be screened for colon cancer at the age of 45. A mere 25 years ago, exactly the opposite attitude prevailed.

In the human sciences, pioneering ideas about prevention happened to come from the very fields that played a prominent role in the history of measuring techniques. At the beginning of the twentieth century, Hugo Münsterberg remarked that thinking about “prevention” was a central aspect of psychotechnics.14 In his introduction to behaviorism published in 1925, John Watson stated: “It is the business of behavioristic psychology to be able to predict and control human activity. To do this it must gather scientific data by experimental methods. Only then can the trained behaviorist predict, given the stimulus, what reaction will take place.”15 In the 1940s, this sort of analysis would be adopted by the new field of cybernetics and refined for military purposes.16 In the field of criminology, electronic data processing was likewise embraced in the name of prevention. As early as 1968, Horst Herold referred to crime prevention as “the most important yet most neglected means of fighting crime.”17 More than just a tool for comparing information, data processing as he envisioned it would contribute above all to the systematic prevention of future criminal acts.

In today's digital culture, these cybernetic and criminological approaches to prevention go by the terms “micro-profiling” and “predictive analytics.” The belief is that the electronic analysis of large data sets should make it possible to predict crimes in certain areas of a city and within certain sectors of the population. With this alleged knowledge of the future, preventative measures can then be put in place – such as an increased police presence – and manhunts can be initiated in advance of any wrongdoing. In this respect, preventative knowledge serves to enhance the control that authorities can exert over individuals, just as Münsterberg, Watson, or Herold had hoped all along. Regarding the current status of subjectivity, however, it is crucial to stress that these same preventative measures are being voluntarily implemented by individuals as control mechanisms over themselves. The racial profiling practiced by the police at train stations and on public transportation, which places individuals with certain ethnic features under stricter surveillance on account of predictive suspicion, is reflected in the self-made profiles of those networking on LinkedIn, while the GPS tracker placed around the ankle of someone on parole is reflected in the fitness bands worn by self-trackers. The interaction between “techniques of domination and techniques of the self,” which Michel Foucault emphasized in his lectures on the art of government, is therefore vividly present in digital culture.18 “The contact point,” Foucault observed, “where the way individuals are driven by others is tied to the way they conduct themselves is what we can call, I think, government.”19 According to Ulrich Bröckling and his co-authors, this has the following implications for the “governmentality” of the present: “Government is not related first and foremost to the suppression of subjectivity but above all to its ‘(self-)production’ or, to be more precise, to the creation and promotion of technologies of the self that can be tied to the government's goals.”20

Profiles, location-based services, and measuring practices are paradigmatic elements of this promotion. They stabilize a political constellation that accommodates totalitarian methods of registration (according to Orwell's standards and those of his readers in the 1970s and 1980s) and yet is even more problematic because it is difficult to escape and thus difficult to critique. However inconsequential it might be, the one consolation in the novel Nineteen Eighty-Four was that it presented, from the perspective of both the characters and its conventional narrative, a clear dividing line between despotism and freedom, mendacity and the truth. The authorial narrator describes the infrastructures of state power from a distance; he is not caught up in its labyrinth, and this external position allows him space for reason and critique. Moreover, the large class of so-called “proles” in the novel, who live without rights or obligations but constitute 85 percent of Oceania's population, are illustrative of the (often overlooked) fact that the overwhelming majority of people in Nineteen Eighty-Four are not burdened by any state oversight at all. There are no telescreens watching over them in their apartments. If there is any remaining hope for humanity in Orwell's dystopia, it lies in this segment of society: “The proles had stayed human.… If there was hope, it lay in the proles!”21 What, in today's digital culture, is analogous to these outsider positions? It is evident that refuges of this sort, which even Orwell's horrifying vision allowed to exist, are far more difficult to locate and personalize today. With regard to media technology, the area beyond the reach of registration has diminished to the extent that the general desire for self-datafication has grown. At least since 1989, the category of “class” has played a subordinate role in social, economic, and political debates, and this is largely because the prominent discourse about personal motivation – about everyone's “passion” to make his or her own success story come true – has pushed the realities of class distinction and the notion of class solidarity into the background.22 The despotic surveillance state, which, 35 years ago, many people in West Germany feared was looming, never came to be. That said, the place and effectiveness of critique have undoubtedly eroded in today's “liquid modernity.” In 1983 and 1987, the opponents of the census were able to attack the office buildings where all of the surveys were kept. It is not possible to throw bombs at the cloud.

During the last quarter of the twentieth century, what forces were responsible for this internalization of regulatory processes? In his recent book on what he calls the “society of singularities,” the sociologist Andreas Reckwitz has drawn attention to the fact that, over the past 25 years, “the technological complex of computers, digitality, and the internet has led to the ongoing fabrication of subjects, objects, and collectives as unique.”23 With the format of the “profile,” according to Reckwitz, “the digital subject attempts to demonstrate his or her particular and nonexchangeable personality.”24 The basic thesis of Reckwitz's extensive analysis, according to which late-modern society has in many ways been defined by a logic of the singular and particular, has a number of points of contact with the ideas presented in this book. The troubling development that I have described here – that is, the recent transformation of police and criminological methods of identification into technologies of self-empowerment – can be understood precisely in the sense that Reckwitz has in mind. In today's culture, the social imperative to be “unique” and “authentic” has become so powerful that even certain formats of registration, which had long been reserved for stigmatizing deviant subjects, are now being used to produce this uniqueness – the “profile” as well as the tattoo, location technology as well as devices for measuring bodily functions. Just 50 years ago, all of these methods could have been said to generate individuality only in the sense that they made it possible to recognize deviance. Today they are supposed to guarantee, for each of their users, a fulfilling sense of authentic subjectivity.

In the early 1980s, when Reagan and Thatcher's “new economy” inspired Michel Foucault to formulate his ideas about “governmentality,” two concepts were beginning to take hold and merge. Today they are inextricably linked, and they have done much to redefine our conception of humanity. The first comes from the sphere of political economy, and the second from the sphere of media technology. The new economic liberalism, which embroiled formerly non-economic sectors of life in the logic of economics, gave rise to increased competition, the rhetoric of self-motivation, and the diminution of the so-called “welfare state.” Meanwhile, as Fred Turner has shown, the Californian success story of digital culture hinged on the idea that personal computers and the creation of “virtual communities” were vehicles of personal autonomy and self-empowerment. In the late twentieth century, individuals were thus encouraged from two fronts to emancipate themselves from the constraints of state institutions. This new economic mentality and the vision of cyberspace utopias combined to promote self-government over external government, open competition over government regulation. In the mid-1990s, terminology that had emerged from the political left around 1968 – “self-responsibility,” “self-determination,” “flat hierarchies” – was adopted by dot-coms and startups and assimilated into the principles of the new economy. Today, this fusion is almost seamless. The free-market economy and the pursuit of profit have formed a proud alliance with the practices of ethics and cultural critique. The figurehead of this collaboration is the “social entrepreneur,” a term that itself blends together the social and the economic, and its signature accomplishment is the “sharing economy,” which has converted private and seemingly non-monetizable aspects of life – one's own bed, closet, and passenger seat – into lucrative sources of income.25

During the Super Bowl on January 22, 1984, Apple introduced its new Macintosh computer in one of the most famous television commercials ever aired. Lasting one minute, the advertisement depicted an Orwellian scene of uniformed party members marching down a hallway full of telescreens to hear an address from “Big Brother.” From a gigantic video screen, he intones: “Our unification of thoughts is a more powerful weapon than any fleet or army on earth.” The camera cuts to a young woman in red running shorts – everything but her is black and white – who is racing toward the screen with a sledgehammer and being chased by the government's henchmen. Just as “Big Brother” is ending his speech, the woman throws the hammer at the screen, causing it to explode. A voiceover makes the following announcement: “On January 24th, Apple Computer will introduce Macintosh. And you'll see why 1984 won't be like 1984.” More than 30 years have passed since then, but from the perspective of today's digital culture it is debatable whether things are really so different from Orwell's prophecy. All around the developed world, people are staring at the same screen in the same casing and are adhering to the same model of self-representation. Having been taken over by individuals, processes of regulation have become more efficient than ever before. The promise of developing our own identities is now a more powerful weapon than the unification of our thoughts.

Notes