3
Cavity Searches: Bodily Measurements and the Quantified-Self Movement

The newfound desire to measure one's own bodily activity resulted from an awareness of human inadequacies. This, at least, is the starting point taken in the essay that first popularized the burgeoning practice of “self-tracking” in digital culture. In April 2010, the technology journalist Gary Wolf, who had founded the so-called “quantified-self movement” two years before, published an article in the New York Times Magazine that focuses on the divide between the objectivity of electronically collected bodily data and the inaccuracy of subjective forms of perception and expression. “[M]any of our problems,” the author observes, “come from simply lacking the instruments to understand who we are. Our memories are poor; we're subject to a range of biases; we can focus our attention on only one or two things at a time…. We lack both the physical and the mental apparatus to take stock of ourselves.” And from this sobering diagnosis he draws the following conclusion: “We need help from machines.”1

In this much-discussed text, Wolf stresses that, whereas faith in the evidential value of numerical data is uncontested in scientific, economic, or political contexts, the “cozy confines of personal life” had long managed to remain untouched by this instrument of knowledge. Beyond keeping a journal, any form of detailed self-documentation seemed somewhat ridiculous, or even creepy. This attitude, according to Wolf, is currently undergoing a radical shift. In recent years, he has observed a “self-tracking explosion”: “Sleep, exercise, sex, food, mood, location, alertness, productivity, even spiritual well-being are being tracked and measured, shared and displayed.” The essay mentions a number of techniques used by “quantified-self” enthusiasts to do such things as count their every step, constantly record their blood pressure, track the hours and quality of their sleep, or painstakingly document changes in their mood. Taken together, these are people who, as Wolf remarks, would have been dismissed as oddballs not long ago but today are regarded as pioneers of a thriving movement. They are replacing the “vagaries of intuition with something more reliable,” and they are abandoning what had formerly been considered an “inchoate flow of mental life” in favor of the clearly differentiated, identifiable, and cross-referenced elements of a “quantified self.” As for the reasons behind the growing popularity of self-tracking, Wolf cites four technological developments: smaller and more accurate electronic sensors; micro-computers in the form of smartphones; social media as a place for sharing one's findings; and the seemingly unlimited memory capacity of the “cloud.” These four elements of digital culture have made it possible to collect data about one's own body and psyche – a formerly expensive procedure reserved for scientific and medical laboratories – simply by using a smartphone to unite a number of previously separate testimonies about one's identity: diary entries, bills of health, medical records, personal IDs, and so on.

The goal is to attain a better understanding of human beings through measurements, and Wolf places this approach in opposition to another technique, one that represented, throughout the twentieth century and beyond, the zenith of our ability to know anything about ourselves: “A hundred years ago,” he reflects, “a bold researcher fascinated by the riddle of human personality might have grabbed onto new psychoanalytic concepts like repression and the unconscious. These ideas were invented by people who loved language. Even as therapeutic concepts of the self spread widely in simplified, easily accessible form,” he goes on, “they retained something of the prolix, literary humanism of their inventors. From the languor of the analyst's couch to the chatty inquisitiveness of a self-help questionnaire, the dominant forms of self-exploration assume that the road to knowledge lies through words.” Proponents of the quantified-self movement are taking “an alternate route”: “Instead of interrogating their inner worlds through talking and writing, they are using numbers” – a dichotomy that the author also underscores by juxtaposing what might lie beneath the surface with what is simply on it. Wolf thus draws a sharp distinction between the goals and methods of self-measuring and those of therapeutic culture: “When we quantify ourselves,” according to his essay, “there isn't the imperative to see through our daily existence into a truth buried at a deeper level. Instead, the self of our most trivial thoughts and actions, the self that, without technical help, we might barely notice or recall, is understood as the self we ought to get to know.”

What Gary Wolf overlooks here – or intentionally fails to mention – is the fact that the preference for measurable data over mere words is of course not a radically “alternate route” of human understanding but rather a new rendition of an old anthropological debate. From their very beginning, the human sciences have wrestled with the question of whether it was best to interpret human beings through linguistic or bodily signs. In the history of knowledge about the self, Freud's “talking cure” did not, as Wolf would like us to believe, set in place a non-referential foundational paradigm; rather, the psychoanalytic method around the year 1900 should be understood as one prominent stage within a long scientific dispute that first came to a head in the famous “physiognomy debate” between Johann Caspar Lavater and Georg Christoph Lichtenberg in the 1770s. In his aphorisms and disquisitions against the principles of physiognomy, Lichtenberg contested the idea that intellectual inclinations and personal characteristics could be deciphered from human facial features, but his insistence on the anthropological value of language (“Ten words from the language of a people are more valuable to me than one hundred of their speech organs preserved in alcohol”2) remained, within the newly founded human sciences and among their most prominent representatives, a marginal position throughout the entire nineteenth century. In contrast, Gall's phrenology, Broca's craniometry, Lombroso's criminal anthropology, Fechner and Wundt's psychophysics, and Galton and Bertillon's anthropometry cemented the belief in the objective measurability of human beings, their bodies, and their psyches. Processes of standardization, classification, and discrimination were validated on the basis of such measurable findings. And even when psychoanalysis established itself as an instrument of self-knowledge, it did not, as Wolf claims, occupy an exclusive or monopolistic position within this sphere of knowledge. Competing disciplines such as psychotechnics or (later on) behaviorism, which endeavored to evaluate the “inner man” on the basis of measurable data instead of words, exerted a similar amount of influence over the course of the twentieth century.

Not least for these reasons, it is instructive to examine the popularity of self-measuring techniques in today's digital culture, which for the past decade have been subsumed under umbrella terms such as “life-logging” or the “quantified self,” in terms of their historical and scientific genealogy. Although the origin stories of their advocates tend to suggest otherwise, these movements did not arise from nothing. Without a doubt, their specific practices are due to the technological conditions of the last 15 years, but their basic approaches to acquiring knowledge can be traced back to far older scientific models. One of these models pertains to the conviction of self-trackers that there must be a consistently reliable – “natural,” as it were – translational code between bodies and the data in question. The measurements are thought to speak for themselves, and the channels that translate the obscure and amorphous interiority of “bodily sensations” and “moods” into numbers and graphs are believed to be invulnerable to distortions, errors, and misreadings. Mechanically generated data have no history, no contingencies. This is the guiding belief of the quantified-self movement, just as it has been constitutive of the measuring principles followed by the human sciences since their beginnings.

The activity of self-tracking raises fundamental questions about the status of the subject in digital culture, questions that have also been addressed in my discussions of the profile concept and the widespread use of location technologies. Which aspects of these methods should be understood as emancipatory, and which as repressive? In his essay, Wolf speaks about the ambition to “democratize” knowledge about our own bodies by means of smartphones and freely accessible digital measuring instruments. Yet this attempt is opposed by another tendency or constellation, which Wolf himself mentions in passing. This is the “policeman inside all of our heads,” who has been brought to life by the self-trackers’ uncurbed desire to be recorded.3 The question remains to what extent these two views of the self – one emancipatory, the other from the perspective of the police – supplement each other or come into conflict.

Fitbit

In his article, Wolf also mentions a young company from San Francisco that, in the fall of 2009, put a device on the market that can count the steps of its wearers, calculate the number of calories they have burned, and measure the quality of their sleep. At this time, the so-called “Fitbit” tracker still had the form of a clasp and was meant to be attached to a belt or pocket. It was not until 2013 that the company began to install the sensor in colorful plastic bracelets, which, together with the availability of its own smartphone app, drastically increased the sales of the device and turned it into a distinctive feature of the growing quantified-self movement. Today, Fitbit has a market value of around 6 billion dollars and has become the gold standard of the so-called “wearables” industry. In 2017, the company was responsible for a quarter of the more than 100 million wearable sensors sold worldwide.4

The assortment of Fitbit products now includes around ten models, from a simple clip to a digital scale. The company's most popular devices are currently its screenless “Flex” wristband, which can be purchased in Germany for 79 euros, and its “Charge” smart watch, which sells for 149 euros. In the Fitbit Flex, a motion sensor counts the total number of steps taken in a day, and the wristband displays this progress in the form of flashing points (one point represents 2,000 steps, and five the daily target of 10,000). To monitor the quality of their sleep, which the wristband records by tracking a body's movement during the night, users have to activate the Fitbit app, which translates the tracker's findings into graphs and tables. This app is also needed to see how many calories have been burned throughout the day. The more expensive Charge model presents most of its measurements directly on the watch's screen. In addition to counting calories and steps, it can also measure one's heart rate and rank the exertion required by various athletic activities. It is also able to track its wearer's location with GPS (in case someone wants to review his or her jogging route, for instance), but this information is only displayed on the user's synchronized smartphone.
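
To make this display logic concrete, the following is a minimal sketch (written in Python for illustration; it is not Fitbit's firmware or API) of how a five-point progress display of the kind just described might map a daily step count onto lit indicators, assuming the 2,000-steps-per-point scheme mentioned above.

```python
STEPS_PER_POINT = 2_000  # each indicator point stands for 2,000 steps; five points mark the 10,000-step daily target


def points_lit(steps_today: int) -> int:
    """Return how many of the five indicator points would light up for a given daily step count."""
    return min(5, steps_today // STEPS_PER_POINT)


if __name__ == "__main__":
    for steps in (1_500, 4_200, 9_999, 12_000):
        print(f"{steps:>6} steps -> {points_lit(steps)}/5 points")
```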

What, exactly, makes Fitbit's products so attractive for so many millions of people? How do they function differently from earlier devices designed to measure and improve one's own health and fitness? On Fitbit's website and in the remarks of “quantified selfers,” one particular argument in favor of wearables recurs again and again, which is that such wristbands and related products enable, for the first time ever, utterly exhaustive data sets to be collected. Mobile self-tracking devices are, as Gary Wolf stressed, “meant to be carried on the body at all times,”5 and Fitbit's own website concentrates on this omnipresence as well: “Every moment matters and every bit makes a big impact. Because fitness is the sum of your life. That's the idea Fitbit was built on – that fitness is not just about gym time. It's all the time.”6 Whereas, in past decades, the ambition to lead a healthy life consisted of interspersing brief periods of physical activity into one's sluggish workaday existence – the morning jog, time at the gym, taking the stairs instead of the escalator – Fitbit now promises to translate such periodic attentiveness to one's well-being into an ongoing condition. The instrument attached to one's body simultaneously serves as a constant means of both control and motivation. According to one of the company's YouTube videos, “Fitness doesn't follow a formula. It's the sum of your life.”7 In 2014, Fitbit attempted to illustrate this claim with a global advertising campaign that involved the recitation of a long string of compound words, each of which ended in “-fit.” By wearing the products, that is, the customer could not only become “racefit” or “hikefit” but also “lovefit,” “kissfit,” or “dadfit,” and thus aspects of life such as love, intimacy, and parental responsibility were lumped in with more conventional components of a “healthy” lifestyle. Accordingly, the campaign's hashtag was #itsallfit. The company's goal is therefore to record, in the name of fitness, data about one's entire existence, in which there is no longer any distinction between work and family, body and soul, productivity and downtime, or even between being awake and being asleep. For, “When it comes to reaching your fitness goals,” as the company claims, “steps are just the beginning. Fitbit tracks every part of your day – including activity, exercise, food, weight, and sleep – to help you find your fit.”8

In all of this emphasis on the seamlessness of life, it is possible to see a technological constellation that has become characteristic of digital culture on account of the general use of smartphones and the comprehensive availability of wireless networks over the past decade. The ubiquity of networks is one of the most important topics in media theory today and, as a concept, its pervasiveness is especially clear in advertising campaigns and promotions for self-tracking devices. Wearing a Fitbit product is related to using a treadmill at the gym in the same way that today's constant access to wireless networks is related to “dialing up” to the internet during the era of modems and desktops. This omnipresence of networks, moreover, also marks the central difference between digital fitness wearables and previous methods and devices for measuring oneself, such as the bathroom scale. Although the latter, which has been around since the 1920s, may have forged a similar connection between data, bodies, and the desire for optimization – it, too, involves outsourcing one's self-assurance to a technical apparatus – the measurements that it produces are ephemeral and can only be viewed by the person standing on it.9 Fitbit and other self-tracking instruments, in contrast, are not concerned with people measuring themselves in the privacy of their own homes, and they are also unconcerned with the obsessive commitment of professional athletes, who often train in solitude to achieve new records. They are rather meant to encourage social and technological interaction among their users; in the quantified-self movement, body-consciousness and communication are inextricably intertwined. Fitbit's commercials underscore this alliance in two ways. On the one hand, they hardly ever show anyone alone, but rather couples, families, or groups of friends and colleagues who get together to exercise, take a walk, or prepare a healthy meal. On the other hand, these short clips always draw attention to the fact that Fitbit's products, in addition to being measuring instruments, are also tools for communication that can display, for instance, incoming calls or texts received on one's synchronized smartphone. Even the basic “Flex” model fulfills this function by relaying signals through its five blinking points. “Stay connected with your friends while working out,” says one of the company's commercials,10 which regularly show people interrupting their workouts to pick up the phone.

That Fitbit places a high value on networking is also apparent from the way it is always encouraging users to “share” their own data. Recall that Gary Wolf counted the establishment of social networks among the four technological preconditions behind the success of the quantified-self movement. Fitbit confirms this idea by its concerted efforts to combine the processes of self-measurement and self-recording with the practice of promulgating and comparing information within a community. Under the heading “Motivation & Friends” on its websites, one reads: “Use Facebook and email to find and connect with Fitbit friends so you can send motivational messages, share stats, and cheer each other on.”11 The network that every Fitbit customer is supposed to be building is thus simultaneously social and competitive: “Stay encouraged to move more by using your steps to climb the leaderboard, or compete with friends and family in Fitbit Challenges,” as the company recommends.12 On the one hand, the community of Fitbit members is thus one of friends and family; on the other hand, however, it is also a community of competitors. “Fitbit tracks every part of your day,” and it does so in the name of a playfully presented but perpetual competition.13

This blending of the social and the competitive is significant, however, because the digital methods of self-measurement have by now been embedded in contexts that extend beyond their limited use by individuals or groups of personal acquaintances. Such broader contexts include, most importantly, private and state-run insurance agencies, which now increasingly tie their fees, services, and discounts to information provided by their customers’ self-tracking devices. At the present moment, the dual function of wearables could not be clearer. The rhetoric of the quantified-self movement is about self-empowerment and nothing else. Thanks to the ease and reliability with which we are now able to monitor our own bodily functions (or so the argument goes on many blogs and forums), it is largely possible to do away with the traditional healthcare apparatus – places such as medical practices, laboratories, and pharmacies, whose services are too expensive and whose level of care is insufficient. The paternalistic relationship between doctors and patients has been supplanted by emancipated self-trackers: “These new smartphone apps and tracking devices,” according to the technology writer Richard MacManus, “were putting people in control of their day-to-day health data for the first time.… [W]e're moving into a world where we are taking responsibility for our own health, or at least for the measurement and regular monitoring of key health data.”14 This promise of autonomy, however, is supposed to spring from an ensemble of data that, as the advertisements for Fitbit and similar products make clear, derives entirely from permeable networks. “Self-Knowledge Through Numbers,” which is the motto of the quantified-self movement, is thus meant to be achieved within a completely open and interconnected web of relations that can only contribute to the knowledge of the self-tracker by appropriating knowledge from others.

Unprecedented sovereignty awaits, but only at the price of making oneself as identifiable as possible, and the conception of humanity entailed by digital self-tracking oscillates between these two poles. This ambivalence is especially apparent in the latest innovations devised by the insurance industry, which are subsumed under the umbrella term “smart insurance.” Since July 2016, the company Generali, which, with some 14 million clients, is the second-largest insurance firm in Germany, has offered a supplementary program called “Vitality,” which can be purchased in conjunction with a life- or disability-insurance policy. The cost of annual premiums and the opportunity for rebates and special benefits are determined by the number of so-called “Vitality points” that the insured person has accumulated throughout the previous year. After signing a contract, new customers are obliged to take a health test on the company's website and to download the “Vitality app” on their smartphones, which in turn has to be synchronized with a fitness wristband. Every step and every athletic activity is tallied and added to one's Vitality account (as of fall 2017, it is also possible to earn additional points by buying healthy groceries in participating stores, which will keep track of scanned barcodes and relay this information to Generali). When a new client is deemed to have met all the criteria of the health test, he or she is awarded the base-level status of “bronze.”15 High daily step counts, visits to the gym (which, like affiliated grocery stores, records this information and passes it along to the insurance company), health-conscious purchases, and regular preventative medical check-ups all add to the number of points. “Gold” status is achieved at 30,000 points, at which stage the customer's premium is reduced by up to 11 percent; a 16 percent reduction results from achieving the “platinum” status, which is granted at 45,000 points. Moreover, Generali has partnered with a variety of companies, including Adidas, Expedia, and some department stores, that will grant discounts of up to 20 percent for “gold” members of the program, and up to 40 percent for “platinum” members. Points expire at the end of every contract year, but customers are allowed to retain the status that they have earned.
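
As an illustration of the tier logic described above, here is a minimal sketch in Python (not Generali's actual software) that maps an annual Vitality point total to the statuses and maximum benefits cited in the text; any intermediate tiers and the insurer's real calculation rules are not modeled.

```python
def vitality_status(points: int) -> dict:
    """Map an annual Vitality point total to status and the maximum benefits cited above.

    "Gold" is reached at 30,000 points (premium reduced by up to 11 percent,
    partner discounts of up to 20 percent); "platinum" at 45,000 points (up to
    16 and 40 percent). "Bronze" is the base level awarded after the initial
    health test; its benefits are not specified in the text and are set to zero here.
    """
    if points >= 45_000:
        return {"status": "platinum", "max_premium_reduction": 0.16, "max_partner_discount": 0.40}
    if points >= 30_000:
        return {"status": "gold", "max_premium_reduction": 0.11, "max_partner_discount": 0.20}
    return {"status": "bronze", "max_premium_reduction": 0.0, "max_partner_discount": 0.0}


# Points expire at the end of each contract year, but the earned status is retained.
print(vitality_status(32_500))  # -> gold: up to 11% premium reduction, up to 20% partner discounts
```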

“Know your health,” “Improve your health,” and “Enjoy your rewards” are the three edicts of Generali's Vitality program, which is the first of its kind in continental Europe (similar programs have existed in Great Britain and the United States for somewhat longer). “The goal is to motivate our clients to lead healthy lives and to reward their progress in doing so. This will redefine insurance in Germany.”16 Even statutory health-insurance agencies in Germany are beginning to develop similar rewards programs.17 This proclaimed redefinition of insurance involves, above all, gaining access to a previously unknown and constantly updated abundance of information about every client in order to learn details about his or her exercise routines, eating habits, and general state of health (a client's blood sugar and cholesterol levels, for instance, will be conveyed to the company after each preventative check-up). “Here, every step you take counts. And all of these should be kept track of as well,” states the Vitality program's website in language that could have been taken from a Fitbit commercial. The difference, however, is that the playful competition depicted on Fitbit's website, with its virtual prizes and friendly contests, has been transplanted into the hard economic reality of life and disability insurance, where decisions are made about one's health benefits and premium payments.

The relentless quantification of fitness has thus been accompanied by a new form of individualization in the fields of health insurance and healthcare. “Vitality points” (and similar concepts such as the so-called “health score” devised by the Swiss company Dacadoo, which is used to measure the physical and mental conditions of clients on a constantly updated scale between 1 and 1,000) are intended to transform the complex matter of health into a precisely calculable value that is regulated by the individuals themselves, and whose fluctuations can be traced back, almost in real time, to clearly identifiable causes such as physical activity or nutrition.18 All of this focus on self-empowerment is therefore intervening in some of the basic operations of social and healthcare policy, which have been in effect for more than a century. The concept of the “welfare state,” as developed in late-nineteenth-century Europe, is beginning to transform into a concept that could be called the “welfare self.” In this way, the practices of the quantified-self movement and their applications in the insurance industry are fueling a tendency that sociologists such as Ulrich Bröckling and Thomas Lemke first described more than a decade ago – namely, a new understanding of health as a commodity for which individuals have to take personal responsibility.19 Accordingly, sicknesses and impairments are viewed less as the negative effects of social constellations than as the result of personal negligence, a lack of “motivation,” and the failure to seek “preventative care.” (Breast and colon cancer, for instance, are no longer regarded first and foremost as strokes of fate – they rather raise the question of why the patient had let things advance so far.) In this light, it is interesting that the Vitality website contains a freely accessible survey that any visitor can fill out to check his or her “Vitality Age.” The first pages of the test request information about body measurements, eating habits, and workout routines before then turning to issues of “mental wellbeing”: “During the last thirty days, how often have you felt worthless or so depressed that nothing could cheer you up?”20 The content of this survey makes it clear that, according to the logic of the insurance program, one's lifestyle, physical condition, and mental health form an inseparable unit that is defined by clear cause-and-effect relations. Those who have poor eating habits or exercise too infrequently jeopardize not only their blood pressure and cholesterol levels but also their courage to get on with life. Self-tracking devices enable one to create a seamless protocol of this sort of lifestyle, and the comprehensive and all-encompassing promise of self-recording is confirmed by the fact that no distinction at all is made between the physiological and psychological implications of the measurements in question.

Genealogies of self-tracking

Measuring human beings: according to the understanding of the quantified-self movement, this practice serves to promote the autonomous acquisition of knowledge about one's own state of health and to improve one's own well-being. Whereas “profiles” in digital culture enable people to present their own biographies, and GPS technologies enable people to locate themselves in space, the devices and methods of self-measurement are supposed to improve people's understanding of their own bodies. Yet in this third context, too, it is instructive to examine the history of these devices and methods. When and under what circumstances did people begin to measure the human body, both its rigid structures (such as the bones and the shape of the skull) and its more inconspicuous physiological expressions (including heart rate, blood pressure, breathing, and sweating)? Who was doing the measuring and who was being measured? And what sort of significance was attributed to these measurements?

In the recent essays and books that have come out about the quantified-self culture, the authors usually start off by mentioning a few of the movement's “precursors” from previous centuries – rigorous self-observers such as Benjamin Franklin, whose diary is full of detailed behavioral plans, or the obese doctor from South Carolina named John Lining, who in 1740 kept a yearlong log documenting all of his meals, drinks, and excretions in relation to the outdoor and indoor temperature, the time of day, and the air pressure, in order to reach conclusions about his own metabolism. These protagonists are meant to situate today's self-tracking methods within a long tradition – “personal tracking is not new,” as one study notes21 – but they lead down a minor, if not false, genealogical trail.

While these may indeed be comparable examples of people paying close attention to their own bodies and keeping detailed records of their observations, the presumptions and goals of the self-examining diarists from the late seventeenth and eighteenth centuries differed in many respects from those of the people who use today's “wearables.” First, their activity was based on Calvinist and puritanical principles – a set of religious morals that has more or less disappeared as a driving motivation. Second, their self-measurements and recordings took place in private spaces – in their own homes and in their personal diaries – and they were only made publicly available in the form of scientific reports. Third, the knowledge of these measurers was therefore linked to the peculiarity of their situation, and this self-understanding could manifest itself either as extraordinary virtue (in Franklin's case) or as an awareness of one's own eccentricity or idiosyncrasy.

Such idiosyncrasy, however, is in complete opposition to the fervent desire for comparison, competition, and the creation of standard values that has defined the quantified-self movement from the beginning. In a TED talk delivered in 2011, Gary Wolf projected four adjectives onto the wall to illustrate the goals associated with this new measurement culture: “thin,” “rich,” “happy,” “smart.” The objective of self-tracking, he says, “is to make us better in every way: thin, rich, happy, smart,” and these new devices and methods “will make it easier for us to conform to expert advice about optimal human existence.”22 The quantified-self movement is not concerned with observing unique individuality, but rather with collecting data sets that can be compared and related to standardized norms. And it is precisely for this reason that it is methodologically unproductive to portray, as so often happens, the history of self-tracking as a succession of individual people who happened to analyze and measure their own bodies over the past few centuries. This history began instead at the moment when standardized and systematically implemented techniques made it possible to record, compare, and interpret human bodily measurements. For a sound genealogical analysis, the question of who was taking these measurements – the person being measured or someone else – is of the utmost importance because the devices and methods of today's quantified-self culture derive from scientific disciplines in which the distinction between the measurer and the measured was especially sharp.

In various contexts of knowledge during the second half of the nineteenth century, clear efforts were made to come to certain conclusions about human beings – about their inner lives, their biological dispositions, their affiliations with certain groups, and their immutable identities – on the basis of exact quantifications. Several new apparatuses, recording techniques, and measuring procedures were developed to answer such grand questions about humanity – questions that had previously been addressed in rather speculative terms – with scientific precision. The methods developed at the time can be categorized into two main areas of knowledge. On the one hand, anthropological principles were established for classifying and hierarchizing large cohorts of people by means of measuring their bodies, as with Paul Broca's craniometry or Cesare Lombroso's criminal-anthropological notion of the “born criminal” in the 1860s. This approach to recording the human physique led 20 years later to Francis Galton's and Alphonse Bertillon's variations of anthropometry, which included reliable methods – such as fingerprinting, which is still in use today – for identifying repeat offenders. Around this same time, on the other hand, new physiological measuring techniques were developed that were meant to yield more accurate information about human bodily functions. The second half of the nineteenth century gave rise to apparatuses such as the sphygmograph and the kymograph, which measured blood pressure and heart rates; the pneumograph, which measured the force of chest movements during respiration; and the plethysmograph, which could measure the volume of blood contained in a given organ. Unlike the measurements taken by Broca, Lombroso, or Bertillon, these methods of quantification were not intended to determine the general physical structures of large groups of people but rather to capture the fleeting and dynamic bodily expressions of the individual.

Interest in these devices quickly spread from the field of somatic medicine to other disciplines. In the 1860s, a new science called “psychophysics” studied the ways in which mental phenomena reacted to precisely measured physical stimuli. In 1879, a professor at Leipzig named Wilhelm Wundt founded the first institute for experimental psychology and attempted, by means of equipment typically used in physical or medical laboratories, to gain insights into the operations of human consciousness. As Wundt's student Hugo Münsterberg once remarked, such questions had previously “seemed the exclusive region of the philosophizing psychologist.”23 It was believed that the inner lives of human beings – their feelings, longings, fantasies – could be detected in the graphs produced by heart-rate and blood-pressure plotters. Münsterberg himself, whose professorial career took him to the United States in the 1890s, is largely responsible for popularizing the methods and results of Wundt's experiments. In his laboratory at Harvard University, which according to his own account consisted of “twenty-seven rooms overspun with electric wires,”24 he attempted to transfer his teacher's knowledge about the relationship between physical reactions and mental processes into practical contexts such as economics, pedagogy, and criminal justice. In its optimism about the potential of measurements, this version of applied experimental psychology, which Münsterberg and others referred to as “psychotechnics,” calls to mind the rhetoric of today's quantified-self movement: “With electrodes and the galvanoscope,” as he wrote in his 1914 work Grundzüge der Psychotechnik, “we are able to demonstrate how the activity of sweat glands depends on changes in a person's consciousness; with the sphygmograph and the pneumograph, we can establish the extent to which emotional fluctuations influence a person's pulse and breathing patterns.”25 Precisely calculated body currents were thus regarded as media for the production of truth, and so it can be said that, toward the end of the nineteenth century, the scientific foundation was laid upon which today's self-trackers – who deduce their fitness, mood, and normality from data collected by instruments – continue to stand.

Yet what is telling about this foundation is that, around the year 1900, measuring human beings primarily meant measuring deviants and outliers. Paul Broca's skull measurements and the conclusions drawn from them about the size of the brain served above all to legitimize the hypothesis that dark-skinned people, by their very physical nature, possessed a lower capacity for intelligence than whites. Cesare Lombroso's anthropological criminology, in turn, relied on the investigation of thousands of skulls from the cemeteries and prisons of Turin in order to prove that criminal behavior derives from atavistic abnormalities. In the words of Lombroso's German translator and proponent Hans Kurella: “Lombroso has rediscovered in criminals certain features in common with the skulls of Neanderthals: the pronounced development of the brow, the thickness of the cranium, and a prominent bulge in the occipital bone.”26 According to this understanding, criminals are “evolutionary throwbacks in our midst,” whose biological condition ineluctably gives rise to their delinquent biographies.27 In Lombroso's later studies and in the work of his many students, this basic assumption led to the development of a multi-branched system for classifying deviants, a system that identified nearly a dozen types of “born criminals” according to specific physical and behavioral abnormalities.28

Given that it was intended to be used by the police, Alphonse Bertillon's anthropometry was by definition a means of measuring deviant subjects. According to Bertillon, his new identification system was a reaction to the “empty hope” of accomplishing anything with the recently created police archive containing photographs of every convicted criminal in Europe's major cities. The 100,000-odd photographs that the Paris police force had collected by 1880, for instance, had long ceased to be sorted and classified in any systematic manner. Photographs, moreover, are unreliable pieces of information if the goal is to determine the identity of recidivist criminals, whose outward appearances change with time and can be intentionally altered. Bertillon overcame this shortcoming by taking around a dozen measurements of every suspected criminal in areas of the body that would remain unchanged over the course of an adult lifetime. Among other things, he measured the length of their forearms, the length of their middle and little fingers, the length and width of their skulls, the size of their feet, and the length of their arm spans. He then classified each of these measurements into three different categories: “large,” “medium,” and “small.”29 From these classifications, it was possible to generate for every delinquent a so-called “anthropometric signature.” The latter provided such an accurate representation of the people in question that, beginning in the 1880s, the police forces in large European cities soon found it easier and easier to answer the question of whether a given suspect had in fact already committed a crime within their precincts. For, altogether, the data sets provided by the measured body parts were so distinctive that, according to Bertillon, “only twelve out of every sixty thousand people will share approximately the same measurements.”30 In place of the amorphous and seemingly untamable mass of criminal photographs, Bertillon created a tightly knit archive consisting of hundreds of filing compartments, each of which contained just a few ID cards. These, then, could be used in conjunction with traditional means (such as names and the photographs on file) to reveal the identity of wanted criminals. “The majority of repeat offenders,” Bertillon predicted, “will give up the hope that their tricks to remain at large will continue to be effective.”31
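
To illustrate the indexing principle behind Bertillon's filing system, the sketch below (in Python, with hypothetical threshold values that do not reproduce Bertillon's historical tables) bins a handful of measurements as “small,” “medium,” or “large,” joins the bins into an “anthropometric signature,” and uses that signature as the key to a filing compartment holding only a few cards.

```python
from collections import defaultdict

# Hypothetical small/medium boundaries in millimetres for a few of the roughly
# one dozen measurements Bertillon took; his historical thresholds differed.
THRESHOLDS = {
    "head_length": (185, 195),
    "head_width": (150, 158),
    "left_forearm": (430, 460),
    "left_middle_finger": (110, 118),
    "left_foot": (250, 265),
}


def classify(value: float, low: float, high: float) -> str:
    """Bin a single measurement as small (S), medium (M), or large (L)."""
    if value < low:
        return "S"
    return "M" if value <= high else "L"


def signature(measurements: dict) -> str:
    """Build an 'anthropometric signature' from the binned measurements."""
    return "".join(classify(measurements[name], *bounds) for name, bounds in THRESHOLDS.items())


# The archive: one filing compartment per signature, each holding only a few cards.
archive = defaultdict(list)
card = {"head_length": 190, "head_width": 149, "left_forearm": 470,
        "left_middle_finger": 112, "left_foot": 251}
archive[signature(card)].append("record no. 204, filed 1883")

# A new suspect is measured; the matching compartment is pulled for comparison.
suspect = {"head_length": 191, "head_width": 149, "left_forearm": 471,
           "left_middle_finger": 113, "left_foot": 252}
print(signature(suspect), "->", archive[signature(suspect)])
```

Three bins across a dozen measurements already yield 3^12 (more than half a million) possible signatures, which suggests why, on Bertillon's account, so few people share approximately the same measurements.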

No genealogy of the quantified self should fail to mention that the development of measuring techniques during the second half of the nineteenth century was inextricably linked to the endeavor of identifying criminals. As Manfred Schneider noted in his book about the relationship between modern autobiographies and the human sciences, “The question of identity is a question of identifying deviance,”32 and this statement applies just as well to the efforts of craniometry, anthropometric criminology, and psychotechnics. Not without a degree of pride, Bertillon was able to conclude one of his lectures with the following remarks: “In a word, the main objective of this new method is to determine the personality of everyone on a firm basis – to secure for every individual a reliable, permanent, and unchanging individuality.”33 From this it can be gathered that the notion of “personality” at the end of the nineteenth century was synonymous with an ensemble of data that, having been derived from intensive observation, situated human beings within the limits of what is “normal” and “tolerable.” In this respect, bodily measurements took on a meaning that was similar to that of the first pedagogical and psychological “profiles” from the beginning of the twentieth century.

In terms of epistemology, this method was legitimized by the fact that the anthropologists, criminologists, and psychophysicists proceeded from the belief that an indisputable relationship existed between external features and internal conditions – between physical degenerations and corresponding mental, spiritual, and moral anomalies. In the words of Marcel Krause, Broca and Lombroso presupposed that there was an “absolutely transparent representational relationship”34 between the skull, the brain, and one's mental disposition (the field of phrenology had made this same presupposition in the early nineteenth century, though its approach was speculative and did not rely on exact measurements), and Münsterberg's experiments likewise associated every rise in blood pressure and every acceleration of the heart rate with one mental difficulty or another.

In this light, it is instructive to examine the central categories of measurement used by the quantified-self culture. Over the past few years, the most popular unit has been the “step,” the counting of which no longer even requires a special device (such as a Fitbit wristband). This function now comes pre-installed among the standard features of the most popular smartphones, as in the “Health” app that has come with every iPhone since the 5s model and cannot be deleted. Whether they are working out or not, the owners of today's smartphones count their steps on a daily basis without checking these values or even paying much attention to them. Outside of the military context, however, where did this need first arise? Under what conditions did it become necessary in the human sciences to calculate the unit of the “step” with technical precision? In Hans Gross's Criminal Investigation: A Practical Handbook, which since its original German publication in 1893 has served as the foundational textbook in the field of criminology, there is a lengthy passage in which Gross explains that the future police investigator “will indeed be unable to ‘go for a walk,’ in the sense of strolling with mind at rest, enjoying peacefully the beauties of nature.” Instead, he goes on, “In all the walks he makes, either for pleasure or duty, an ordnance or survey map should be in his hand.”35 By way of an example, Gross then illustrates why it is important for investigators to pay close and constant attention to routes and distances: “A witness estimates an important distance at, let us say, 200 yards: let him be brought out of doors and say how far might be 100, 200, 300, 400 yards; if now these distances be measured, one can easily judge if and with what degree of accuracy the witness can judge distances.”36 Gross recommends that future criminologists should turn the calculation of distances into a professional priority:

As this judging of distances is often necessary, it becomes important to measure before-hand from a convenient window certain visible fixed points and to note the distances for future examinations. For years the author had many occasions for doing so from his office-room window and knew for instance: to the left corner of the house – 65 yards; to the poplar tree – 120; to the church spire – 210; to the small house – 400; to the railway – 950. By these distances he has often tested witnesses. If the witness proves fairly accurate in his estimates, his evidence may be considered important for the case under investigation.37

Around the beginning of the twentieth century, the police investigator was a human step counter – a Fitbit wristband made of flesh and blood – but this tool was not used to benefit his own well-being but rather to improve the productivity of the justice system. Things would be much easier, according to Gross, if all of this counting did not have to be done by the human investigator himself but could rather be accomplished by technical means. And, in fact, he goes on to discuss just such an apparatus in his Handbook. In a section devoted to the “equipment of the investigating officer,” he lists what the officer's “travelling office box or bag” ought to contain. One of the items on this list is a so-called pedometer, which he describes as follows: “A pedometer, though not perhaps indispensable, is most useful; it is the shape and size of a watch. If one wishes to measure a long distance, one sets the needles on all the dials (units, tens, &c.) at zero, puts the instrument in the pocket and walks off.”38 Gross ends this discussion by addressing the question of where the investigator ought to keep this small apparatus on his body. Although it might seem obvious to wear it around one's wrist, Gross adds the following remark: “For greater certainty one may put the instrument in one's boot, when every step will certainly be registered. Thus better results are obtained than by merely counting paces, while the great advantage accrues that he who carries it can look about him and devote his attention to other matters, which is quite impossible while continually counting.”39 Like precursors to today's users of “Nike+,” who slide their GPS-supported step counters into the designated pocket on their running shoe, the police officers envisioned by Gross are supposed to patrol their districts with this new apparatus stuck in their boot. At the beginning of the twentieth century, the “step” thus became an object of exact calculation, and the aim of these calculations was to help police investigators to take control of exceptional situations in which mere places have become crime scenes and every stray detail might provide a crucial piece of evidence.


The automatic step counter kept in Hans Gross's “travelling office box” around the year 1900 (on display at the Museum of Criminology in Graz).

Around this same time, there happened to be another area of science that dealt with exceptional circumstances and likewise placed the “step” at the center of its attention. Unlike criminal investigations, however, in this field the step was not used as an instrument for convicting deviant subjects but was rather treated as an expression of deviance itself. Among the possible manifestations of a certain group of psychopathological symptoms, which Richard Krafft-Ebing referred to as “obsessional ideas” in 1867 and which would be systematically studied a decade later by Carl Westphal, the psychiatrists of the late nineteenth and early twentieth centuries repeatedly described a disorder known variously as “counting compulsion,” “obsessive counting,” or “arithmomania.”40 If, as in Westphal's classic definition, obsessional ideas are those that “rise to the foreground of consciousness against the will of the affected person, cannot be driven away, and impede and frustrate one's normal train of thought,”41 then the overwhelming and uncontrollable urge to count things ranks among their most tormenting realizations. Documented cases of arithmomania – in Westphal's work, this phenomenon is still clearly distinguished from the “genuine manias” upon which diagnoses of schizophrenia were based in the early twentieth century42 – include the compulsive counting of banknotes, spoken words, and people passing by on the street. One of its most common manifestations, however, was the obsessive counting of steps.

In his 1892 dissertation Über Zwangsvorstellungen [“On Obsessional Ideas”], for instance, Georg Joachim related the medical history of a 31-year-old woman from Berlin who, after separating from her husband, “found herself in the most miserable situation”: “Since that time, she has been compelled, wherever she might be, to divide up every object she sees and to count the number of its constituent parts.… Whenever the patient is out on the sidewalk, she avoids cracks and has to count every cobblestone she steps on. If she fails to satisfy these compulsions, then she is overcome by feelings of great anxiety and discomfort.”43 Around 30 years later, Walter Jahrreiß recorded a number of similar cases, now under the diagnosis of schizophrenia. About the 19-year-old patient “Karl W,” he observed: “When walking, he had to compulsively count his steps up to six and then restart at one.”44 About the 57-year-old “Michel S”: “Years later, he began to count things, and he did not know the cause of this. At first he counted his steps. In all of this, the number four played a large role. He always calculated whether his steps were divisible by four. If they were not, he grew anxious.… Later, he had to count his steps in such a way that their total number, when added to the number of the day on the calendar, had to yield an odd number.”45 In his notes on an interview with another patient, Jahrreiß wrote: “He stood up from his chair, took a few steps to the left, then said: ‘O God, that was all wrong, I should have walked in the other direction.’ Then he went to the door and stood before it, hesitated a little, and finally took a big step through the doorway. Then he counted the number of stairs on his way down to the clinic. He considered it very lucky that there happened to be thirteen of them.”46

Around the year 1900, obsessive step-counting also features in literary depictions of insanity. Alfred Döblin, who was trained as a psychiatrist, began his most famous short story – “The Murder of a Buttercup” (1905) – with the following words: “The gentleman in black had been counting his steps at first, one, two, three, up to a hundred and back again, as he made his way along the wide road edged with firs up to St. Ottilien, swaying so far to right or left with each movement of his hips that he sometimes staggered; then he forgot it.”47 Döblin's portrayal of the businessman Michael Fischer, who frantically hacks off the head of a flower while taking a walk through the woods and is haunted by this act throughout the rest of the story, has been interpreted by literary scholars as “an exact description of obsessional neurosis.”48 The mounting agitation and delusions of the protagonist, however, are anticipated by the arithmomania depicted in the story's opening sentence, an obsession that recurs soon after Fischer is through with his “attack” on the buttercup: “After a short time, he began again to count his steps, one, two, three.”49

Ever since obsessive ideas and behavior were first noticed, psychiatrists have been trying to explain this peculiar need for “excessive precision” (this is how, at the end of the 1860s, one of the earliest diagnosed patients in Germany described his own condition).50 “It does not follow from my observations,” wrote Carl Westphal in 1877, “that sexual excess of any sort (masturbation, etc.) plays an especially common role in the etiology of this disorder.”51 This opinion, however, would soon be contested by the nascent psychoanalytic interest in obsessive disorders at the end of the nineteenth century. Freud devoted two of his early essays to what he called the “burdensome ceremonial” of counting and other obsessive behaviors. In “The Neuro-Psychoses of Defence,” which was published in 1894, he interpreted such disorders as failed attempts to repress early erotic stimulations: “In hysteria, the incompatible idea is rendered innocuous by its sum of excitation being transformed into something somatic” – something such as obsessive impulses.52 Two years later, in “Further Remarks on the Neuro-Psychoses of Defence,” Freud expressed this notion in clearer terms: “The nature of obsessional neurosis can be expressed in a simple formula. Obsessional ideas are invariably transformed self-reproaches which have re-emerged from repression and which always relate to some sexual act that was performed with pleasure in childhood.”53

However the meticulous counting of one's own steps happened to be understood around the year 1900 – whether as an unsuccessful process of repression or in the sense proposed by the French psychiatrist Alexandre Cullerre, who believed that epileptic and depressed patients would at first make use of arithmomania “as a way to free themselves from their bleak thoughts, and then this originally arbitrary act would gradually develop into an irresistible impulse to make pointless calculations”54 – this activity, at least when it was not employed to reconstruct the events of a crime, was always attributed to compulsions that were thought to be imposed upon those engaged in it. There is a reason, after all, why the medical community categorized this activity as a compulsive disorder, as it still does. The impulse to count things in a constant manner is one that overpowers the subject in question; it threatens to take away the subject's sovereignty over his or her perception and mental faculties. What is more, the results of the subject's own addictive counting cannot be trusted, as Leopold Löwenfeld pointed out in 1904. In a case study of a patient who, among other things, “counted his pulse and how often he swallowed saliva,” Löwenfeld observed: “These tallies were … purely imaginary; the patient's results never agreed with his actual heart rate.”55 Psychiatrists around the turn of the twentieth century, in other words, did not even trust the data that their arithmomanic patients collected in their own minds.

The rampant and collective step-counting in today's digital culture – the current obsession with wearing Fitbit devices and monitoring health apps on smartphones – is no longer an inner compulsion but rather a voluntary decision. A pathological disposition has transformed into a desire for fitness; “self-reproaches which have re-emerged from repression,” in the sexual sense, have transformed into a program of self-empowerment. In the obsessive “observation of numbers” characteristic of neurotics, Freud suspected that certain “penitential measures” were at work.56 Yet for what sins are the millions of health-conscious people today, glued as they are to their wearables, supposed to be performing penance?

Measuring, classifying, discriminating

In the late nineteenth and early twentieth centuries, the scientific focus of measuring techniques on identifying deviance ultimately led to a widespread enthusiasm for discrediting and marginalizing entire portions of the human population by means of bodily measurements. Around 35 years ago, the historian of science Stephen Jay Gould wrote a highly acclaimed book, The Mismeasure of Man, about these effective and influential efforts. Gould's study is a piercing critique of the “biological determinism” espoused by Broca, Lombroso, and the twentieth-century inventors of intelligence tests: “I would rather label the whole enterprise of setting a biological value upon groups for what it is: irrelevant, intellectually unsound, and highly injurious.”57 His analyses are concerned above all with two fundamental fallacies that have influenced the measurement of intelligence in fields such as craniometry and quantitative psychology: on the one hand, the practice of reification (the scientific tendency “to convert an abstract concept … into a hard entity”); on the other hand, the practice of ranking (“our propensity for ordering complex variation as a gradually ascending scale”).58 This dual reduction of heterogeneous phenomena (such as intelligence) to a “unitary ‘thing,’”59 which can then be placed within a specific ranking, is what led to “the mismeasure of man” and thus inspired the title of Gould's book. These methodological deficiencies are consequential, he believes, because craniometry or Goddard's and Terman's intelligence tests did not use numbers to advance objective knowledge but rather to confirm social prejudices and “illustrate a priori conclusions.”60 Rankings such as “white – black – ape” or “upright citizen – criminal – ape” had already been established features of Broca's and Lombroso's social-Darwinian worldview long before their many measurements provided ostensible evidence to confirm their ideologically motivated hypotheses.

Even if Gould's approach raises certain methodological questions of its own – he, too, is interested in gathering evidence to prove a priori hypotheses, and in many places he simply counters the historical results he criticizes with more accurate measurements of his own – his book is nevertheless an important point of reference for the issues under discussion. First and foremost, it makes it easy to see the differences between the quantitative sciences in the decades around 1900 and the self-tracking techniques in digital culture. Like the scientists described by Gould, today's proponents of the quantified self consider themselves “servants of their numbers,”61 but their use of such data has nothing at all to do with explaining social strata or marginalizing certain groups. This difference is especially clear in the concept of “disposition,” the understanding of which has shifted. For craniometrists and the American popularizers of intelligence tests, the biological (and, as of the 1880s, genetic) endowment of human beings was the decisive factor in their development and social status. The being of the criminal or problematic schoolboy was everything, and this was registered in the shape of his skull, the size of his brain, and in his genetic material, while his becoming, which was shaped by external and dynamic factors, was utterly ignored. Gould thus refers to the quantitative sciences under consideration with the fitting term “theories of limits”: they assert that the quantifiable differences between ethnicities, genders, or populations are not only innate but, above all, immutable.62 Such a classification, however, which is determined by nature and cannot be overcome by the individual, contradicts all of the basic principles of the quantified-self culture, which is concerned with permanently shifting and optimizing limits in order to satisfy, day by day and step by step, the wish to lead a healthier, more productive, and more fulfilling life.

From cementing the boundaries between groups on the basis of data to expanding the limits of individuals on the basis of data – this broad arc seems to separate the earlier quantitative sciences from the self-trackers of today. A closer look, however, reveals certain similarities in their respective methods and motivations, and the distance between them begins to shrink. In Gould's book, categories such as the “cranial index” or the “intelligence quotient” receive such harsh criticism because they convert a multifaceted phenomenon into a “measurable entity.”63 Is it not possible to understand concepts such as Generali's “Vitality points” or Decadoo's “health score” as reductive categories of the same sort? Complex realities such as “health” or “mood,” which vary from one person to the next, seem to be treated here as precisely calculable and universally valid quantities. The two methodological fallacies that Gould identifies in historical efforts to measure intelligence – again, reification and ranking – seem to apply just as well to today's measurements of health and wellness.

At the same time, while the quantified-self project might at first glance seem to focus on the individual, its aims have shifted more and more toward accessing collectives. Unlike in the efforts of Broca, Lombroso, or Goddard, the issue here is not one of deploying biology to legitimate the minority status of women, blacks, criminals, or the poor. Based on digital self-monitoring, these new insurance programs are rather intended to differentiate other types of groups – namely, the healthy from the sick, the fit from the unfit, the cautious from the negligent. This has resulted in new hierarchies or rankings, and although they are not meant to represent any innate classifications, they nevertheless correspond to those of the earlier quantitative sciences in an important respect: their omission of socio-economic factors. A century ago, Lewis Terman, who revised the Stanford–Binet IQ test, expressed his conviction that “class boundaries had been set by innate intelligence.”64 When Gary Wolf introduced his four terms – again, “thin,” “rich,” “happy,” and “smart” – as the leading goals of self-tracking, he likewise evoked an inner human authority: not innate intelligence, as in Terman's case, but rather sheer determination or willpower, which he thus implied is stronger than any social force. According to Terman's argument, people remain poor on account of their lower intelligence; according to Wolf, the poor are those who lack sufficient motivation and self-discipline. Clearly, these are not categorically different positions. The social Darwinism of skull and brain measurers seems to have transformed today into a sort of mentality-based Darwinism, and in this sense it is no coincidence that one of the recent advertisements by Fitbit depicts the “evolution” of its tracking devices by alluding to the famous scientific illustration known as “The March of Progress.”

An advertisement on Fitbit's website (2017).

If the aim of the influential corporations in digital culture truly is to create a “new man,” then the motif of this advertisement – more than being just a playful allusion – illustrates this ambition with unusual clarity. Such genealogies place the community of self-trackers in a difficult situation. As is well known, this is a community defined by its political and moral sensitivity and by its anti-racist and anti-sexist principles. Yet, in the early twenty-first century, its members are busy perpetuating certain measuring practices that were devised in the late nineteenth century in the name of sexism, racism, and social discrimination.

Introspection and data generation

In his manifesto for the quantified-self movement, as we have seen, Gary Wolf distinguished the act of measuring one's own body from the “prolix, literary humanism” of psychoanalysis, and he contrasted the language-based efforts of psychoanalysts to delve into the deep layers of human consciousness with the practice of making technical recordings “of our most trivial thoughts and actions.”65 By formulating these juxtapositions, Wolf was in fact reviving an old methodological dispute about the most effective ways to study human beings. This dispute, as one might by now expect, was at its most heated around the year 1900. In their case studies, Freud and Breuer demonstrated the power of the “talking cure,” which enables analysts to treat hysterical bodily symptoms by prompting patients to recall formative memories and thereby come to terms with neurotic complexes. According to this notion of therapy, the self is an ensemble of biographical impressions, some of which have been processed better than others, and the art of psychoanalysis lies in exhuming the origins of painful and “repressed” impressions from the past.

In the early twentieth century, however, this hermeneutic approach to the inner life of human beings – this somewhat vertical perspective on its secrets and riddles – was opposed in the human sciences by a horizontal approach that was no less influential. Disciplines such as experimental psychology and its derivative schools (including psychotechnics and behaviorism) were not deeply interested in accessing human beings via language or in discovering the biographical origins of disorders; their aim was rather to stimulate and record the superficial expressions of the human body. Instead of relying on their patients' introspection, they measured them; instead of producing memories and words, they focused on the production of body currents and data; instead of waiting around for the delayed outbreak of latent complexes, they were concerned with immediate reactions to external stimuli. These currents and reactions were so subtle, however, that they could not be registered by human perception. As Hugo Münsterberg noted in 1914, for instance, “the evidence that even the slightest fluctuations in feeling are reflected in changes to one's blood circulation, in involuntary muscle movements, and in the activity of one's sweat glands” is difficult to detect, and technical instruments are therefore needed to make the bodily expressions of mental activity “perceptible where they might escape the usual attention of the viewer.”66 From the beginning, then, quantitative psychology required the assistance of apparatuses and technical media. Pulse recorders, blood-pressure recorders, and pneumographs did the work that psychoanalysts hoped to accomplish with just their ears, a pen, and some paper. To the extent that certain human sciences lost faith in the ability of people to reveal things about themselves by means of memories and language, increased efforts were made to use technology in order to assemble some sort of truth from the fragmented signals produced by the human body.

Besides Münsterberg's psychotechnics, one of the sharpest critics of the hermeneutic approach to the inner lives of human beings was behaviorist psychology, which was developed in the United States. Whereas Wilhelm Wundt's experimental school had been busy taking exact measurements of human consciousness since the 1870s, John Watson, the founder of behaviorism, took things a step further by eliminating the category of consciousness altogether. In his essay “Psychology as the Behaviorist Views It,” which was published in 1913, he declared: “The time seems to have come when psychology must discard all reference to consciousness; when it need no longer delude itself into thinking that it is making mental states the object of observation.”67 The only thing that experimental psychology ever accomplished, in his view, “was to substitute for the word ‘soul’ the word ‘consciousness.’”68 Instead of hunting for some mysterious psychological essence of man, Watson preferred the pragmatic approach of simply studying human activity and behavior. Like the experimental psychologists before him, he studied the relation between “stimulus” and “response” – between external impulses and internal transformations of human behavior – but he did so not to illuminate our inner being but rather to identify recurring behavioral patterns. The only concern of behaviorism was to observe and measure effects; regarding the subjective and emotional origins of these effects, it had no interest whatsoever. “Its theoretical goal,” as Watson explained about the new science, “is the prediction and control of behavior.”69

The ascendant methods of behaviorism, which became one of the most influential schools of psychology, can be seen as a direct critique of the representational approach used by language-oriented psychology. In this view, words are dubious emissaries of the inner condition; to behaviorists, the culture of language seemed too convoluted to provide an adequate illustration of the instantaneous relationship between stimuli and reactions. B. F. Skinner, who was long the preeminent representative of the behaviorist school, repeatedly underscored the incongruity between the inner life of emotions and linguistic expression. As far as he was concerned, feelings should simply be understood as reactions to stimuli, but reports about them should be regarded as the result of particular linguistic contingencies related to the society in question: “We cannot measure sensations and perceptions as such,” he wrote, “but we can measure a person's capacity to discriminate among stimuli, and the concept of sensation or perception can then be reduced to the operation of discrimination.”70

In this light, the connections between today's quantified-self methods and the perspectives of psychotechnics and behaviorism are especially clear. Today's self-trackers likewise consider language to be an unreliable medium for understanding human beings. Fitness wristbands, smart watches, and apps used to quantify one's mood are intended to provide information about their users by means of data produced by their own bodies. Ulrich Raulff once described the establishment of quantitative psychology as an act of “turning away from questions of being toward questions of cause and effect.”71 This same constellation motivates the culture of self-tracking, with its ubiquitous recordings, yet there is one important difference that is central to the conception of human beings in the culture of digital self-recording. In their work, behaviorists such as Watson and Skinner repeatedly stress that the shift of their psychological interest from “consciousness” to “behavior” involves a radical critique of the autonomous subject. In Skinner's understanding of psychology, for instance, there is no room for the idea of independent agency. A person should rather be understood as a sort of intersection: “[H]e is a locus, a place at which many genetic and environmental conditions come together in a joint effect.”72 He asserts, moreover: “There is no place in the scientific position for a self as a true originator or initiator of action.”73 There is thus a glaring paradox in the culture of self-tracking: although it has adopted the epistemological principles of psychotechnics and behaviorism, thus regarding human beings as producers of superficial data whose inner lives defy scrutiny, it somehow draws entirely different conclusions from these practices about the status of the subject. By recording and managing his or her own blood pressure, heart rate, and daily movement, the “quantified self” is supposed to become the very “originator or initiator of action” that Skinner had outright dismissed. Here we encounter the same discontinuity that characterized the transformation of the electronic ankle bracelet into the GPS-equipped smartphone: the technical ensemble is more or less the same, and the basic function of recording remains intact, but, in both cases, a former instrument of control was transformed into a tool of self-empowerment. This historical similarity between the methods for determining someone's location and those for measuring bodies is all the more apparent in the fact that, as mentioned above, it was one of B. F. Skinner's students, Ralph Schwitzgebel, who in the late 1960s developed the first prototype of the electronic ankle bracelet. His aim in doing so, moreover, was to implement a form of “behavioral control,” which was of course the principal goal of behaviorism as stated by John Watson. The apparatuses, technologies, and even the methodological counterparts of self-tracking (such as psychoanalysis) are thus all part of the same scientific lineage. It is not for nothing that the Generali insurance company has advertised its “Vitality” program with the following words: “Vitality is a unique behavioral-based shared value insurance model,”74 even as it promises, paradoxically, to promote the autonomy of its customers by means of behavioristic control mechanisms.

Tied up with this paradox is the issue of the methodological complications posed by measuring oneself. In the scientific disciplines whose premises are borrowed by the quantified-self culture, it is generally regarded as a dubious practice to unite the measuring authority and the object of measurement into a single entity. Hugo Münsterberg repeatedly stressed the need to have professional guidance in all psychotechnical experiments; an “untrained average person,” he wrote, would be incapable of executing the measurements.75 John Watson, too, made a similarly categorical remark in his introductory lecture on behaviorism: “You will soon find that instead of self-observation being the easiest and most natural way of studying psychology, it is an impossible one; you can observe in yourselves only the most elementary forms of response. You will find, on the other hand, that when you begin to study what your neighbor is doing, you will rapidly become proficient in giving a reason for his behavior.”76 B. F. Skinner was of the same opinion: “In self-knowledge, the knowing self is different from the known. In self-management, the controlling self is different from the controlled.”77

Today's self-tracking movement is not at all bothered by such basic methodological debates. For the companies that make the products and the customers who use them, it is beyond any doubt that the measurement methods of smartphones and their accompanying apps provide reliable and useful data. This collective trust, however, is frequently and clearly contradicted by the very medical disciplines whose research depends on measuring the phenomena in question. So far, the sharpest critique of the devices and apps used by the quantified-self movement has come from professional sleep researchers. As mentioned above, one of the functions of current fitness wristbands like Fitbit and special apps such as Sleep Bot, Wake Mate, Sleep Advisor, or Sleep as Android is to provide accurate measurements and recordings of the quantity and quality of one's sleep. According to the descriptions of these products, such measurements require no effort whatsoever; it is enough to keep a smartphone near your body or wear a fitness band overnight, and, simply on the basis of your movements, these devices supposedly produce an informative protocol of the restful or restless stages of your slumber. “Use a Fitbit tracker to record your sleep at night,” the company's website recommends: “Then, use the sleep tools in the app to set a weekly sleep goal, create bedtime reminders and wake targets, and review your sleep trends over time.”78

The deficiencies of these recording techniques have been documented more and more frequently in recent years. All that the sensors in wristbands or apps are able to register are the body's movements throughout the night, which are then tabulated and graphed to illustrate the soundness of someone's sleep. Measurements of this sort are utterly crude in comparison with the work of medical sleep researchers, who have been monitoring brain waves and REM cycles since the middle of the twentieth century. Published in 1968 by the American Department of Health, the first manual devoted to standardizing the terminology employed in the field of sleep research states that the scientific quality of any sleep assessment can only be guaranteed so long as the activity of each patient is recorded by at least one electroencephalogram, one electromyogram of the chin, and two electrooculograms of the eyes.79 Compared to the data produced by such techniques, of course, the meager informative value of the data created by self-tracking devices cannot be denied. In light of the obvious nature of this discrepancy, however, perhaps it is more interesting to ask why these devices and apps, whose inaccuracy is no secret to their users, have enjoyed such great success. There seems to exist a general yearning to record things in digital culture, a tendency toward self-Taylorization that outweighs even our awareness of the unreliability of the measurements themselves. This peculiar longing, moreover, has suppressed other perspectives about new technologies, which were once influential during the early stages of the digital age – for instance, the fear concerning the potential health risks posed by the devices, not least that of their “radiation,” which was a big issue in the discussions about so-called “electrosmog” that were widespread around the turn of the twenty-first century. The website of the sleep-tracking app Sleep as Android makes the following suggestion about where the phone should be positioned overnight: “The phone needs to keep contact with the mattress in order to capture your movements. We recommend putting it on the mattress near your body. Good positions include: under the pillow.”80 In other words: keep the phone next to your head! Just 15 years ago, such remarks would have mobilized a slew of cultural critics and citizen action groups. By now, however, the desire for self-measurement has canceled out any fears we might have about the unpredictable effects of technologically induced radiation.

Lifting the veil

In order to explain why the quantified-self movement, with all of its emphasis on individual agency, happens to rely on sciences that have historically treated the measured subject as an utterly passive entity, it will be helpful to revisit once more Gary Wolf's essay from 2010. In this text, he cites someone who has been meticulously tracking his own alcohol consumption with an electronic diary. By entrusting this information to a computer instead of to another person, he allegedly bypassed the threat of social shame and was thus less likely to underestimate his drinking. Wolf's comment about this man's realization is as follows: “After all, it is silly to posture in front of a machine.”81 This statement underscores the essentially unproblematic relation that self-trackers have to the truth. If the person doing the measuring is identical to the person being measured, there is supposedly no reason to doubt the willingness of the test subject to produce reliable data. For self-trackers, one's inner life and one's external bodily signals exist in perfect harmony. The person leading the investigation and the object of the investigation itself are accomplices.

This complicity, however, represents a crucial difference between today's passion for self-recording and its background in the history of science. In the fields of anthropometry, psychotechnics, and behavioral electronics, the relationship between the person taking measurements and the person being measured was one of rivalry. The instruments and methods developed around the year 1900 were meant to uncover hidden knowledge. “Under certain conditions,” wrote Hugo Münsterberg, “both doctors and legal practitioners have an interest in bringing to light thoughts and moods that are being kept secret.”82 Behind this veil, he thought, lies the true nature of psychiatric disorders and the guilt or true identity of a criminal suspect – things that, though not openly expressed by the people under investigation, could be revealed by the length of their bones or the charts produced by their bodily currents. These quantitative sciences were based on the presumption that patients or suspects were inclined to betray themselves: what they left unspoken, that is, could supposedly be demonstrated by their heart rates or by the radius of their skulls. In the rhetoric of these sciences, bodily manifestations were congruent with the content of the mind.

What was it at the end of the nineteenth century that gave rise to so many instruments designed to establish the truth? Looking into the matter, one finds that these developments were largely motivated by the unreliability of witness statements. It was this phenomenon, in fact, that necessitated the alliance between applied psychology and the criminal justice system. With its countless digressions about criminal psychology and the art of detecting clues, Hans Gross's Criminal Investigation: A Practical Handbook is primarily a product of this “crisis of the witness” around 1900. Writing around the same time, Hugo Münsterberg remarked: “It is perhaps no exaggeration to say that there is even a new special science that deals exclusively with the reliability of memory.”83 The main contribution of psychotechnics was its invention of methods and apparatuses for overcoming this lack of reliability – “prosthetic means of establishing the truth,” which, as Münsterberg believed, were more trustworthy than the fragmentary statements of witnesses or the often violently coerced confessions of suspects.84 “Everyday life,” he wrote, “provides all sorts of opportunities to observe how feelings, often unintentionally or even against the intentions of individuals, are expressed in involuntary behavior and in the perceptible functions of the circulatory system and sweat glands. When we see how a person blushes or turns pale upon hearing a particular name – how tears well in his eyes, his speech begins to stutter, and his hands begin to quiver – we can regard these things as symptoms of inner agitation.”85 The aim here was thus to formulate a reliable semiotics of guilt. As in Broca's or Lombroso's work a half-century before, this was based on a clear representational relationship between bodily signals and inner conditions. Unlike Broca and Lombroso, however, Münsterberg was not interested in incontrovertible biological conditions, which he rejected, but rather in the fleeting emotional states of people under interrogation.

Convinced that changes in a person's complexion, heart rate, or pace of breathing are precise reflections of his or her inner life, Münsterberg designed a so-called polygraph, the prototype of today's lie detectors. Withheld truth and concealed guilt – inner complexes that Freud, working at the same time, was attempting to detect and resolve by listening to his patients’ words – could now be revealed by means of an apparatus. In this case, the code of translation was clear: regular frequencies and steady lines on the chart signified probity and innocence, while any abrupt spikes were signs of suspicion that indicated inner discord. Admittedly, Hugo Münsterberg was cautious enough to take into account the risk of misinterpretation, stating that “symptoms of mere agitation, which the legal process can bring about in the innocent as well, can be misinterpreted as signs of guilt.”86 Yet, despite these reservations, he considered the polygraph to be an apparatus that was far more reliable than any previous methods for extracting the truth, such as the use of torture “to wrest the facts of the case from the soul of the accused.”87

Up to his death in 1916, Münsterberg was regularly called upon to testify in courts of law as an expert in psychology. During the years around the First World War, his studies with the polygraph machine were refined by other scholars, and, beginning in the 1930s, the apparatus came to be used widely by the American justice system. The reputation of the “lie detector” at this time was that of a merciless device that could cut through any attempted resistance to reveal even the most deeply hidden secrets; accordingly, suspects and doubtful witnesses were not leaping at the chance to be subjected to this bodily interrogation. (Even years later, at the beginning of the Watergate scandal, Richard Nixon was quoted as saying, “I don't know anything about polygraphs, and I don't know how accurate they are, but I know they'll scare the hell out of people.”)88 Delinquents had to be hooked up by force to the truth-telling machine; in this context, the measurements can only be seen as the result of an act of coercion undertaken against the will of the subject.

This ensemble of bodies could not have differed more from the other established ritual at the time for extracting the truth – namely, the psychoanalytic session. In his essay on the achievements of Hugo Münsterberg, Ulrich Raulff provides an excellent juxtaposition of these two emblematic scenes from the early-twentieth-century human sciences. In the office of the psychoanalyst, the patient lies down on a sofa and the analyst sits behind him or her and takes notes – the postures are comfortable, the cushions are soft, and there is no eye contact or physical contact between the people involved. And then there is the scene with the polygraph, which takes place in a sparse and brightly lit room. The examiner sits across from the subject, who is perhaps being restrained by police officers, and in between them there is a device that is attached to the subject with multiple wires. “If you are unable or unwilling to tell the truth,” writes Raulff, “perhaps you will simply sweat it out? This is not to say that language no longer plays any role here. The role has merely changed from engaging in a protracted wrestling match with meaning to performing a verbal vivisection. Like little lancets or thorns, key words are thrust against a body that has to be pierced in order for the secret truth to emerge.”89 From today's perspective, it is interesting that Raulff, in an essay written more than 30 years ago, did not hesitate to depict this comparison as an opposition between willingness and coercion, conversation and interrogation. Psychoanalysis takes place as an agreement between the doctor and the patient; polygraph measurements are a struggle in which the lying delinquent has to bend to the machine's powers of veracity. In the age of the quantified self, these two independent methods have in some sense come together. Just as the patients of an analyst typically come to his or her practice willingly, today's self-trackers wear their costly Fitbit wristbands of their own volition. As Gary Wolf has insinuated, the subject in this case is still regarded as someone lacking something, but it is no longer a lack of honesty or an unwillingness to confess that has created the need for such measuring devices. Rather, it is the lack of attention being paid to one's own fitness and one's own well-being.

Witnesses for the prosecution

Despite all the invocations of the autonomous subject, the way that digital self-recording functions is always easiest to see when the individually generated data enter a broader context of knowledge encompassing more than the individual. The current bonus programs offered by insurance companies have made this clear. Methods of self-tracking may indeed help the individual to improve his or her health and productivity, but at the same time these collected data emanate outward – not because of misuse and not because of carelessness, but rather because the practice of the quantified self has been situated, from its beginning, within large-scale networks. In the insurance industry, such data can be used to make precise assertions about a given user's state of health; the measurements are thus taken to represent a sort of truth about his or her “normality,” thereby perpetuating a scientific presumption whose roots extend back to the nineteenth century. Recently, however, another application of fitness bands has emerged that makes these genealogies even clearer by transferring the “truth” in its juridical sense, which is the sense that Bertillon and Münsterberg had in mind, into the era of the quantified self. Fitbit devices, in other words, are now being admitted as evidence in courts of law.

At the end of 2014, a personal injury claim was made in Ottawa by a fitness trainer who, on account of an accident suffered four years earlier, could no longer continue in her line of work. Among other evidence submitted to the court was the plaintiff's Fitbit device, which was intended to demonstrate, according to her lawyers, “that her activity levels are still lower than the baseline of her age and profession to show that she deserves compensation.”90 The lawyers also presented comparative information gathered by a company that evaluates self-tracking data for the insurance industry, and thus they were able to use statistics to demonstrate the physical impairment of their client. This Canadian case, which was ultimately decided in favor of the claimant, was apparently the first legal proceeding worldwide in which data from a fitness wristband were used as evidence. On the one hand, the ruling seems to attest to the new sovereignty of self-trackers, given that the Fitbit sensors supported the legitimacy of the claim more dependably than any statement from a medical expert ever could have.91 On the other hand, the case represents an important threshold in digital culture to the extent that it illustrates that the personal data collected by a self-tracking instrument can be transformed at any time from a playful and innocent count of calories into a legally admissible piece of evidence in a court case. Kate Crawford, a journalist who followed the proceedings, was quick to point out the important implications of this change. Although in Ottawa, she wrote, the Fitbit device may have been used to support the plaintiff's injury claim, “wearables data could just as easily be used by insurers to deny disability claims, or by prosecutors seeking a rich source of self-incriminating evidence…. Will it change people's relationship to their wearable device when they know that it can be an informant?”92 The colorful Fitbit wristband has become an instrument that preemptively counts its wearer's every step in order to make it easier, one day, to apprehend him or her in the event of one violation or another.

In fact, the first case of this sort took place in April 2015. A woman from Pennsylvania claimed to have been pulled out of her bed and sexually assaulted by a stranger. Over the course of the investigation, however, the police became aware that she had been wearing her Fitbit device during the night in question, and she permitted them to analyze the data. These data made it unmistakably clear that she had been awake and active during the supposed time of the crime. The case was dismissed, and the claimant was ordered to pay a fine for having issued a false statement. “Never lie while wearing a Fitbit,” as one American reporter commented on the case.93 Today's self-tracking devices are polygraphs that are more comprehensive and exhaustive in scope than anything Hugo Münsterberg could have imagined. And whereas the measured delinquents of the twentieth century had to be hooked up to polygraphs by force, the millions of customers using Fitbit, Jawbone, Nike+, and other truth-telling wearables do so willingly. As one writer remarked about the plaintiff in the Pennsylvania case: “[T]he device became a witness against her.”94 Even if the playful applications of today's self-tracking devices make it easy to forget their historical background, their original intention still occasionally rises to the surface.

Notes