Business ideas and practices do not simply spread of their own accord, not even when they appear to yield clear profits. They need pushing. Sometimes cultural and political barriers must be forcefully broken down before they are adopted, until eventually they come to appear entirely natural. The idea of ‘scientific advertising’, pioneered in the 1920s by the firm J. Walter Thompson (JWT), with support from John B. Watson, is a case in point.
JWT were the first of the large Madison Avenue firms to believe that advertising could target consumers scientifically, thanks to psychological profiling techniques such as surveys. They believed individuals could be influenced through such techniques to make purchases, even against their own rational judgement. Today, the notion that advertising relies on detailed psychological insights into our intimate emotions and behaviours seems quite obvious. But its journey from Madison Avenue in the mid 1920s to the status of global common sense was not straightforward.
JWT would not have succeeded in exporting scientific advertising around the world had it not been for the contract they won with General Motors (GM) in 1927.1 By this time GM already had a powerful international presence, with production plants scattered across Europe. The deal that JWT struck with GM was that they would open an office in every country where GM was already located, so as to furnish the car giant with local marketing expertise. In return GM would grant JWT an exclusive account for all of its markets around the world. In 1927 alone, JWT opened offices in six European countries. Over the next four years, it would open further offices in India, South Africa, Australia, Canada and Japan. Thanks to the security offered by its corporate behemoth of a patron, JWT became an international player, and its particular style of marketing expertise went global. The capacity of US businesses to export to global markets, which surged after World War Two, was greatly aided by the fact that such networks of business intelligence had already permeated much of the capitalist world. Knowledge of foreign consumers was already on hand.
Following the acquisition of the GM contract, JWT set about accumulating consumer insight on an unprecedented scale. In less than eighteen months, over 44,000 interviews were conducted around the world, many in relation to cars, but also on topics such as food and toiletry consumption.2 This was the most ambitious project of mass psychological profiling ever attempted. A detailed map of global consumer tastes was being built up from scratch. And yet this was not achieved without encountering some resistance.
JWT researchers quickly discovered that their techniques were not widely understood or appreciated beyond their home market. The level of consumer intimacy that they were seeking was often simply refused. In Britain, several researchers were arrested for conducting door-to-door surveys.3 Another British interviewer found the job of consumer profiling so difficult that he was reduced to chasing people down the street shouting questions at them. A researcher conducting surveys in flats in Copenhagen in 1927 was met with such hostility that one resident threw him down a flight of stairs. And another, also in Copenhagen, was arrested for trying to get into people’s homes by impersonating an inspection officer. The German Automobile Manufacturers’ Association threatened to sue JWT for ‘business espionage’.
The globalization of consumer intelligence required a combination of luck, guile and brute force. The challenge that JWT had set itself was deeply problematic. It wasn’t simply to observe people in public or invite public opinions, as magazines had been doing for some time. It was to acquire a new level of intimacy with the consumer, which often meant observing the housewife in her home. Researchers didn’t only want to know what these people thought or said about certain products; they also wanted to see the products in the home and watch how the consumer behaved. This knowledge could only be acquired through a degree of snooping around and asking somewhat personal questions.
The story of JWT’s painful arrival in Europe points towards one of the gravest challenges that confronts the project of mass psychological measurement: how does one get ordinary people to cooperate? There is a political dimension to any social science, whereby the researcher must either negotiate with their research subjects in the hope of winning their consent, or else they must use some degree of force and privilege to oblige people to be studied and measured. Either that, or they operate in a more clandestine fashion.
When Wilhelm Wundt established his psychology laboratory in Leipzig, he used his own students and assistants as the focus of his experiments. Their full consent was deemed necessary for the type of science he was seeking to carry out. More commonly today, psychologists offer monetary incentives to their research subjects, who are typically hard-up students from other disciplines. For a counter-example, consider the history of statistics, which (as the word indicates) has always been intimately entangled with the violent power of states in order to ensure that the population is measured accurately and objectively. States are able to do what JWT initially struggled to do in observing people en masse. Similarly, Frederick Taylor was dependent on his aristocratic connections in order to peer inside numerous Philadelphia workplaces during the 1870s and 1880s.
The term ‘data’ derives from the Latin, datum, which literally means ‘that which is given’. It is often an outrageous lie. The data gathered by surveys and psychological experiments is scarcely ever just given. It is either seized through force of surveillance, thanks to some power inequality, or it is given in exchange for something else, such as a monetary reward or a chance to win a free iPad. Often, it is collected in a clandestine way, like the one-way mirrors through which focus groups are observed. In social sciences such as anthropology, the terms on which data is gained (in that case, through prolonged observation and participation) are a matter of constant reflection. But in the behavioural sciences, the innocent term ‘data’ usefully conceals a huge apparatus of power through which people can be studied, watched, measured and traced, with or without their consent.
Evidently, this political dimension was still visible in the 1920s, when JWT were expanding overseas. In the years since, however, it has receded from view. Questions of what people think or feel, how they intend to vote, how they perceive certain brands, have become simple matters of fact. This is no less true of happiness. Gallup now surveys one thousand American adults on their happiness and well-being every single day, allowing it to trace public mood in great detail, from one day to the next. We are now so familiar with the idea that powerful institutions want to know what we’re feeling and thinking that it no longer appears as a political issue. But possibilities for psychological and behavioural data are heavily shaped by the power structures which facilitate them. The current explosion in happiness and well-being data is really an effect of new technologies and practices of surveillance. In turn, these depend on pre-existing power inequalities.
Building the new laboratory
In 2012, Harvard Business Review declared that ‘data scientist’ would be the ‘sexiest job of the twenty-first century’.4 We live in a time of tremendous optimism regarding the possibilities of data collection and analysis, an optimism that is refuelling the behaviourist and utilitarian ambition to manage society purely through careful scientific observation of mind, body and brain. Whenever a behavioural economist or happiness guru stands up and declares that finally we can access the secrets of human motivation and satisfaction, they are implicitly referring to a number of technological and cultural changes which have transformed opportunities for psychological surveillance. Three in particular are worth highlighting.
Firstly, there is the much-celebrated rise of ‘big data’.5 As our various everyday transactions with retailers, health-care providers, the urban environment, governments and each other become digitized, so they produce vast archival records that can be ‘mined’ with sufficient technological capability. Much of this data is viewed as a prized possession by the companies that acquire it, who believe that it holds untold riches for those wanting to predict how people will behave in future. Many, such as Facebook, are inclined to keep it private, such that they can conduct analysis of it for their own purposes, or sell it on to market research companies.
In other circumstances, this data is being ‘opened up’ on the basis that it is a public good. After all, we the public created it by swiping our smart cards, visiting websites, tweeting our thoughts, and so on. Big data should therefore be something available to all of us to analyse. What this more liberal approach tends to ignore is the fact that, even where data is being opened up, the tools to analyse it are not. As the ‘smart cities’ analyst Anthony Townsend has pointed out with regard to New York City’s open data regulations, they judiciously leave out the algorithms which are used by e-government contractors to analyse the data.6 While the liberal left continues to worry about the privatization of knowledge as enacted by intellectual property rights, a new problem of the privatization of theory has arisen, whereby algorithms which spot patterns and trends are shrouded in commercial secrecy. Entire businesses are now built on the capacity to interpret and make connections within big data.
The second development is one that can only be truly understood in cultural terms. To put it simply, the spread of narcissism has been harnessed as a research opportunity. When JWT first sought to profile European consumers in the 1920s, this was experienced as an invasion of privacy, as indeed it was. More recently, tolerance for surveys has fallen all over again, though more out of impatience on the part of potential participants than anything else.7 People simply cannot be bothered to share details of what they like, think or want with researchers holding clipboards any more. But when Facebook asks its one billion users that faux-innocent question ‘What’s on your mind?’, we pour our thoughts, tastes, likes, desires and opinions into the company’s massive databank without a second thought.
When obliged to report on their inner mental states for research purposes, people do so only grudgingly. But when doing so of their own volition, suddenly reporting on behaviour and moods becomes a fulfilling, satisfying activity in its own right. The ‘quantified self’ movement, in which individuals measure and report on various aspects of their private lives – from their diets, to their moods, to their sex lives – began as an experimental group of software developers and artists. But it unearthed a surprising enthusiasm for self-surveillance that market researchers and behavioural scientists have carefully noted. Companies such as Nike are now exploring ways in which health and fitness products can be sold alongside quantified self apps, which will allow individuals to make constant reports of their behaviour (such as jogging), generating new data sets for the company in the process.
There is a third development, the political and philosophical implications of which are potentially the most radical of all. This concerns the capability to ‘teach’ computers how to interpret human behaviour in terms of the emotions that are conveyed. The field of ‘sentiment analysis’, for example, involves the design of algorithms to interpret the sentiment expressed in a given piece of text, such as a single tweet. The MIT Affective Computing research centre is dedicated to exploring new ways in which computers might read people’s moods through evaluating their facial expressions, or might carry out ‘emotionally intelligent’ conversations with people, to provide them with therapeutic support or friendship.
Ways of reading an individual’s mood, through tracking his body, face and behaviour, are now expanding rapidly. Computer programs designed to influence our feelings, once they have been gauged, are another way in which emotions and technology are becoming synced with each other. Already, computerized cognitive behavioural therapy is available thanks to software packages such as Beating the Blues and FearFighter. As affective computing advances, the capabilities of computers to judge and influence our feelings will grow.
Facial scanning technologies hold out great promise for marketers and advertisers wanting to acquire an ‘objective’ grasp of human emotion. These are beginning to move beyond the limited realms of computing or psychology labs and permeate day-to-day life. The supermarket chain Tesco has already trialled technologies which advertise different products to different individuals, depending on what moods their faces are communicating.8 Cameras can be used to recognize the faces of individual consumers in the street and market products at them based on their previous shopping behaviours.9 But this may be just the beginning. One of the leading developers of face-reading software has piloted the technology in classrooms, to identify whether a student is bored or focused.10
The combination of big data, the narcissistic sharing of private feelings and thoughts, and more emotionally intelligent computers opens up possibilities for psychological tracking that Bentham and Watson could never have dreamed of. Add in smartphones and you have an extraordinary apparatus of data gathering, the like of which was previously only plausible within university laboratories or particularly high-surveillance institutions such as prisons. The political, technological and cultural limits to psychological surveillance are dissolving. The great virtue of the market, for neoliberals such as the Chicago School, was that it acted as a constant survey of consumer preferences which extended across society. But mass digitization and data analytics now offer a rival mode of psychological audit that potentially extends even further, engulfing personal relations and feelings which markets do not ordinarily reach.
In moving beyond survey techniques, researchers believe that they can now circumvent the quasi-democratic, political aspect of finding out what people value, but without simply depending on the market either. Analysing tweets, online behaviour or facial expressions in a largely clandestine fashion permits a degree of detached objectivity which is not available to the researcher who actually has to confront people in order to collect data. Watson’s dream of freeing psychology from its reliance on the ‘verbal behaviour’ of the subject looks to be nearly realized. The truth of our emotions will, allegedly, become plain, once researchers have decoded our brains, faces and unintentional sentiments.
As we move beyond the age of the survey, many of the same questions are being asked, but now with far more fine-grained answers. In place of opinion-polling, sentiment-tracking companies such as General Sentiment scrape data from 60 million sources every day, to produce interpretations of what the public thinks. In place of user-satisfaction surveys, public service providers and health-care providers are analysing social media sentiment for more conclusive evaluations.11 And in place of traditional market research, data analytics apparently reveals our deepest tastes and desires.12
One interesting element of this is that our quasi-private conversations with each other (for instance via Facebook) are viewed as good hard data to be analysed, whereas the reports we make in interviews or surveys are considered less reliable. Our conscious statements of opinion or critique are untrustworthy, whereas our unwitting ‘verbal behaviour’ is viewed as a source of inner psychological truth. This may make sense from the perspective of behavioural and emotional science, but it is disastrous from the point of view of democracy, which depends on the notion that people are capable of voicing their interests deliberately and consciously.
These developments have generated a new wave of optimism regarding what can be known about the individual mind, decision-making and happiness. Finally, the real facts of how to influence decision-making may come to light. At last, the truth of why people buy what they buy might be revealed. Now, over two centuries after Bentham, we might be about to discover what actually causes a quantifiable increase in human happiness. And in the face of a depression epidemic, mass surveillance of mood and behaviour might unlock the secrets of this disease, so as to screen for it and offer tips and tools to avoid it.
The unspoken precondition of this utopian vision is that society becomes designed and governed as a vast laboratory, which we inhabit almost constantly in our day-to-day lives. This is a new type of power dynamic altogether, which is difficult to characterize purely in terms of surveillance and privacy. The accumulation of psychological data occurs unobtrusively in such a society, often thanks to the enthusiastic co-operation of individual consumers and social media users. Its rationale is typically to make life easier, healthier and happier for all. It offers environments, such as smart cities, which are constantly adapting around behaviour and real-time social trends, in ways that most people are scarcely aware of. And in keeping with Bentham’s fear of the ‘tyranny of sounds’, it replaces dialogue with expert management. After all, not everybody can inhabit a laboratory, no matter how big. A powerful minority must play the role of the scientists.
We received a glimpse of this future in June 2014, when Facebook published a paper analysing ‘emotional contagion’ in social networks.13 The public response was similar to that of JWT’s survey subjects in Copenhagen and London in 1927: outrage. This one academic paper made headlines around the world, though not for the quality of its findings. Instead, the discovery that Facebook had deliberately manipulated the newsfeed of 700,000 users for one week in January 2012 seemed like an abuse of research ethics.14 It turned out that this platform, on which friendships and public campaigns depended, was also being used as a laboratory to probe and test behaviour.
Will this sort of activity still prompt outrage in another ten or twenty years, or will we have grown used to it? More to the point, will Facebook still bother to publish their findings, or will they simply run experiments for their own private benefit? What is troubling about the situation today is that the power inequalities on which such forms of knowledge depend have become largely invisible or taken for granted. The fact that they combine ‘benign’ intentions (to improve our health and well-being) with those of profit and elite political strategy is central to how they function. The only way in which such blanket administration of our everyday lives can now be challenged is if we also challenge the automatic right of experts to deliver any form of emotion to us, be it positive or negative.
How happy were you yesterday? How did you feel? Do you know? Can you remember? It’s possible that, even if you don’t, someone else could tell you. As the digital and neurological sciences of happiness progress, they are nearing the point where experts are more qualified to speak about your subjective state than you are. Or to put that another way, subjective states are no longer subjective matters.
Twitter is a case in point. Its 250 million users produce 500 million tweets per day, generating a constant stream of data which can potentially be analysed for various purposes. This is one of the more dramatic examples of big data accumulation in recent years. Ten per cent of this stream is made available at no cost, opening up enticing opportunities for social researchers, both in business and universities. The rest of the stream, up to the complete firehose of every single tweet, is available for a range of fees.
The research challenge is how to make sense of so much data, which involves building algorithms capable of interpreting millions of tweets. At the University of Pittsburgh, a group of psychologists has built one such algorithm, aimed at capturing how much happiness is expressed in a single 140-character tweet. To do this, the researchers created a database of five thousand words, drawn from digital texts, and gave every word a ‘happiness value’ on a scale of 1–9. A tweet can then be automatically scored in terms of its expression of happiness.
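To get a feel for how crude such word-level scoring is, here is a minimal sketch of the technique in Python. The word list and its values are invented for illustration; the Pittsburgh database assigns scores to roughly five thousand words, and this is only a toy version of the general approach, not the researchers’ actual algorithm.

import re

# Toy happiness values, invented for illustration; the real database assigns
# scores of 1-9 to roughly five thousand words drawn from digital texts.
HAPPINESS_VALUES = {
    "love": 8.4, "happy": 8.3, "sunshine": 8.1, "friend": 7.7,
    "ok": 5.5, "work": 5.2, "rain": 4.8,
    "tired": 3.4, "sad": 2.4, "hate": 2.2,
}

def tweet_happiness(tweet):
    """Average the happiness values of the scored words in a tweet.

    Words not in the lookup table are ignored; if no word is scored,
    the tweet receives no value at all.
    """
    words = re.findall(r"[a-z']+", tweet.lower())
    scores = [HAPPINESS_VALUES[w] for w in words if w in HAPPINESS_VALUES]
    return sum(scores) / len(scores) if scores else None

print(tweet_happiness("So happy to see sunshine with a friend"))   # about 8.0, read as a happy tweet
print(tweet_happiness("Tired of the rain. I hate Mondays."))       # about 3.5, read as an unhappy tweet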
The Pittsburgh project is designed to spot trends in happiness at an aggregate level, analysing 50 million tweets every day. It is not in itself interested in the happiness levels of individual users. Instead, it can identify some clear patterns in how happiness fluctuates across the population, both over time and over space. Happiness maps have been developed on the back of this data; the researchers now know that Tuesday is the least happy day of the week, and Saturday the most happy. This project might not actually report back on how happy you were last week. But a range of similar projects could, typically on the premise that it is for your own well-being, health or safety.
One such project is the ‘Durkheim Project’, developed by researchers at Dartmouth College and named after Émile Durkheim, one of the founders of sociology and author of Suicide, an analysis of variations in national suicide rates in the nineteenth century. Durkheim drew on the new statistical data on death rates that had accumulated in Europe over the preceding decades. The Durkheim Project aimed to go one better: by analysing social media data and mobile phone conversations, it would predict suicide.
The targets of this analysis are US military veterans, who are known to have a higher risk of suicide than the rest of the population. The question is how to identify those who need help before it is too late. With support from the Department of Veterans Affairs, which can provide access to medical records as an additional source of data, the Durkheim Project aims to provide an early warning when a particular individual is showing a heightened risk of suicide. This requires sophisticated forms of data analytics capable of extracting meaning from large quantities of data, again by learning what specific words are likely to mean. The sentences and grammatical constructions used by suicidal people are studied and taught to computers. Without any intrusion into the individual’s life, their feelings are being tracked. A similar project, at the University of Warwick in the UK, has used real suicide notes to teach computers how to spot suicidal thoughts within grammatical constructions.
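The underlying technique can be sketched as a simple text classifier that learns which words tend to appear in higher-risk and lower-risk sentences. The labels and training sentences below are invented for illustration; a real system of the kind described would be trained on clinical records and far larger, carefully labelled corpora. This is only a toy naive Bayes version of the idea, not the Durkheim Project’s method.

import math
import re
from collections import Counter

# Invented training sentences, for illustration only.
TRAINING = [
    ("i feel hopeless and completely alone", "higher-risk"),
    ("nothing matters any more", "higher-risk"),
    ("i cannot see a way out of this", "higher-risk"),
    ("had a good day at work today", "lower-risk"),
    ("looking forward to seeing friends this weekend", "lower-risk"),
    ("the new job is going well", "lower-risk"),
]

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

# Count how often each word appears under each label.
word_counts = {"higher-risk": Counter(), "lower-risk": Counter()}
label_counts = Counter()
for sentence, label in TRAINING:
    word_counts[label].update(tokens(sentence))
    label_counts[label] += 1

vocab = {w for counts in word_counts.values() for w in counts}

def classify(sentence):
    """Naive Bayes: pick the label whose word statistics best fit the sentence."""
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # Log prior plus sum of log likelihoods, with add-one smoothing.
        score = math.log(label_counts[label] / len(TRAINING))
        total = sum(word_counts[label].values()) + len(vocab)
        for w in tokens(sentence):
            score += math.log((word_counts[label][w] + 1) / total)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("i feel so alone and hopeless"))      # higher-risk
print(classify("really looking forward to friday"))  # lower-risk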
If individuals can be co-opted into such psychological surveillance programmes, then the possibilities for measurement increase accordingly. The growth of mobile devices as tools for ‘Health 2.0’ policies, aimed at capturing the well-being of individuals on a moment-by-moment basis, means that the medical gaze can penetrate much further into everyday life, beyond the boundaries of the surgery, hospital or laboratory. ‘Mood tracking’ is now a particular wing of the larger quantified self movement, in which individuals seek to measure fluctuations in their own mood, either out of concern or just curiosity.15 Apps such as Moodscope (based on a well-known psychiatric affect scale, PANAS) have been built to facilitate and standardize the tracking of one’s own mood.
Smartphone apps such as Track Your Happiness, developed at Harvard, or Mappiness, developed at the London School of Economics, which prod people every few hours for details of their present mood (reported as a number) and present activity, enable economists and well-being specialists to accumulate knowledge which was impossible to imagine only a decade ago. It turns out that people are happiest while having ‘intimate relations’, though one wonders what reporting this via a phone does for the quality of that experience.16
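A rough sketch of how such experience-sampling data might be aggregated, assuming each prompt yields a mood rating and an activity label; the records and field names below are invented for the example and are not drawn from any of the apps named above.

from collections import defaultdict
from statistics import mean

# Hypothetical experience-sampling records: each time the app prods a user,
# it stores a mood rating (here 0-10) and whatever the user says they are doing.
responses = [
    {"user": "u1", "mood": 8, "activity": "socialising"},
    {"user": "u1", "mood": 4, "activity": "commuting"},
    {"user": "u2", "mood": 6, "activity": "working"},
    {"user": "u2", "mood": 9, "activity": "socialising"},
    {"user": "u3", "mood": 3, "activity": "commuting"},
]

def mean_mood_by_activity(records):
    """Group momentary mood reports by activity and average them."""
    by_activity = defaultdict(list)
    for r in records:
        by_activity[r["activity"]].append(r["mood"])
    return {activity: mean(moods) for activity, moods in by_activity.items()}

print(mean_mood_by_activity(responses))
# e.g. {'socialising': 8.5, 'commuting': 3.5, 'working': 6}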
When researchers first began trying to collect data on the happiness of entire societies during the 1960s, they encountered a problem. This is another of those technical problems which cut to the heart of utilitarianism: to what extent can you trust people’s own reports of their happiness? The way people report happiness is likely to be skewed by a couple of things, though this of course assumes that there is something ‘objective’ about happiness to be reported in the first place. Firstly, they may forget how they actually experience their day-to-day lives and end up with a sunnier or gloomier overall take than is truly representative of their mood. We might consider this to be a form of delusion, although people are of course at liberty to narrate their lives however they see fit.
Secondly, they will be influenced by cultural norms regarding how to answer a survey question. If the question is, ‘Overall how happy do you feel with your life?’ or ‘How happy were you yesterday?’, some individuals may immediately react in certain ways, due to culture or upbringing, which lead them towards certain types of answer. They may feel that it is defeatist to complain and so exaggerate their happiness (a distinctly American problem), or conversely that it is vulgar to declare oneself happy and so under-report it (a more frequent phenomenon in France).
As happiness economics grew over the course of the 1990s, there emerged various strategies for getting around this problem. The goal was to access happiness as we actually experience it, rather than as we say we experience it. Obviously, this is as much a philosophical problem as a methodological one. What would it mean to access the ‘truth’ of happiness, without going via the individual’s own conscious reflections on it? Unperturbed, psychologists and economists have developed various ways of doing just this. One technique is the day reconstruction method, in which individuals participate in a happiness study by sitting down at the end of every day and producing a diary of how happy they felt at various times and what they were doing. This has some obvious flaws in terms of the possibility of misremembered experiences. But it takes one step towards cutting out the conscious, reporting mind in pursuit of some ethereal quantity of happiness that rises and falls within the mind.
The new surveillance and self-surveillance opportunities offered by data analytics and smartphones promise to eradicate this problem. People don’t need to report their happiness via a survey if their words can be interpreted en masse without them even knowing, or if they can offer real-time numerical feedback on it via a smartphone app. For two hundred years, the ambition to measure the ebbs and flows of mental life was restricted to the limits of institutions – prisons, university labs, hospitals, workplaces. The power hierarchies which facilitated this measurement were therefore visible, even if they were not challengeable. Today, as those institutional limits evaporate, they are neither of those things.
Yet this is not even the most extravagant possibility for utilitarian surveillance. At the outer reaches of the science of human happiness are research projects which strip away the experience or consciousness of it altogether. Happiness, by this account, is not so much a state of mind or consciousness, but a biological and physical state of being that can be known objectively regardless of the carrier’s own judgement or reports of it.
What has always been so seductive about the science of happiness is its promise to unlock the secrets of subjective mood. But as that science becomes ever more advanced, eventually the subjective element of it starts to drop out of the picture altogether. Bentham’s presumption, that pleasure and pain are the only real dimensions of psychology, is now leading squarely towards the philosophical riddle whereby a neuroscientist or data scientist can tell me that I am objectively wrong about my own mood. We are reaching the point where our bodies are more trusted communicators than our words.
If one way of ‘seeing’ happiness as a physiological event is via the face, the other way is to get even closer to its supposed locus: the brain. Various types of mood and disorder, including bipolar disorder and experiences of happiness, are now considered visible thanks to the affordances of EEG and fMRI scanners.17 The exaggerated claims that have been made for neuroscience are already legion, and the plausibility of ever entirely reducing mind (as studied by psychology) to brain (as studied by neuroscience) depends on fundamentally misunderstanding what the word ‘mind’ means in the first place. Nevertheless, it’s possible that a new utilitarian epoch is opening up, the like of which Bentham could never have imagined, in which happiness science reaches the point where it can bypass not only traditional surveys and psychological tests, but all physical and verbal indicators of mood, to access the mood itself in its physical manifestation. The fundamental meaning of the word ‘mood’ is being transformed.
As familiar concepts of consciousness and emotion become increasingly marginalized by physical symptoms and neurological events, something rather strange is taking place. Moods and decisions, once attributable to the self, begin to migrate to other parts of our body. The cultural imperative to relocate depression in the body has reached the point where scientists now believe that it can be diagnosed through a blood test. What if the patient disagreed? Would they be wrong? More bizarrely, the term ‘brain’ is morphing into an abstract concept that can refer to various body parts. The biologist Michael Gershon claims to have discovered a ‘second brain’ in the gut, which handles digestion, but which may experience its own moods and ‘mental illnesses’.
Few of the new instruments of surveillance have been invented with the aim of manipulating us or invading our privacy for political purposes. They are largely motivated by an honest scientific or medical instinct that human welfare will be improved if the nature of well-being can be better understood, through tracking it across the population over time. For those walking in Bentham’s footsteps, progress depends on the human sciences finding better ways of understanding the mind–body relationship, new means of linking emotive pleasures to physical things, and grappling with the endless riddle of what ‘really’ goes on inside our heads.
Where this is developed explicitly for our own health and well-being – as a great deal of it is – it becomes difficult to mount resistance. On the contrary, many of the new digital apps and analytics tools aimed at uncovering the secrets of happiness and well-being require us to actively cooperate in the measurement of ourselves, and to share data on our mood enthusiastically. There must be obvious benefits available for doing so, or else these forms of measurement would largely cease to work.
The problem is that this is never the end of the matter. What begins as a scientific enquiry into the conditions and nature of human welfare can swiftly mutate into new strategies for behavioural control. Philosophically speaking, there is a gulf separating utilitarianism from behaviourism: the former privileges the inner experience of the mind as the barometer of all value, whereas the latter is only concerned with the various ways in which the observed human animal can be visibly influenced and manipulated. But in terms of methods, technologies and techniques, the tendency to slip from the former into the latter is all too easy. Inner subjective feelings are granted such a priority under utilitarianism that the appeal of machines capable of reading and predicting them in an objective, behaviourist fashion becomes all the greater.
Likewise, what often begins as a basis on which to understand human flourishing and progress – fundamental ideas of enlightenment and humanism – suddenly reappears as a route to selling people stuff they don’t need, making them work harder for managers who don’t respect them, and getting them to conform to policy objectives over which they have no say. Quantifying relations among mind, body and world invariably becomes a basis for asserting control over people and rendering their decisions predictable.
The truth of decisions?
The Hudson Yards real estate project on the West Side of Manhattan is the largest development in New York City since the Rockefeller Center was built in the 1930s. When completed, it will be home to sixteen new skyscrapers, containing office space, around 5,000 apartments, retail space and a school. And thanks to a collaboration between city authorities and New York University (NYU), initially brokered by former mayor Michael Bloomberg, it will also be one vast psychology lab. Hudson Yards will be one of the most ambitious examples of what the NYU research team term a ‘quantified community’, in which the entire fabric of the development will be used to mine data to be analysed by academics and businesses. The behaviourist project initiated by Watson, of treating humans like white rats to be stimulated in search of a response, is now becoming integrated into the principles of urban planning.
One of the key ways in which the age of big data differs from that of the survey is that big data is collected by default, before anyone has decided how to analyse it. Surveys are costly to carry out and need to be carefully designed around specific research questions. By contrast, with transactional data researchers are in a position to collect as much of it as possible first and worry about their research questions second. The quantified community team have a rough idea of what they’re interested in: pedestrian flows, street traffic, air quality, energy use, social networks, waste disposal, recycling, and the health and activity levels of workers and residents. But none of this really matters when it comes to the design of the project. The lead developer of Hudson Yards is enthusiastic and agnostic at the same time. ‘I don’t know what the applications will be’, he says, ‘but I do know that you can’t do it without the data.’18 Observe everything first. Ask questions later.
It is rare for academic researchers to be involved in projects of such a scale. But where it is feasible, the possibilities for behavioural analysis and experimentation are vast. Behavioural psychology is founded on a brutally simple question: how to render the behaviour of another person predictable and controllable? Experiments which manipulate the environment, purely to discover how people respond, always bring ethical dilemmas with them. But when these travel beyond the confines of the traditional psychology lab and permeate everyday life, the problem becomes more political. Society itself is used and prodded to serve the research projects of a scientific elite.
As always with behaviourism, it can only function scientifically on the basis that those participating in experiments do so naively, that is, without being fully aware of what is going on or being tested. This can be disconcerting. In 2013, the British government was embarrassed when a blogger discovered that jobseekers were being asked to complete psychometric surveys whose results were completely bogus.19 Regardless of how the user answered the questions, they got the same results, telling them what their main strengths are in the job market. It later transpired that this was an experiment being run by the government’s ‘Nudge Unit’, to see whether receiving these results altered individuals’ subsequent behaviour. Social reality had been manipulated to generate findings for those looking down from above.
This logic of experimentation allows for policies to be introduced which would otherwise seem entirely unreasonable, or even illegal. Behavioural experiments on criminal activity show that individuals are less psychologically prone to take drugs or engage in low-level crime if the resulting penalty is swift and certain. The association between the act and the result needs to be as firm as possible if punishment is to succeed as a deterrent. In that sense, due process becomes viewed as an inefficient blockage, standing in the way of behaviour change. The much-celebrated HOPE (Hawaii’s Opportunity Probation with Enforcement) programme, which builds directly on this body of evidence, ensures that repeat offenders know they will be jailed immediately if found up to no good.
Projects such as the Hudson Yards quantified community, the Nudge Unit’s fake survey and HOPE share a number of characteristics. Most obviously, they are fuelled by a high degree of scientific optimism that it may be possible to acquire hard objective knowledge regarding individual decision-making, and then to design public policy (or business practices) accordingly. This optimism is scarcely new; indeed it tends to recur every few decades or so. The first wave occurred during the 1920s, inspired by Watson and Taylorist principles of ‘scientific management’. A second occurred in the 1960s, with the rise of new statistical approaches to management, whose most high-profile proponent was US Defense Secretary Robert McNamara during the Vietnam War. The 2010s represent a third wave.
What really drives this behaviourist exuberance? The answer in every case is the same: an anti-philosophical agnosticism, combined with an enthusiastic embrace of mass surveillance. These two things necessarily go together. What the behaviourist is really saying is this:
I start with no theory about why people act as they do. I make no presumption as to whether the cause of their decisions is found in their brain, their relationships, their bodies, or their past experiences. I make no appeal to moral or political philosophy, for I am a scientist. I make no claims about human beings, beyond what I can see or measure.
But this radical agnosticism is only plausible on the basis that the agnostic in question is privy to huge surveillance capabilities. This is why new epochs of behaviourist optimism always coincide with new technologies of data collection and analysis. Only the scientist who can look down on us from above, scraping our data, watching our bodies, assessing our movements, measuring our inputs and outputs, has the privilege of making no presumptions regarding why human beings act as they do.
For the rest of us, talking to our neighbours or engaging in debate, we are constantly drawing on assumptions of what people intend, what they’re thinking, why they have chosen the path they did, and what they actually meant when they said something. On a basic level, to understand what another person says is to draw on various cultural presuppositions about the words they’ve used and how they’ve used them. These presuppositions may not be theories in any strict sense, but more like rules of thumb, which help us to interpret the social world around us. The claim that it is possible to know how decisions are taken, purely on the basis of data, is one that only the observer in his watchtower can plausibly make. For him, ‘theory’ is simply that which hasn’t yet become visible, and in the age of big data, fMRI and affective computing, he hopes to be able to abandon it altogether.
Look at how this works today. Firstly, the theoretical agnosticism. The dream that pushes ‘data science’ forwards is that we might one day be able to dispense with separate disciplines of economics, psychology, sociology, management and so on. Instead, a general science of choice will emerge, in which mathematicians and physicists study large data sets to discover general laws of behaviour. In place of a science of markets (economics), a science of workplaces (management), a science of consumer choice (market research) and a science of organization and association (sociology), there will be a single science which finally gets to the truth of why decisions are made as they are. The ‘end of theory’ means the end of parallel disciplines, and a dawning era in which neuroscience and big data analytics are synthesized into a set of hard laws of decision-making.
The fewer assumptions that are made about human beings, the more robust the scientific findings. For long periods of its history, behaviourism referred primarily to the study of animals, such as rats. What made Watson a revolutionary figure within American psychology was his adamant view that the identical techniques should be extended to the study of human beings. Today, the fact that it is ‘quants’ (mathematicians and physicists, equipped with algorithmic techniques to explore large data sets) who are rendering our behaviour predictable is deemed all the more promising, given these individuals are not burdened by any theory of what distinguishes human beings or societies from any other type of system.
Secondly, the surveillance. As examples such as Hudson Yards or the Nudge Unit indicate, the new era of behaviourist exuberance has emerged on the basis of new high-level alliances between political authorities and academic researchers. Without those alliances, social scientists continue to labour under the auspices of ‘theory’ and ‘understanding’, as indeed we all do when seeking to interpret what those around us are up to in our day-to-day lives. Alternatively, there are companies such as Facebook, who are able to make hard, objective claims about how people are influenced by different tastes, moods or behaviours – thanks to their ability to observe and analyse the online activity of nearly a billion people.
Add mass behavioural surveillance to neuroscience, and you have a cottage industry of decision experts, ready to predict how an individual will behave under different circumstances. Popular psychologists such as Dan Ariely, author of Predictably Irrational, and Robert Cialdini, author of Influence: The Psychology of Persuasion, unveil the secrets of why people really take the decisions that they do. It transpires, so we’re told, that individuals are not in charge of their choices at all, that they can’t really tell you why they do what they do. Whether in the pursuit of workplace efficiency, the design of public policy or the search for a date, the general science of choice promises to introduce facts where previously there was only superstition. The fact that, no matter what the context, ‘choice’ always seems to refer to something which resembles shopping suggests that the decision scientists may not have thrown off the scourge of prejudice or theory as much as they would like.
And yet the apparent legitimacy of this data-led approach to understanding people is contributing to further expansions in surveillance capabilities. Human resource management is one of the latest fields to be swept up in data euphoria, with new techniques known as ‘talent analytics’ now available, which allow managers to evaluate their employees algorithmically, using data produced by workplace email traffic.20 The Boston-based company Sociometric Solutions goes further, producing gadgets to be worn by employees, to make their movements, tone of voice and conversations traceable by management. ‘Smart cities’ and ‘smart homes’, which are constantly reacting to and seeking to alter their inhabitants’ behaviour, are other areas where the new scientific utopia is being built. In an ironic twist in the history of consumerism, it has emerged that we could soon be relieved even of the responsibility for our purchasing decisions thanks to ‘predictive shopping’, in which companies mail products (such as books or groceries) directly to the consumer’s home, without being asked to, purely on the basis of algorithmic analysis or smart-home monitoring.21
The rhetoric of the data merchants is one of enlightenment: of moving from an age of guesswork to one of objective science, echoing how Bentham understood the impact of utilitarianism on law and punishment. But this is to completely obscure the power relations and equipment necessary for this form of ‘progress’ to be achieved at all.
Perhaps there is nothing surprising about any of this. We all intuitively understand that to make a digital transaction or to share a piece of information with friends is to become a research subject in the new all-encompassing laboratory. Controversies surrounding smart cities and Facebook focus on the privacy threat that these types of platform involve. But for the most part, the science which the new laboratory produces is beyond reproach: we are seduced by the idea that, underneath the liberal myth of individual autonomy, every choice has some cause or objective driver, be that biological or economic. What is too often forgotten is that this idea makes no sense whatsoever, absent the apparatuses of observation, tracking, surveillance and audit. Either we can have theories and interpretations of human activity, and the possibility of some form of self-government; or we can have hard facts of behaviour, and reconstruct society as a laboratory. But we cannot have both.
The happiness utopia
In 2014, Russia’s Alfa-Bank announced an unusual new type of consumer finance product called an Activity Savings Account.22 Customers use one of several bodily-tracking devices, such as Fitbit, RunKeeper or Jawbone UP, which measure how many steps they take per day. Each step taken results in a small amount of money being transferred into the activity account, where it accrues higher interest than in the standard account. Alfa-Bank has found that the customers who use this account are saving twice as much as other customers and walking 1.5 times as far as the average Russian.
The previous year, an experiment was conducted in Moscow’s Vystavochnaya subway station as part of the preparation for the 2014 Winter Olympics.23 One of the ticket machines was replaced by a new one containing a sensory device. Passengers were given the option of either paying thirty rubles for their ticket or performing thirty squats in front of the machine in two minutes. If they failed to achieve this, they had to pay the thirty rubles instead.
Services such as the fitness-tracking ticket machine currently remain gimmicks. The activity bank account is more serious. Employee fitness-tracking programmes, which are sold in terms of their calculated productivity benefits, have nothing gimmicky about them at all. When Bentham confronted the question of how to measure subjective feelings, he expressed a vague hope that it might be done through either money or the measuring of pulse rate. In this, he correctly anticipated the rudimentary tools of today’s well-being experts.
The next stage for the happiness industry is to develop technologies whereby those two separate indicators of well-being can be unified. Monism, the belief that there is a single index of value through which any ethical or political outcome can be assessed, is always frustrated by the fact that no single ultimate indicator of this value can be found or built. Money is all very well, but it leaves out other psychological and physiological aspects of well-being. Measuring blood pressure or pulse rate is fine up to a point, but it cannot indicate how satisfied we are with our lives. fMRI scans can now visualize emotions in real time, but they miss broader notions of health and flourishing. Affect scales and questionnaires run up against cultural problems of how different words and symptoms are understood.
This is why the capacity to translate bodily and monetary measures into one another is potentially so important right now. It begins to dissolve the boundaries which separate otherwise discrete measures of well-being or pleasure, and to build an apparatus capable of calculating which decision, outcome or policy is ultimately best in every way. This is a utopian proposition (in the literal sense of utopia as ‘no place’). There can be no single measure of happiness and well-being, for the good philosophical reason that there is not actually any single quantity of such things in the first place. Monism is useful rhetorically, and attractive from the perspective of the powerful who yearn for simple ways of working out what to do next. But does anyone actually believe that all pleasures and pains sit on a single index? Sure, we might debate matters as if that were the case, using the metaphor of ‘utility’ or ‘well-being’ with which to do so. But take away its objective neural, facial, psychological, physiological, behavioural and monetary indicators, and the ghostly notion of happiness as a single quantity also vanishes into thin air.
In which case, why build such an apparatus of measurement? Why go to such lengths to ensure that the various separate bits of it are joined up, connecting our bank balances to our bodies, our facial expressions to our shopping habits, and so on? Under the auspices of scientific optimism, we are being governed by a philosophy that makes no real sense. It is unable to specify, finally, whether happiness is something physical or metaphysical. Every time it is asserted as the former, it slips away again. Yet the apparatus of measurement keeps on growing, creeping further into our personal and social lives.
The Copenhagen tenant who kicked the JWT researcher down the flight of stairs in 1927 saw this for what it is: a strategy of power. The surveillance, management and government of our feelings is successful to the extent that it neutralizes alternative ways of understanding human beings and alternative forms of political and economic representation. This project will never reach its destination. Despite claims by neuroscientists to be crossing ‘final frontiers’ regarding decision-making or emotions, the search for the ‘objective’ reality of our feelings will keep being dashed, and keep extending further. What matters is that if unhappiness can be expressed via instruments of measurement, and if success can be understood in terms of quantifiable outcomes, then critical and emancipatory projects are ensnared, and their energies are harnessed.
Utilitarianism can sanction virtually any type of policy solution in pursuit of mental optimization, including quasi-socialist forms of organization and production on a small scale, where they appear to make people feel better and healthier. It favours human ‘flourishing’ in an open-ended, humanistic sense, which may be achieved through friendship and altruism, as recommended by positive psychologists. But if a definition of optimization were offered which included control over one’s circumstances and one’s time, a voice that exerted power over decision-making, and a sense of autonomy that wasn’t reducible to neural or psychological causality, this simply would not be computable. Such an idea of human fulfilment, in which each individual speaks her mind rather than reveals it unwittingly, where unhappiness is a basis for critique and reform rather than for treatment, and where mind–body problems are simply forgotten rather than targeted through relentless medical research, points towards a different form of politics altogether.
Over the years, a number of critical psychologists have sought to point this out, stressing how mental illness is entangled with disempowerment. There are plenty of inspiring ventures and experiments which seek to give people hope partly through restoring their say over their own lives. There are also businesses which do not rely on behavioural science to manage and sell to people. These scattered alternatives are all parts of some larger alternative, which, correctly understood, might even be a better recipe for happiness.