Computer technologies, like many other technologies of the past, are not simple—not only technically, but also in their relations to us. Digital technologies have a strongly ambivalent character: they are empowering, liberating, and transparent, yet at the same time intrusive, constraining, and opaque. We enjoy the sense of empowerment and emancipatory self-expression afforded by digital technologies, but anxiety and confusion perplex us in the face of the rapid changes they have entailed, not all of which we like. There is a light side and a dark side, and most of the time we find it hard to tease them apart.
For these reasons, and perhaps many others, our seemingly simple question about the changing relations between labor and computing did not yield simple answers. The question invited us to engage in certain moves toward theory, drawing on the ideas of giants such as Marx, Foucault, and Weber, along with many other thinkers and writers, to bring those ideas together with our own understanding of computer technology. Yet these engagements always seemed to end in new questions. It was a bit like a game of whack-a-mole: every time the theories and our experiences addressed one question, another popped up. We found ourselves taking one “detour” after another. Here we want to take you through these detours, and highlight insights from each, before coming back to the original question.
One of our very early questions (even before we had homed in on heteromation as a focus) concerned how we contend with doubt and confusion in the form of questions and anxieties punctuating everyday life. Am I spending too much time playing this video game? How can I get my spouse to stop using his smartphone obsessively? Shouldn’t I go take a walk? Am I unreasonable to feel this annoyed at that person and her loud cell phone conversation? How genuine are my experiences in the digital environment, and what “good” is attained through them? How can I justify (dis)engagement with this environment to myself and to (ir)relevant others? How often should I check my email, and how quickly should I respond to text messages? Should I sign onto Facebook, LinkedIn, Twitter, and the many other sites thrust at me every day? How do they affect my professional life and career? Do they enhance my opportunities for connection, employment, and advancement, or do they undermine my privacy, security, stability, and time for other pursuits? Who should my friends be? And how often should I check their posts? How should I feel about my followers? What should I share with them, and what should I hide? Should I take “technology vacations,” as some pundits advise, or even “digital detox”? In brief, we wonder if digital technology is improving our well-being as individuals and societies, or changing things in ways that are neither in our interests nor within our control. Is digital technology taking away the personal, communal, and human aspects of our lives, or enhancing them?
Despite such questions, most of us, most of the time, find ourselves attached to and engaged with digital technologies. They seem inevitable, indispensable, and irresistible; they invariably evoke fascination, attraction, and strong desires for their offerings. Certainly the technologies enable much of value: gathering information, sharing opinions, participating in politics, connecting to friends, family, and colleagues, crunching numbers, checking the weather, analyzing texts, listening to music, playing games, taking care of ourselves, and a multitude of leisure pursuits. At the same time, we observe others in incessant (obsessive?) engagement with the technologies, inviting us directly or indirectly to do the same.
We notice an increasing number of government, business, and social services (such as healthcare) that can only be accessed digitally. It is strictly within the computerized environment that we can operate as “normal” members of our communities and societies. Even critical commentators who urge caution about technology deliver dispatches from their digital devices, composing texts for transmission, via a few clicks, to the plentiful crop of readers they can reach on the internet. Pervasive systems such as computing appear inevitable, and we accept them as fate. Despite a certain measure of doubt, we are increasingly wrapped up in the technologies, accepting them with sincere appreciation for their utility and convenience, but also succumbing to their coercive power, and the peer pressure and social demands they seem to bring forth.
Technologies appear to come and go according to an inner logic beyond human control. This kind of thinking—that technology is inevitable, in whatever form it currently happens to be—is often referred to as “technological determinism,” and it has been the subject of scrutiny and criticism for several decades. Critics, coming from traditions such as the social constructivist perspective, contend that this view ignores the human element in the development and adoption of technology. Ultimately, they argue, it is human beings who are in charge, making choices about technology—a proposition that makes intuitive sense once stated. Despite such criticism, however, the deterministic argument has had great staying power, not unlike that of mythical dragons: every time its critics think they have cut off its head, the dragon grows it back with ever more resilience.
To understand the determinism implicit in the core of many current accounts of the relationship between computing and the economy, we need to interrogate a more basic relationship—namely, that between humans and machines. This relation is often understood in terms of a naturalistic and essentialist perspective that attributes inherent properties and capacities to humans and technologies: what each is good at. The intuitive appeal of the language of humans-are-good-at-certain-things and machines-are-good-at-others makes it all the more difficult to challenge the thinking behind it.
We saw one example of this in chapter 1 in the views of influential figures such as Herbert Simon and Richard Langlois. The false predictions that Simon made about the future of computers—e.g., “machines will be capable, within twenty years, of doing any work a man can do” (Simon 1960)—have revealed some of the fallacies in his thinking. Brynjolfsson and McAfee (2011, p. 62) take note of Simon’s claim and argue that “the set of tasks machines can do is not fixed. It is constantly evolving. …” This understanding, however, does not stop them from presenting the following thesis:
[T]here’s never been a better time to be a worker with special skills or the right education, because these people can use technology to create and capture value. However, there’s never been a worse time to be a worker with only “ordinary” skills and abilities to offer, because computers, robots, and other digital technologies are acquiring these skills and abilities at an extraordinary rate. (p. 3)
This view leads the authors to attribute feats to technology that grant it much more power than it actually has: “Today’s information technologies favor more-skilled over less-skilled workers, increase the returns to capital owners over labor, and increase the advantages that superstars have over everybody else” (ibid., p. 74). With technology being a key driver of this “skill-biased technical change,” as Brynjolfsson and McAfee call it, it would seem that the only option left to human beings is to adapt to it (although even this option is suspect, since, by the authors’ own logic, humans are chasing the moving target of increasing machine competence, a competence that is “constantly evolving”). In this fashion, the socioeconomic developments of the last few decades, including the increasing inequality that concerns Brynjolfsson and McAfee, among many others, come to seem merely the natural outcomes of technological developments.
One implication of this logic is to put the blame on the doorstep of those who have been the losers in the playing out of these trends, giving the “losers” but one option: educate and equip yourself with the right set of technical skills, and you will have a chance to be saved. Humans are invited to play a game of catch-up with powerful machines—or at least those humans with access to elite educational institutions are invited. In their continued invocation of the idea of “superstars,” Brynjolfsson and McAfee do not seem to have any suggestions for the rest of us, except that we should try to educate ourselves about technology, presumably because that is what computer technologies “favor.”1 The naturalized view tends to attribute too much agency to machines, leaving humans at the mercy of technology and making invisible the power relations that underlie the drive to automation, heteromation, and the other “inevitable” economic moves made in the name of capitalist growth. To counter this tendency, and to understand the drivers of change, we need to unnaturalize both humans and machines.
If it is not technology alone that drives change, then what else influences the growing socioeconomic disparities of the last few decades? The answer obviously depends on one’s attitude toward these disparities—i.e., whether they are fair, natural, and sustainable or unjust, anomalous, and dangerous. Neoliberal economists such as Larry Summers, the former U.S. Secretary of the Treasury, celebrate inequality, considering it “the other side of successful entrepreneurship ... that is something we surely want to encourage.”2
Many of us do not, in fact, want to celebrate or encourage inequality. There is little disagreement about the growing inequality of the last few decades. Economic statistics, which provide the basis for almost all interpretations, are quite telling in this respect. According to the latest Oxfam report, for instance, between 1988 and 2011, 46 percent of overall income growth around the globe went to the top 10 percent of the population, while the bottom 10 percent received only 0.6 percent (Oxfam 2016).
To see why this state of affairs is unfortunate, consider the view, shared by people on both sides of the political spectrum, that in the last few decades, our societies have transitioned from a normal distribution of income, wealth, and social mobility to a power law distribution (Oxfam 2016). The common explanation starts with the assumption that people appear on the social scene with varying degrees of “motivation,” which then puts them on different tracks with respect to their level of participation in economic activities, ultimately determining their share of the economic pie.
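To make the contrast between the two distributions concrete, the following sketch (our own illustration, not drawn from the sources cited above; the income figures and the Pareto exponent are arbitrary assumptions) compares the share of total income captured by the top decile under a normal-like distribution and under a power-law (Pareto) distribution:

```python
import random

def top_decile_share(incomes):
    """Fraction of total income held by the top 10% of earners."""
    ranked = sorted(incomes, reverse=True)
    k = max(1, len(ranked) // 10)
    return sum(ranked[:k]) / sum(ranked)

random.seed(42)
N = 100_000

# A "normal-era" economy: incomes clustered around a mean,
# truncated at a small positive floor (parameters are illustrative).
normal_incomes = [max(0.01, random.gauss(50_000, 15_000)) for _ in range(N)]

# A power-law economy: Pareto-distributed incomes with a heavy tail
# (alpha = 1.5 is an assumed, illustrative exponent).
pareto_incomes = [30_000 * random.paretovariate(1.5) for _ in range(N)]

print(f"top-10% share, normal:    {top_decile_share(normal_incomes):.2f}")
print(f"top-10% share, power law: {top_decile_share(pareto_incomes):.2f}")
```

Under the bell-shaped distribution, the top decile holds only modestly more than its 10 percent headcount; under the heavy-tailed distribution, the same decile captures a large fraction of all income—the pattern of concentration that the Oxfam figures describe.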
The problem with this entrepreneurial, psychologized theory of human behavior, as we discussed in chapters 9 and 12, is that it puts the psychological cart in front of the socioeconomic horse. In so doing, it places the burden of change on individuals, sanctioning the manipulated “nudging” of their behaviors through delicate mechanisms of control. An alternative approach is to look for the drivers of change in the inherent tensions of modernism and capitalism—a tack that students of Marx, Foucault, Weber, and others have pursued. Accounts based on this approach demonstrate patterns of cyclical development, where one tension gives rise to the next, one social settlement unsettles another, and one opportunity undermines the previous one. Our formulation of this approach in terms of predicaments and possibilities is intended to capture this dynamic with an eye to the interactions between computing technology and the capitalist economy. This formulation led us to a “rewriting” of the history of computing (chapter 1), of capitalist change (chapter 3), and of the drivers and mechanisms of engagement with computing technology (chapters 4 and 10). To understand the growing inequalities of the last few decades, we must look beyond statistical trends to identify the underlying drivers of change that determine their dynamics.
Let us now return to the problem of harmony in the capitalist system, as professed by neoliberal thinkers. Capitalism is often referred to as a “system” because, like any other social arrangement, it involves a large number of parts (people, institutions, technologies, and so on) interacting with each other in all manner of political, economic, and cultural relations. And systems have an “inside” and an “outside,” as any systems theorist would postulate, which brings up the question of who/what is inside capitalism, and who/what is outside. Although we rarely ask this question, there is a common and implicit assumption that we are all inside the capitalist system because we are all part of it. This, of course, makes intuitive sense because we all, in fact, operate within the system, playing by its rules, benefiting from its resources and institutions, and contributing to it according to our ways and means. By this logic, anyone who lives in a capitalist society is an insider, regardless of their status, class, race, ethnicity, gender, or education. There is a great deal of truth to this thinking, but there is also much more to the simple logic than meets the eye.
To see why, consider the case of Native Americans (or any other aboriginal people, for that matter) during the European takeover of the continent in the fifteenth and sixteenth centuries and afterward. The colonial system that drove this process—another “system” with its own parts and its own inside and outside—did not consider Native Americans as belonging to it. The natives were outside the system; there was “them” (the savages, people without civilization), and there was “us” (the civilized, the superior). Thus, European settlers in North America, by and large, oppressed the natives—hence the repugnant nineteenth-century folk expression “the only good Indian is a dead Indian,” and, earlier, the Catholic Church’s easy acceptance of the enslavement of the indigenes as a legitimate social formation. European settlers and missionaries, by contrast, were part of the colonial system regardless of how filthy, brutal, or murderous they were. That they were the “subjects” of the monarchs of England or Spain automatically gave them status as insiders. This division of peoples was a relatively clear-cut case of a system with an inside and an outside.
The story of the capitalist system, however, is less straightforward. Take a nineteenth-century lower-class Londoner of the kind described by Dickens (say, Oliver Twist), or an early twentieth-century factory worker, such as the one portrayed by Charlie Chaplin in Modern Times. These individuals were inside the capitalist system, not only in the sense that they were the “subjects” of a sovereign power, but in the sense that the capitalist system needed them. And because it needed them, it provided them with at least two things: a means of subsistence and a justification for why they should play along with capitalist rules. And since these provisions were not always adequate, there was also a need for mechanisms of control. In this fashion, control and consent are historically and closely tied to each other, as Marcuse (1964) formulated (see chapter 3). Early on—that is, roughly up until the mid-twentieth century—control took a rather direct form, demanding obedience from workers on the shop floor. The good worker of industrial capitalism was, therefore, not a dead one, or a slave, but an obedient person.
The difference between the “good Indian” and the “good worker” is important in that it can help us understand the basis of class exploitation in early capitalism. With this difference in mind, Erik Olin Wright (2005) formulated a theory of class exploitation on the basis of three criteria:
These three conditions, in other words, should be in place in order for a relationship to be considered exploitative. That is why, according to Wright, the industrial worker, but not the Indian, can be considered exploited; Indians were oppressed but not necessarily exploited (apart from the slaves of the Spanish).
How about the rest of us—all the Googlers, Facebookers, Twitterers, YouTubers, Instagrammers, Mechanical Turkers, gamers, citizen scientists, self-service customers of banks, insurance companies, and other corporations, and the rest of the army of billions that we selectively accounted for in part II of the book? Are we exploited?
On the one hand, we deeply agree with those who identify a strongly negative, exploitative, and coercive thread in the way computing technology supports and feeds current capitalism. On the other, we could not fail to notice, and partially share, the sense of fascination, excitement, and optimism that surrounds this technology. It was in dealing with this intellectual predicament that we came up with the notion of heteromation and the whole conceptual apparatus that accompanies it here.
The concept of heteromation, for us, strikes a meaningful and pragmatic balance between these views in a number of ways. First, it does justice to the labor, ingenuity, and creativity of the growing number of human beings who contribute value in the current economy, gaining no or minimal reward, recognition, or compensation. Second, it recognizes the power of computing technology and its transformative potential for socioeconomic, cultural, and political change, while avoiding the common deterministic fallacy of putting humans at the mercy of machines. Third, it provides a fair understanding of the sociocultural mechanisms that drive participation and engagement, but that also obscure the delicate techniques of control and even coercion in the current environment. In doing so, it corrects the psychologizing tendency of attributing undue credit and blame to individual motivations and shortcomings, and puts the responsibility instead where it indeed belongs: the unsustainable structure of the current capitalist system. Fourth, heteromation captures and integrates the economic and social aspects of the predicaments of contemporary life at both the individual and collective levels. It can, as such, unveil some of the mystery that surrounds recent technological developments and socioeconomic displacements that accompany them, and that leave many in doubt and darkness about their current and future lives. These mechanisms largely work through a logic of inclusion (not exclusion, as in Wright’s explanation of exploitation), drawing people into the fold of computing while at the same time keeping them out of the circle of capitalist elites or denying them a meaningful role in the governance of their lives, communities, and societies.
This tension between digital inclusion and social exclusion creates a predicament for capitalism. At its base, the predicament is not new; it is something that Marx noticed a long time ago. As we discussed in chapter 3, Caffentzis (2013, p. 72) summarizes Marx’s point:
Hence, the capitalist class faces a permanent contradiction it must finesse: (a) the desire to eliminate recalcitrant, demanding workers from production, (b) the desire to exploit the largest mass of workers possible.
The predicament, in other words, derives from the fact that, on the one hand, capitalism needs human labor as the sole source of value and, on the other, it aspires to minimize or even eliminate the cost of labor. How, then, could it deal with this tension? It turns out that the tension cannot be easily resolved, as any serious economist would readily acknowledge. Consider Hal Varian, Google’s chief economist, who, like many other theorists of current capitalism, recognizes the deep economic changes brought about by computer technology—in particular, the reduction of what economists call the “marginal cost of reproduction” to almost zero. What this means is that it costs almost nothing to reproduce and distribute information, because of the properties of the digital medium (Ekbia 2009; Kallinikos, Aaltonen, and Marton 2013). The production of information, on the other hand, is a very different story, as Shapiro and Varian (1998, p. 21) cogently put it: “Information is costly to produce but cheap to reproduce.” This statement captures an opportunity that economists have smartly identified: minimize production costs, and you have a good business model for a product. (See also Caffentzis’s [2003] discussion of this point.)
An example of such a product is the instantaneous online translation service Google Translate, which basically generates translations by mixing and matching fragments of all the human-generated translations that the system holds in its vast repository of texts. The repository is the gold mine that Google digs into in order to provide its services. It is with gold mines such as this in mind that Brynjolfsson and McAfee (2011, p. 27) wonder, “What would happen to the digital world if information were no longer costly to produce? What would happen if it were free right from the start?” To this question, they candidly provide the following answer:
The old business saying is that “time is money,” but what’s amazing about the modern Internet is how many people are willing to devote their time to producing online content without seeking any money in return. … The billions of hours that people spend uploading, tagging, and commenting on photos on social media sites like Facebook unquestionably creates value for their friends, family, and even strangers. Yet at the same time these hours are uncompensated, so presumably the people doing this “work” find it more intrinsically rewarding than the next best use of their time. To get a sense of the scale of this effort, consider that last year, users collectively spent about 200 million hours each day just on Facebook, much of it creating content for other users to consume. That’s ten times as many person-hours as were needed to build the entire Panama Canal. (ibid; emphasis added)
This description is hardly in need of commentary, except for a few “little” assumptions:
While it is difficult to dispute the first assumption on its own terms, we can understand what “willingness” really means in light of the second assumption. Imagine an unemployed, underemployed, or even employed college graduate with a job for which they are overqualified—of which there are plenty nowadays—and consider what the “next best” use of their time might be. Or consider a person, even a manager, with a relatively well-paying job, who comes home in the evening or on the weekend exhausted after working in a tense environment that demands more than the paid 40 hours without overtime compensation, including the expectation that the worker will stay online at home, lest she miss an email or a call from her manager. What is the next best thing that this individual can do with her time?3 Such scenarios, all too common, do not seem to penetrate sanitized discourses positing rational economic actors calmly deciding among intrinsic rewards.
And then there are the “strangers,” whom Brynjolfsson and McAfee do not name, making us wonder who they have in mind: the neighbors around the block who have lost their jobs to automation; the small downtown shop owner on the verge of bankruptcy because Walmart has opened a Superstore in the area; the local coffee shop taken over by Starbucks; the elderly person who must struggle through long phone menus trying to fill a prescription; or perhaps the Congolese fisherwomen, graphic designers in Botswana, activists in San Salvador, and cattle herders in the Serengeti who are so precisely invoked as beneficiaries of the new digital age by Eric Schmidt and Jared Cohen (2014; see Assange 2014, p. 54). We suggest that the “strangers” who benefit from people’s free labor must decisively include the major corporations and other winners of new capitalism, some of whom we have discussed in this book.
Our argument in the latter chapters of the book has been that the current state of affairs is not sustainable—socially, politically, or environmentally. This assessment is shared by many prominent academics and intellectuals, including economists (Piketty 2014, Stiglitz 2014), social scientists and environmentalists (Altieri 1999, Dyer-Witheford 1999, Vandermeer 2011, Bradley 2014, Hornborg 2014, Klein 2014, Reich 2015), religious leaders and humanists such as Pope Francis and members of the Dark Mountain Collective, and many others. These figures look at the current situation from very different perspectives and prescribe different solutions, to be sure, but they all share concerns about the risks that current trends pose to our civilization and to humanity as a whole. If this many credible people see the problems, then, how is it that these trends continue to expand with alarming recklessness?
The answer to this question is multifaceted, and we do not claim to offer more than the beginnings of one. We do know, however, that part of the answer lies in how people, particularly elites with clout, act within the current situation to foster their own interests. Political, intellectual, and academic elites shape a great deal of the thinking and discourse about social issues. Although in this book we have used the word “capitalism” as if it represents a personified unity, we are aware that in reality there is no such reified entity as “capitalism.” What happens, rather, is that a set of processes has been set in motion in modern societies that has given rise to, among other things, what we call the capitalist system. Central to these processes is a “triumphant reorganization of capitalism that is deploying the new technological innovations to solidify an unprecedented level of global domination” (Dyer-Witheford 1999, p. 236). The inner dynamic of these processes is largely above and beyond the will of any single individual or group of individuals. The dynamic is earthly and material, however, and it provides a space for human intervention.
We have already seen one example of how intellectual elites might have influenced our thinking, and hence the course of events regarding computing and the economy, in the person of Herbert Simon—a towering figure who influenced twentieth-century thought in a broad range of areas, from economics, operations management, and organizational science to artificial intelligence, psychology, and cognitive science. We have also seen that, despite his many false predictions, Simon doggedly stood by his views. Now, to consider one of Simon’s peers, let us quote another Nobel Laureate—Wassily Leontief—who made the following statement in 1983:
The role of humans as the most important factor of production is bound to diminish in the same way that the role of horses in agricultural production was first diminished and then eliminated by the introduction of tractors.
Brynjolfsson and McAfee, who quote Leontief with approval, build on his proposition to argue for “technological unemployment”—i.e., the displacement of human beings from their jobs by technologies of automation.4 We not only disagree with the premise of Leontief’s argument about the diminishing role of humans in production; we also find the whole analogy between horses and humans misguided. We make this judgment not from an anthropomorphic and chauvinistic human perspective, but from a purely socioeconomic one. We consider statements such as the above to be self-fulfilling prophecies of neoliberal thinkers such as Gary Becker (1962), who sought to create a new kind of subjectivity that reacts to situations in predictable ways. In their attempt to turn humans into predictable automata, these ideologues prophesied an image of humans as akin first to horses, and then to machines. What they failed to notice is the key difference between humans, other animals, and machines: it lies not in what each is “good at,” but in the degree to which their actions and behaviors can be regulated, monitored, and controlled. The same qualities that make humans malleable, shapeable, and oftentimes fallible also allow them to reflect, resist, and revolt—and that is what makes humans special, without equivalent in the animal kingdom or in the realm of machines.
We have almost come full circle back to our original question about the complex and convoluted relationship of humans with machines. We cannot, however, leave the discussion without some measure of self-reflection on our part. Our perspective in this book, which we have also pursued in our earlier works (e.g., Nardi and O’Day 2000, Ekbia 2008), can be broadly described as the “critical study of computing.” We consider ourselves “critical friends” of technology. This line of research has a relatively long tradition in informatics and computer and information science, going back to the work of people such as Rob Kling (1996). The work has had varying degrees of impact on the real practice of computing, some of which might be more obvious and acknowledged than others. At the same time, the long tradition has also allowed and encouraged some degree of self-reflection and self-criticism within the community. A good example is the posthumous critique of Kling’s legacy by some of his close collaborators (King, Iacono, and Grudin 2004). The authors admire Kling for, among other things, his “strong inclination to view as dubious any statement that was not grounded in empirical evidence or theoretical analysis, particularly those that encouraged people to take actions that would ultimately benefit those making the statement” (King, Iacono, and Grudin 2004, p. 3). Among such statements, the authors mention those made by the AI pioneer Marvin Minsky in an interview with Life magazine in November 1970:
In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months, it will be at genius level and a few months after that its powers will be incalculable. (Darrach 1970, p. 60; in King, Iacono, and Grudin 2004, p. 2)
This and similar statements by Minsky and other AI practitioners interviewed by Life led the poor reporter to conclude: “Computers could free billions of people to spend most of their time doing pretty much as they damn please” (1970, p. 65; in King, Iacono, and Grudin 2004). It was conclusions such as these, and the “expert” statements that provided fodder for them, that drove Kling, rightly and righteously, toward a critical perspective. And it was Kling’s sharp and skeptical analysis that drew the admiration of others for his work. However, as King, Iacono, and Grudin point out, “a strong critical perspective can be non-reflective and too quick to dismiss other points of view,” partly because it takes a “highly protectionist stance toward the common person” and rails “against class advantages that technology might have for the rich and powerful” (King, Iacono, and Grudin 2004, p. 4). One outcome of such a stance, according to the authors, was Kling’s failure to appreciate “the dynamics of exponential change.” This is the kind of change that is, for instance, embedded in Moore’s Law, and that the human mind is apparently incapable of easily grasping.5 King, Iacono, and Grudin describe this as an outcome of “the slippery slope of a critical perspective.”
The warning about the slippery slope cannot but give us pause about our intellectual vulnerability—a danger that we take very seriously. Although there is no potion or panacea that would protect us against this, we have tried hard throughout this book to avoid the slippery slope by maintaining a balance between the positive, productive, and promising aspects of the current computerized economy and its negative, dark, and damaging aspects. To that end, we have consulted a broad range of literatures coming from different perspectives, trying to do justice to their viewpoints. We have further drawn on our own empirical studies and our own daily experiences, and those of the people around us, to attain a concrete and grounded understanding of current circumstances.
Our labors have aimed not only to expose, to the best of our ability, the complex system of humans, machines, services, political entities, and economic interests that come together in multiple layers of mediation and remediation throughout our computerized environment, but to venture toward the equally complex issue of making evaluative judgments about technology. The computer-mediated activities we enjoy make a huge difference in our lives, in matters that are often and at once pragmatic, delicate, and deeply human in character. But this has come at the cost of inequality, insecurity, anxiety, and numerous other concerns that we have examined in this book. Yet seeds of change may lie within the very technologies we have discussed, although human reason, aided by our sense of fairness and mutual dependence, must prevail if we are to successfully cultivate them.