4  Coding Cow Clicker: The Work of Algorithms

O Adam, Adam! no longer will you have to earn your bread by the sweat of your brow; you will return to Paradise where you were nourished by the hand of God. You will be free and supreme; you will have no other task, no other work, no other cares than to perfect your own being. You will be the master of creation.

Harry Domin, in R.U.R. by Karel Čapek1

It Felt Like a One-Liner

When I was pursuing graduate studies at Stanford University in 2007, I had little idea that just around the corner undergraduates and teaching assistants were making thousands of dollars overnight by designing simple applications for a new platform called Facebook. Using the company’s application programming interface (API), these students would code a program to share things like hugs or “hotness points,” often in a matter of hours.2 Once they began placing ads in the applications, money flooded in, contributing to a “gold rush” atmosphere in a few classrooms and dorms.3

But what compelled all those users to send one another hotness points, to participate in these public networks of attention? One person leading these classes was B. J. Fogg, a noted psychologist of online behavior who runs the Persuasive Technology Lab at Stanford. Fogg’s work on “captology” and persuasive technology argues that tools like Facebook have profound behavioral effects because they tap into significant mental triggers. He identifies “persuasive design” as the fulcrum of a person’s motivation, her ability or capacity to act, and the specific triggers that could push her into taking particular actions.4 Students building these early apps created products that induced thousands of people to engage in particular forms of work that are lucrative to advertisers: clicking on a button, sharing a link with friends, or some other simple mechanic to encourage engagement. These apps typically requested access to a user’s network of Facebook friends, allowing the software to send messages to other contacts and leverage each user’s relationships to extend its reach. Many of them offered limited value to their players but functioned as lucrative cultural viruses for their developers, spreading tailored advertising rapidly across social networks.

Since those early days, “social gaming” has become more sophisticated and more profitable, drawing the attention of a huge range of actors, from major videogame studios to social activists, all hoping to catalyze particular forms of cultural work among millions of people. As social gaming has risen to become a significant market in its own right, the boundary lines of play continue to blur. This transition has often been framed through the term “gamification”—the idea of using cause-and-effect mechanisms inspired by gameplay to encourage particular behaviors. As one leading advocate defines it,

gamification can be thought of as using some elements of game systems in the cause of a business objective. It’s easiest to identify the trend with experiences (frequent flyer programs, Nike Running/Nike+ or Foursquare) that feel immediately game-like. The presence of key game mechanics, such as points, badges, levels, challenges, leader boards, rewards and onboarding, are signals that a game is taking place. Increasingly however, gamification is being used to create experiences that use the power of games without being quite as explicit.5

Gamification is a deliberate grafting of system onto lifeworld, using Habermas’s terms again, creating a superstructure of metrics and arbitrary goals attached to cultural behaviors. Facebook is full of these cues, some more subtle than others, for translating abstract conceptions of friendship and communication into granular, countable actions. From the number of friends one accumulates to declaring affiliations with particular fan groups or political causes, Facebook is a vast system for measuring engagement. Tagging another user in a post or a photograph becomes a kind of social grooming, giving that person a small mental boost and the pleasure of being publicly recognized.6

But gamification on Facebook reaches its apogee in the third-party applications that millions of people use on the site. Perhaps the most notorious example is FarmVille, the addictive social farming game released by Zynga in 2009, which at its peak attracted 80 million players.7 A classic example of gamification, FarmVille hooked players by creating a set of cues and rewards for sustained engagement over long periods of time. Certain actions were deferred by the game rules, requiring a player to come back hours or days later to harvest their crops or perform other maintenance activities. This perversion of the asynchronous flexibility of typical online interaction created its own seductive rigor for players, leading some to wake up in the middle of the night to tend to their virtual farms. For a certain minority of players, the solution was to make small payments that would waive these oppressive rules, at least temporarily, allowing a crop to grow instantly, for example. That minority has supported the entire business model, generating hundreds of millions of dollars in annual revenue. FarmVille and its successors are effective at eliciting particular rote behaviors from humans through a combination of carrots and sticks, engaging them in actions that the company can monetize directly or use to expand its network of users.
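The deferred-action mechanic described above is simple enough to sketch. The following is a minimal, hypothetical model of a timed crop with a pay-to-skip option; the class names, six-hour grow time, and rush price are my own illustrative inventions, not Zynga’s actual code:

```python
import time

class Crop:
    """A hypothetical FarmVille-style crop: planted now, harvestable later."""
    RUSH_COST = 0.25  # illustrative real-money price to skip the wait

    def __init__(self, name, grow_seconds):
        self.name = name
        # The harvest deadline is set by the game's rules, not the player.
        self.ready_at = time.time() + grow_seconds

    def is_ready(self, now=None):
        return (now if now is not None else time.time()) >= self.ready_at

    def rush(self, farmer):
        """Waive the timer for a small payment: the monetization hook."""
        if farmer.balance >= self.RUSH_COST:
            farmer.balance -= self.RUSH_COST
            self.ready_at = time.time()  # crop becomes harvestable now
            return True
        return False

class Farmer:
    def __init__(self, balance=1.00):
        self.balance = balance

# The core loop: the game, not the player, decides when play may resume.
farmer = Farmer()
wheat = Crop("wheat", grow_seconds=6 * 60 * 60)  # come back in six hours...
print(wheat.is_ready())  # False: locked out
wheat.rush(farmer)       # ...or pay to make the crop grow instantly
print(wheat.is_ready())  # True
```

The design choice worth noticing is that the timer exists only to be sold back to the player: the “rule” and the revenue stream are the same line of code.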

The cultural narrative layer of the farm masks a mesmerizing Skinner box, the classic tool of operant conditioning, which, in this case, links revenue-generating behaviors to the innate human rewards of social connectedness and completion. As The Atlantic paraphrased one company executive: “One of the most compelling parts of playing Zynga’s games is deciding when and how to spam your friend with reminders to play Zynga’s games.”8 Ultimately these games are a kind of escapism masquerading as efficiency—plant your crops, build your empire, complete this task all in sixty seconds while you wait in line. But that pelletized, incremental escapism obscures its own forms of discipline and productivity. FarmVille is both of and beyond Facebook, leveraging the site’s persistent modes of contact with users to capture a shocking amount of their attention and cash. In the extreme, these games can foster forms of addiction that approach the personal destruction of drugs and alcohol.9

Ian Bogost has been a persistent critic of gamification, arguing that it should be reframed as “exploitationware” for its abuse of human susceptibility to manipulation by cynical marketers. But like others, he acknowledges the potency of games to motivate behaviors: “even condemnations of video games acknowledge that they contain special power, power to captivate us and draw us in, power to encourage us to repeat things we’ve seemingly done before, power to get us to spend money on things that seem not to exist.”10 The term exploitationware is useful because it highlights the essentially commercial aspect of these games, signaling their role as algorithmic culture machines that effectively mine or extract particular forms of value through interactions with users.

It was in reaction to this trend that Bogost created his own piece of exploitationware that was also “persuasive.” Cow Clicker began as a satirical response to the mindless repetition of these social games:

Games like FarmVille are cow clickers. You click on a cow, and that’s all you do. I remember thinking at the time that it felt like a one-liner, the kind of thing you would tweet. I just put it in the back of my mind.11

But as Bogost’s public critique of Zynga and its ilk drew more attention, he decided to demonstrate his ideas with a game that emulated the worst excesses of manipulative social games. As journalist Jason Tanz described the whole adventure for Wired:

The rules were simple to the point of absurdity: There was a picture of a cow, which players were allowed to click once every six hours. Each time they did, they received one point, called a click. Players could invite as many as eight friends to join their “pasture”; whenever anyone within the pasture clicked their cow, they all received a click. A leaderboard tracked the game’s most prodigious clickers. Players could purchase in-game currency, called mooney, which they could use to buy more cows or circumvent the time restriction. In true FarmVille fashion, whenever a player clicked a cow, an announcement—“I’m clicking a cow”—appeared on their Facebook newsfeed.12

Cow Clicker was deliberately designed to be absurd, a meaningless game that would reveal the hypocrisy and manipulation of gamification. But it became popular, first as an ironic protest from others who shared Bogost’s views, then as a game in its own right. Players either didn’t realize that this was a satire, or played in spite of that knowledge, like the stay-at-home father who told Tanz, “instead of stupid games that have no point, we might as well play a stupid game that has a point.”13 At its apogee, over 50,000 people were clicking on digital cows, and Bogost found himself enmeshed in his own Skinner box of feedback, getting rewarded by the player community when he added new features to the game. Bogost has described this process as a kind of “method design” like method acting, putting himself into the creative space of a social game designer and ultimately suffering the same kind of systemic, dehumanizing entanglement with the software that he sees it inflicting on players: “It’s hard for me to express the compulsion and self-loathing that have accompanied the apparently trivial creation of this little theory-cum-parody game.”14
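The rule set Tanz describes is small enough to model directly. The following toy sketch follows his summary—one click per six hours, shared pastures of up to eight friends, mooney to circumvent the timer, a leaderboard, and the newsfeed announcement—but all names and structure are illustrative, not Bogost’s actual implementation:

```python
from datetime import datetime, timedelta

COOLDOWN = timedelta(hours=6)  # the infamous six-hour lockout
PASTURE_SIZE = 8               # up to eight friends per pasture

class Player:
    """A toy model of Cow Clicker's rules as Tanz describes them."""
    def __init__(self, name):
        self.name = name
        self.clicks = 0        # one point per click, called a "click"
        self.mooney = 0        # in-game currency, bought with real money
        self.pasture = []      # friends whose clicks are shared
        self.last_click = None

    def join_pasture(self, other):
        if len(self.pasture) < PASTURE_SIZE and len(other.pasture) < PASTURE_SIZE:
            self.pasture.append(other)
            other.pasture.append(self)

    def click_cow(self, now):
        if self.last_click is not None and now - self.last_click < COOLDOWN:
            if self.mooney > 0:
                self.mooney -= 1   # pay to circumvent the time restriction
            else:
                return False       # locked out: come back in six hours
        self.last_click = now
        self.clicks += 1
        for friend in self.pasture:
            friend.clicks += 1     # whenever anyone clicks, the pasture shares it
        return f"{self.name}: I'm clicking a cow"  # the newsfeed announcement

def leaderboard(players):
    """Rank the game's most prodigious clickers."""
    return sorted(players, key=lambda p: p.clicks, reverse=True)

alice, bob = Player("alice"), Player("bob")
alice.join_pasture(bob)
noon = datetime(2010, 7, 1, 12, 0)
print(alice.click_cow(noon))                       # both gain a click
print(alice.click_cow(noon + timedelta(hours=1)))  # False: too soon, no mooney
```

Stripped to code, the “game” is nothing but a counter, a timer, and a broadcast hook—which is precisely Bogost’s point.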


Figure 4.1 Cow Clicker screenshot.

For Bogost, the heart of this critique of the social gaming phenomenon and the social networks it relies on is another foundation-stone in the philosophy of technology, Martin Heidegger’s notion of enframing. In very simple terms, Heidegger argued that technologies (and our social world in general) tend to nudge us into certain modes of thinking about what is possible and what can be revealed about the universe. We see a hammer and we think about what we can hammer with it; but a hammer could also be used to open a bottle, to prop open a door, to hold down papers on a windy day.

In Unit Operations, Bogost describes how social networks encourage modes of enframing by explicitly designing representations of social relationships and tools for manipulating them: “In LinkedIn introducing one business associate to another suddenly becomes a formal unit operation: a set of software interactions that enable bigger professional networks while fixing users’ individual experiences.”15 Cow Clicker takes this enframing to a logically absurd conclusion, allowing players to accumulate extra points when friends “click on your clicks,” formalizing the meaningless action of clicking on a cow. In Bogost’s terms, the unit operation is evacuated of almost all real content, leaving only the satirical digital cow behind, while the network infrastructure, the procedural operations of an addictive social game, remain highly visible. The way that Bogost’s experiment spiraled out of control reflects the ongoing debate about technicity launched by philosophers like Heidegger and Simondon, but now played out in status updates and social games. As users grapple with the intellectual and emotional consequences of enframing, a battle for meaning emerges. The runaway semiotics of Cow Clicker signal that this is satire and sincere pursuit, game and gulag all at once.

Bogost’s critique of these modes of enframing explores the troubling intersection of social engagement, personal compulsion, and interactive design. Cow Clicker is a bad game on multiple levels: its design is deliberately poor and uninteresting; it explicitly aims to waste its players’ time through arbitrary six-hour deadlines; it shamelessly leverages their social networks to expand its viral reach; it tempts them into spending real money on satirical “mooney,” leading one commenter to note: “What fascinates me is the fact that the more money you make from this, the more depressed you are going to feel. I like that, I think it’s funny.”16 But Cow Clicker is not just a game—it satirizes a much deeper form of evolving cultural relationship, one that has blossomed with the age of smartphones and ubiquitous computing into a new kind of systemic colonization of the margins. Like FarmVille, Cow Clicker calls attention to a series of cultural transactions that blur the distinctions between temporal, cultural, and financial units of value, creating systems of algorithmic arbitrage that extract attention and revenue from spaces of “play.” As media scholar McKenzie Wark polemically describes the core arrangement on Facebook: “The power of the vectoral class [in this case, Zynga and Facebook] retreats from direct ownership of the cultural product but consolidates around the control of the vector. We get all the culture; they get all the revenue.”17 We get to play FarmVille, creating our own distinctive virtual homesteads, but end up paying for the privilege with time, social status (by sharing our networks with Zynga), and often money.

Cow Clicker is a critique of social games, but it also reveals how algorithms are restructuring rules in all sorts of professional and social arenas. Facebook, LinkedIn, and related platforms distill social interactions into a set of explicitly structured games, and the scores are tallied by algorithmic architectures tracking behaviors, interactions, and other forms of feedback. Social influence, professional networking, and friendship networks all predate the Internet, but the meaning of these connections, the ways that they count and are made legible, now get aggressively shaped and defined by algorithmic platforms. Facebook may not explicitly define what a “friend” is, but it will tell you how many you have and suggest new friends for you, creating a powerful, sometimes even addictive implicit definition through its hugely successful platform.

As a culture machine, Bogost’s creation illustrates the conflicting notions of labor and value in the contemporary marketplace of attention. For Zynga and many other algorithmic entertainment companies, computation is the means for converting human attention into income. For the players of these games, the rewards are to engage in a kind of digital grooming, tending an imaginary object of care (a farm, a digital pet, etc.) and engaging with other players in a community of practice. Users construct their own narratives within the constraints of algorithmic enframing even as they click through the Skinner boxes set up for them. The algorithms structure and track these actions, gathering them like drops of rain in a catchment to be resold as a bulk data commodity. Meanwhile, the players generally perceive only a fragment of this larger market situation, often donating not just their attention (to view ads) and their social graph (to deepen their profiles with data brokers and to expand the algorithm’s reach), but also their cash, making in-game purchases to enhance their playing experience. For many of us—roughly 1.5 billion people accessed Facebook at least once a month in 2015, out of 3.2 billion Internet users worldwide—some version of these transactions constitutes a major source of “fun.”18

Work and Play

Perhaps the most compelling element of Cow Clicker as a work of conceptual art is its inversion of the concept of fun. Bogost designed the game to be deliberately awkward and tedious, centering the action on an almost entirely static object, the cow, that can be clicked (to no direct effect) only once every six hours—a kind of ascetic or penitent ritual of the digital age. This is an abstraction of fun that deliberately seeks to eliminate any real joy, turning the “game” into an especially stark Skinner box of shallow action and even shallower rewards. But Cow Clicker merely accentuates a new kind of ludic labor that has become increasingly prevalent. The gaming term “grinding” describes the performance of repetitive actions to accumulate resources or gain powers within a game: repeatedly gathering the same item or completing the same minor challenge in a game like World of Warcraft, for example. The grinding activities are typically uninteresting in themselves, but players engage in them in order to unlock the ability to do more interesting things later. Boring things we do now in order to do fun things later … this is one definition of work, one that has increasingly penetrated the space of entertainment and troubled the separation of work and play.19
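The grinding pattern described above reduces to a loop: repeat one dull unit of work until a threshold unlocks something more interesting. A minimal sketch, with the threshold and yields invented for illustration rather than drawn from any actual game:

```python
UNLOCK_THRESHOLD = 50  # e.g., pelts needed before the next quest opens

def grind(gather_once, threshold=UNLOCK_THRESHOLD):
    """Perform the same minor task until enough resources accumulate."""
    resources = 0
    actions = 0
    while resources < threshold:
        resources += gather_once()  # the boring, repeated unit of work
        actions += 1
    return actions  # the "work" spent buying future fun

# Each kill of the same low-level monster yields one pelt:
print(grind(lambda: 1))  # 50 repetitions of an identical action
```

The loop has no content of its own; its only output is deferred access, which is what makes grinding structurally indistinguishable from work.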

The boundaries between games and normal life are blurring: players hold weddings and funerals in virtual worlds; some hiring executives consider gaming team management as a form of leadership experience; companies employ points, badges, and other gamification methods to encourage particular kinds of employee behavior.20 Part of the shift is demographic: games have gradually evolved from an adolescent pastime to an activity that many adults unabashedly acknowledge as a hobby or obsession. According to an industry report, by 2014 the average gamer was thirty-one years old.21 The Cow Clicker player who argues “we might as well play a stupid game that has a point” neatly illustrates what game researcher and economist Edward Castronova describes as the “membrane” of fantasy surrounding synthetic worlds, the “almost magic circle” that seems to protect the space of play from regular activity but never quite succeeds.22 This is play in the gap between computation and culture, a space inflected by competing political, cultural, and computational metaphors: the ever-popular World of Warcraft can be read at once in the frames of epidemiology, race studies, organizational management, economics, and, of course, as a fantasy narrative. As media theorist Alexander Galloway puts it in Gaming: Essays on Algorithmic Culture, “video games render social realities into playable form.”23 Here the magic of computation taps the deep roots of socially constructed spaces of play, like the carnival or the theater.

Entering this permeable membrane of fantasy, we gain access to a space where cross-pollinating rules, beliefs, and values provide their own opportunities for cultural arbitrage. World of Warcraft, for example, contains multitudes, as anthropologist Bonnie Nardi reports: “Christian guilds, gay guilds, location-based guilds, family guilds, military guilds, guilds of coworkers, and guilds of professional colleagues,” as well as special servers set aside for players who prefer a more historical experience, communicating in an “ersatz Ye Olde English patois.”24 At a certain stage of critical mass and collective engagement, the magic of this incomplete circle becomes more alchemical, using the aporias between real and virtual spaces to opportunistically convert certain forms of value into others. One example of this alchemy is what Castronova terms “social validation,” the process by which a sought-after item within the fantasy world of a game acquires real value simply because enough people desire it.25 When enough players desire a particular weapon or artifact that can be transferred in a game, a market emerges on sites like eBay for trading that item in “real” currency.26 The worlds of work and play collide, creating new forms of arbitrage between computation and culture: so-called “gold farmers” in World of Warcraft and similar games, for example, “play” in sweatshop conditions to acquire virtual goods that can be sold for real money.

The alchemical arbitrage of different values can pierce cultural space much more deeply than these straightforward transactions, however. The seemingly arbitrary rules of virtual games can have a deep impact on the lives touched by that alchemy, rewriting the practices of everyday life. The Internet is filled with stories of addiction: game aficionados who lost their jobs, their marriages, their savings, and even their lives to the call of synthetic worlds like World of Warcraft or FarmVille. These instances of destructive addiction and dependence on algorithmic entertainment are relatively rare, but they demonstrate the power of algorithmic systems to reorder human lives, to evacuate them of traditional forms of meaning and belonging. Faced with appealing computational systems, especially games and those that take on the Skinner box trappings of games, we often feel the compulsion to engage. Sometimes there is little choice involved, as algorithmic systems take on work as well as play via customer service and frontline management roles for telephone helpdesks, workforce scheduling systems, and many other applications.

In all of these ways we see humans grappling with truly algorithmic spaces. These systems are ordered by computationally structured rules that are then manipulated (and at times hacked or contravened, like Hiro wielding his katana in the Metaverse) by human intention and cultural reframing. In The Language of New Media, Lev Manovich describes a player learning an algorithm through play as a kind of “transcoding” of computation into human behavior: “the projection of the ontology of a computer onto culture itself.”27 But this experience moves beyond play—it is also an act of investigation, of interpretation, of reading. Galloway explores this relationship as a kind of hermeneutics, a model for interpreting knowledge, in Gaming. Galloway’s book turns on the claim that “to interpret a game means to interpret its algorithm,” thus linking the essential meaning of a game to its status as a computational culture machine.28 The player coming to grips with the rules of Grand Theft Auto V is interpreting the game’s algorithm as much as the critic writing about it—they both perform a hermeneutic act that is both work and play, and part of that labor is the effort required to span the gap between computational and cultural systems of meaning.

Galloway frames this tension between work and play in the context of “informatic control,” which players either role play in games like Civilization (managing the lives of millions of digital citizens) or submit to in games like Cow Clicker (waiting for that six-hour counter to tick down).29 This tension reaches back to Norbert Wiener and cybernetics: over the course of his career, Wiener grew increasingly concerned with the consequences of cybernetics in implementation, particularly around automation and labor.30 The cybernetic ideal of the feedback loop and the organism as an informational entity could also be applied to Manichean systems that manipulate human participants for unsavory or merely dishonest ends. The science fiction author Douglas Adams took great pleasure in a text adventure game based on The Hitchhiker’s Guide to the Galaxy because it had moved beyond “user friendly” into “user insulting” and “user mendacious.”31 Players found it delightful.

As Galloway and others have argued, the real critique of gamification rises from the logical extension of this colonial march, as gamification comes to define not just social interaction but deep structures of labor and society. One of the most compelling aspects of games is precisely the seduction of algorithmically ordered universes—spaces where our apophenia can be deeply indulged, where every event and process operates according to a rule set. These universes are aesthetically neat and tidy, with rules and conditions that, we believe, can be learned and ultimately mastered. The aesthetic of computational order echoes Bogost’s warning about the cathedral; its appeal for human engagement is rapidly expanding from play to work. Increasingly, startups are bringing this logic to the real world, creating game-like experiences for services like taxis (e.g., Uber and Lyft), household chores (Handy, HomeJoy, Mopp) and even office communications (Slack).

These companies operate in what design entrepreneur Scott Belsky calls the “interface layer,” using appealing design to clarify and rationalize messy aspects of cultural life into simple, dependable choices.32

The Interface Economy

If Zynga and its cohort of game-makers have found ways to extract labor value from entertainment, the new wave of interface layer companies is reframing labor as a kind of entertainment, adopting the optimistic framing of the “sharing economy.” Their rhetoric relies on the notion of technological collaboration and just-in-time delivery: taking advantage of unused resources like empty seats in cars, unused rooms in houses, and so forth. But all of these interactions are grounded in the mediating computational layer that manages ad hoc logistics, matches buyers and sellers of services, and structures access to platforms through carefully constructed interfaces. Indeed, as we’ll see below, that last factor is so important that it makes much more sense to call this the “interface economy,” where traditional social and commercial interactions are increasingly conducted through apps and screens that depend on sophisticated, tightly designed forms of abstraction and simplification.

The interface economy of the 2010s follows logically from the first wave of technology companies to gain dominance in the 1990s and 2000s. A principal factor in the rise of early giants like Amazon, Netflix, and Google was their ability to adapt algorithmic arbitrage to established capitalistic spaces. “Disruptive” technologies upended the way we shop for books, rent movies, and search for information, shuttering thousands of brick-and-mortar stores as these services moved online. In the past few years, the incubators and venture capitalists of Silicon Valley have turned their attention to new areas ready for algorithmic reinvention that are more distant from the traditional technology sector. The triumph of gamification, ubiquitous computing, and remote sensing (in other words, the quantification of everything) has led to a slew of new businesses that add an algorithmic layer over previously stable cultural spaces. Companies like TaskRabbit, Uber, and Airbnb are adapting algorithmic logic to find new efficiencies in lodging, transportation, and personal services, inserting a computational layer of abstraction between consumers and their traditional pathways to services like taxis, hotels, and personal assistants.

These companies take the ethos of games like FarmVille and impose their “almost-magic circle” on what was previously considered to be serious business. Uber, for example, presents a simple application interface for its drivers that is deeply reminiscent of open-space driving games like the Grand Theft Auto series (figure 4.2). The company’s opacity about pricing and the percentage of revenue shared with drivers makes it even more like an arbitrary video game where points are handed out according to an algorithm we players only partially understand. The entire platform is designed to abstract away the regulatory and biopolitical aspects of hired drivers. Employees become contractors and the established overhead of cab fleets, dispatchers, garages, and maintenance magically disappears. All the socioeconomic infrastructure gets swept away behind the simple software interfaces that connect riders with drivers, and a legal interface that abstracts risk away into generalized blanket insurance policies covering every driver and passenger. Perhaps most appealing for many riders, the awkwardness of payment and tipping is also abstracted away. Once you hail a car using the company’s app, payment becomes entirely a background activity, with charges applied when the rider exits the vehicle at her destination. For most Uber ride types, tipping is not possible within the system at all; passengers must use cash or some third-party payment system, such as the ubiquitous mobile payment service Square.33
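The pricing opacity can be made concrete with a deliberately naive sketch. Nothing below reflects Uber’s actual formula or commission rate; every number is invented. The point is structural: the rider sees one figure, the driver another, and the arithmetic between them belongs to the platform:

```python
def rider_fare(base, per_mile, per_min, miles, minutes, surge=1.0):
    """An illustrative (not Uber's actual) fare formula.
    The rider sees only the final number, not the terms."""
    return surge * (base + per_mile * miles + per_min * minutes)

def driver_payout(fare, commission=0.25):
    """The platform's cut. The real percentage has varied over time and
    is opaque to drivers, which is precisely the point of this sketch."""
    return fare * (1 - commission)

fare = rider_fare(base=2.00, per_mile=1.50, per_min=0.25, miles=4, minutes=12)
print(round(fare, 2))                 # 11.0: what the rider is charged
print(round(driver_payout(fare), 2))  # 8.25: what the driver receives
```

Two simple functions, but only the platform holds both of them; each “player” interacts with a partial view of the game.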


Figure 4.2 The cartoon maps Uber provides for its drivers and passengers via the Google Play Store.


Figure 4.3 Uber’s homepage offers a message of simultaneous elitism and equality (image from July 2014).

The layer of abstraction Uber imposes over the payment process is telling. In fact, Uber’s customers, the players in its game, are both the drivers and the riders, since the company collects its commission based on bringing these two groups together. We see the arrangement in advertisements like the one in figure 4.3, where the sleek black and white photography romanticizes the luxury of emerging from a hired car even as it puts the driver and the rider, both wearing elegant clothes, on the same plane of wealth and accomplishment. “Owning the moment” is a fitting phrase for a company that, like all vendors of gamification, depends on the abstraction and regulation of time. If FarmVille forces its players to tend their crops at arbitrary times of the day and night, Uber promises liberation from the temporal hegemony of other systems. As a rider, there is no more anxiety or uncertainty about locating or waiting for a cab; as a driver, no more struggling with a dispatch supervisor about where or when one is allowed to drive. The interface layer here provides certain forms of certainty in terms of immediate time and distance—neatly pictured by the very literal almost-magic circle of the cartoon map views of available riders and drivers. But it also takes many things away: a complete understanding of the financial model by which Uber decides how much a fare will cost, the customer’s agency to reward exceptional service, and engagement with established regulatory schemes for taxis and livery services (which has landed Uber in legal battles around the world).

The romance of Uber and other interface layer companies depends on the idealization of the individual as a one-person corporation—a nation of independent CEOs working where, when, and how they please. Uber is merely one prominent example of the broader movement to build this interface layer into many different cultural spaces, from hiring contractors for home repair to facilitating private party car sales. All of these markets were, of course, already technological, but they were largely inaccessible to direct algorithmic management until the advent of smartphones and ubiquitous sensors enabling the close monitoring of human and financial resources. In terms of labor and surplus value, what the algorithms of Uber, Airbnb, and their cohort capitalize on is the slack infrastructure of modern consumption: empty cars, unused bedrooms, and under-employed people. According to UCLA urban planning researcher Donald Shoup, the average car is parked 95 percent of the time; why not exploit that latent resource?34

Viewed more broadly, the interface layer is a colonization of the quiet backwaters of contemporary capitalism—the remobilization of goods and spaces after they have already been consumed or deployed. Ultimately these systems engage in precisely the same kinds of arbitrage of goods, human attention, and time that gamification does, motivating us to create new economic efficiencies and extract revenue from them. The cofounder of another startup, named Yerdle (dedicated to recycling unwanted consumer goods through a kind of algorithmic swap-bazaar), put it just right: “We want to make people make things better.”35 This altruistic ambition to motivate positive efficiencies through capitalism resonates with Google chief Eric Schmidt’s suggestion that customers “want Google to tell them what they should be doing next.” This is the central labor question at the implementation fault line between algorithmic gamification and the marketplace: who is motivating these changes, and what exactly are we “sharing” in the sharing economy?

On the most obvious level, this new economy is about more efficient access to privately owned or atomized goods and services. The rhetoric of companies like Yerdle and Airbnb leans on the mobilization of material resources: cars, apartments, and household objects that are sitting around unused. Share your personal goods to monetize that slack and reduce the overhead of ownership, turning an empty vehicle or room into a profit center and a community resource. At a deeper level, what the interface entrepreneurs are asking is for us to share (and monetize) our time: the founders of Lyft are motivated not just by profit but by the loneliness of the average commuter stuck in his car.36 These companies encourage us to dedicate our hours to others, often in appeals that blend the allure of wages for labor with something more socially complex. Where Uber sells a kind of elite independence to both its drivers and riders (figure 4.3), Lyft is selling a different and more intimate kind of social contact (figure 4.4). The company only recently abandoned its directive that drivers festoon their cars with quirky pink moustaches, and many drivers still assume passengers will sit companionably in the front seat, rather than the rear.


Figure 4.4 Lyft advertising takes a very different tack from Uber.

For companies like Lyft and more deliberately intimate interface layer systems like the dating app Tinder, the “sharing economy” is not about money at all, but about that experience of companionship. If these business models are founded on exploiting certain kinds of alienated labor and attention, their customer experience promises relief from that alienation. Even as Uber and Lyft collect their invisible commissions on unseen transactions, the affective experience is one of a specially branded community. But that community is crucially, essentially mediated by the algorithm. Drivers and riders are rated and vetted through computation; the interface layer bringing them together is also the central arbiter of trust. Little wonder that the most serious threats to these companies are not financial scandals but attacks on that trust, as when Uber was revealed to be tracking the movements of journalists, or when its drivers were arrested for sexually assaulting passengers.37 The sharing economy ultimately depends on an atomized form of intimacy, a series of fleeting, close encounters with strangers that are managed and underwritten (in emotional, financial, and liability terms) by algorithmic culture machines.

While this intimacy is necessary for the sharing economy to function, it is not the primary commodity these systems are selling. The real sell is the interface itself: the experience of computationally mediated culture and its underlying algorithmic simplification and abstraction. Having a stranger you have never met arrive to clean your bathroom or sit next to you while they drive you home is intimate but also awkward. Much more appealing is the aesthetic of the interface that delivered that service on demand. The algorithm performs with a calibrated affect and delivers an encapsulated experience, one typically deployed to make these awkward transactions more palatable for consumers who find stark class divisions uncomfortable. Many interface companies literally present themselves as “wrappers” around existing services, bundling, organizing, and demystifying them for a painless user experience. But the wrapper is experiential as well as logistical and commercial, recontextualizing the original service or product with the values of the sharing economy. The intimacy we feel is carefully constrained by the underlying interface. Far easier to give that maid one star through the anonymity and distance of the smartphone than to fire someone you hired yourself. These systems can also reinforce class divides by rewarding technoliteracy, making the emancipation of the digital economy disproportionately available to users with the education, money, and time to master those games.

When the specter of class does appear, the interface economy suggests that we are all users together—all denizens of the same ad hoc economic zone. Companies like Lyft convince us that we can be either the driver or the rider, the contractor or the consumer of services, and that these experiences are being provided by people fundamentally “like us.” The interface layer acts as a baffle or shock absorber, diffusing common socioeconomic barriers like class, gender, and race through the magic of five-star ratings and computationally validated trust. This form of arbitrage is a sophisticated triangulation of intimacies, asking participants to trade in or renegotiate certain facets of identity in favor of algorithmically crafted substitutes. People hailing cabs on the streets of New York have long felt judged by race, among other characteristics, and some African Americans specifically use Uber because racial context is muted or removed from the ride-hailing equation.38 Uber becomes a new filter in the system of social context, identity politics, regulatory regimes, and antidiscrimination laws, inflecting these various ideological conflicts through the lens of its interface. As one blogger in Washington, DC, noted, “The Uber experience is just so much easier for African-Americans. There’s no fighting or conversation. When I need a car, it comes. It takes me to my destination. It’s amazing that I have to pay a premium for that experience, but it’s worth it.”39

In short, the fundamental commodity of the interface economy is access to these systems—not the services they provide so much as the computational constructs themselves, which in turn map or simulate a growing range of cultural and social spaces. In the context of labor politics, the interface economy atomizes commercial transactions, cutting them out of spatial and social context, and converts the cultural geography of the city into one or another abstracted space, a cartoon map on a small screen that nevertheless obscures the real city it represents. Increasingly we outsource the determination of bias, of commercial viability, indeed of context itself, to the algorithm, asking the interface layer to do not just the logistical but the ethical and cultural work for us. All of this “sharing” does offer new forms of intimacy and connection, eliminating or reducing certain traditional forms of bias, but it demands a steep investment up front. In order to become economic actors in the algorithmic interface, we have to buy into the computational trust systems that calculate value through equations of abstraction and arbitrage that are largely invisible to us. Behind the facade of the facile, friendly computational interface, there is a world of labor translation, remediation, and exploitation at work.

Algorithmic Labor: Cloud Warehouses

While the interface economy supports many human-to-human interactions where consumers and contractors operate on relatively equal footing, there is another sphere of algorithmic arbitrage where human labor is almost entirely obscured behind the veil of computation. The disruptive innovations that upended traditional blue-collar industries have changed the fundamental assumptions of many corporations around their business operations. All of the algorithmic businesses we have discussed so far—companies like Zynga, Uber, Google, and Apple—depend on a massive infrastructure for distributed, global computation: millions of servers running around the clock in vast warehouses, processing and managing tremendous data stores. This “cloud” of data is another crucial link in the layers of abstraction between culture machines and algorithmic process, enabling all of the magic tricks we have come to depend on, like the speedy responses of Siri or the immediacy of search results from Google. Christian Sandvig has pointed out the fascinating history of the cloud metaphor, which began life as an icon computer engineers in the 1970s would use in system flowcharts: “The term comes from the symbology of the network diagram, where a cloud symbol indicates a part of the diagram whose internal details are irrelevant.”40 By definition, then, the cloud was an abstraction, a way to bracket off less interesting aspects of a system.

Today we use the cloud metaphor to describe the management of atoms as well as bits, promising instant access to information and goods anywhere, at any time. The messy logistics of physical reality require companies like Amazon and Apple to manage or transact with vast warehouses, factories, and transportation networks. These facilities are staffed by millions of workers who are beholden to an algorithmic logic of the workplace as they service the computational cloud and the frenetic interchange of bits and atoms. As environmental scholar Allison Carruth put it, the vast server farms behind the scenes are “an energy-intensive and massively industrial infrastructure” that hides major ecological costs behind the hazy, insubstantial aesthetic of the cloud metaphor.41 Such places are the factories of the interface layer, the depots and arbitrage points where reserves of human, financial, informational, and electrical energy are translated and routed to their various targets.

There are many kinds of cloud facilities, from pure data archives to very literal warehouses full of stuff. The most tangible of these, the closest analog to familiar forms of labor and capital, are the cloud warehouses that supply our needs for overnight or same-day shipping on millions of consumer products. Companies like Amazon contract with third-party logistics outfits to service these facilities and hire their workers, who have job titles like “picker.” Pickers are human computers doing work just a little too complex for robots: directed by tablet computers to locate items among thousands of bins, they translate standing reserves of material into algorithmic processes. Every step and second is tracked, and their performance and continued employment depend on meeting specific, tightly constrained targets.42 The warehouses are a physical expression of algorithmic logic: a particular bin might hold a random assortment of objects, sitting in an aisle with similarly haphazard organization. The picker might hold an object for a few seconds before putting it on a conveyor belt to be packaged by another human elsewhere in the facility. Every aspect of these temporary employees’ professional lives is dictated by algorithmic arbitrage: the numbers of workers hired and the shifts assigned (often determined only hours or minutes before work is to start); the productivity goals of each worker, calculated in seconds, steps, and units shipped; the policies for lateness, conversation, and even workplace temperature (during a summer heat wave, it was deemed better, or more efficient, to line up ambulances at one Amazon warehouse than to install air-conditioning).43

The conditions can be deeply inhumane and logical only to machines. The third-party logistics company that hired undercover reporter Mac McClelland had a zero-tolerance policy for lateness in the employee’s first week. One employee whose wife delivered a baby in his first week was summarily fired for missing a day’s work, requiring him to repeat the hiring process and rejoin the company a few weeks later. Beyond the emotional duress and financial hardship of such policies, they make little sense from a human operational management perspective (in the preceding example, this would include duplicating costly hiring paperwork, drug tests, and background checks). But the superstructure of algorithmic culture makes sense of this senselessness by rendering it invisible. The cloud seems physically intangible even as it serves to cover or obscure these layers of abstraction and human labor behind the spectacle of computation, showering us with messages about the seductive ideal of frictionless e-commerce with unbeatable prices and fast free shipping.

For McClelland and her coworkers, the space of algorithmic abstraction has entirely subsumed reality—the game that is the picker’s job still comes with arbitrary, manipulative rules, but the consequences of failure are much higher. The “almost-magic circle” of the warehouse reverses the relationship between real and virtual that we see in the gamification of entertainment and media. Unlike the magic of, say, Uber, where a car from a cartoon map materializes in real life at the curb, cloud workers remain trapped in the cartoonish world of the algorithm, escaping only at the end of their unpredictable shifts. These workers maintain the infrastructure supporting a capitalist, science fiction fantasy: the frictionless commerce of one-click purchasing, limitless choice, and near-instant gratification. But for the workers servicing these warehouses, the architecture of step-counting, numerical targets, and other game mechanics embodies the realities of that idealized, game-like narrative across real space and real lives. The cloud functions as an opaque membrane deflecting public attention, ethical inquiries, and legal liability, so we can all continue playing the online shopping game with no obligation, and few opportunities, to empathize or interact with a human being.

For the workers inside this circle there’s no magic to speak of, unless you count the Orwellian power of sensors that relentlessly measure your behavior, imposing an externalized, numerical index of worth like a high score or a health bar floating over each worker’s head, but often visible only to management. In terms of their power relationship to global algorithmic systems, the lives of American cloud workers are not that different from the well-known miseries of Apple contractor Foxconn’s massive facilities in China.44 There workers, some children as young as fourteen, have persistently suffered under incredibly long hours, dangerous working conditions, and other serious problems.45 Terry Gou, the company’s chairman, put it bluntly in 2012: “[We have] a workforce of over one million worldwide and as human beings are also animals, to manage one million animals gives me a headache.”46 So far, corporations have been unable to escape what William Gibson once called meatspace, the squishy realm of biological materiality. Biopower fuels many crucial aspects of the interface economy, though advances in robotics and artificial intelligence may one day eliminate the jobs most of these “animals” hold. In addition to spending $775 million to acquire the factory automation firm Kiva Systems, Amazon is already testing out robot arms and other automated systems that could eliminate the human job of picker entirely.47

Like the intense logistical requirements of Amazon’s same-day delivery services, Foxconn’s conditions are underwritten by Apple’s relationship with its consumers. We expect the company to rapidly develop, prototype, and then mass-produce millions of devices over incredibly short spans of time—sometimes mere months—while maintaining near-total secrecy until the new product can be dramatically unveiled. We expect global release dates, ever-increasing performance and innovation, and instant gratification from the design wizards of Cupertino. These things are all possible, and highly profitable, but only through the logistical and human infrastructure of algorithmically mediated factory floors, worker dormitories, and predatory practices.

Cloud warehouses and factories transpose computational logic to physical spaces; spanning the gap between the abstract and the implemented entails making them terrible places for humans to work. Algorithmic arbitrage extracts immediacy, relevance, and rapid innovation from the raw materials of untold millions of poorly compensated person-hours of labor. These systems rely on a sophisticated back end to promote the romance of algorithmic culture through the magic of the cloud, invisibly organizing millions of humans according to computational logic and executing orders to support the impression of seamless, effortless efficiency. And the systems are efficient, but not primarily in the sense presented to us as consumers. Amazon, Apple, and other companies continue to find algorithmic solutions to deliver goods and services to us ever faster, but their real work, the winning cultural arbitrage, is in doing so at a profit.

One can almost imagine putting the magical pipeline in reverse. Instead of a system that delivers magnificent technological wonders to us on demand, think about what it whisks away for us. It suctions up all the awkwardness and hassle of the real world, dumping it like so much toxic waste onto cloud workers who must organize their lives around abstract, algorithmic structures of employment and value. It vacuums all the wasted time of standing in line, waiting on hold, inching cars around parking lots at the mall … and entombs those seconds, minutes, and years in the cold storage of frigidly air-conditioned server farms, endlessly recirculated by the whirring fans and waiting CPU cycles of the cloud.

Artificial Artificial Intelligence

The final telos of algorithmic labor is the work that abstracts physical and cultural infrastructure away altogether. The factories, server farms, and warehouses of the cloud still require massive physical plants, but some projects strive to push the logic of the algorithmic economy a step further. If the interface entrepreneurs seek to unmoor highly specific businesses from their original contexts, like taxis and house cleaning services, Amazon’s Mechanical Turk goes much further in its attempt to create a new general industrial base in the cloud—an assemblage of workers for a huge range of tasks, all working on their own, managed by algorithm. Amazon rather ironically bills this as “artificial artificial intelligence,” which neatly summarizes its reinvention of the human–machine relationship. The system creates a kind of interface to the human mind—an industrial nam-shub—for quickly harnessing the brain power of thousands of people to run a specific program. The unit operations here are highly specific “human intelligence tasks” (HITs) that often take only seconds to complete: “call a number, note how they answer,” “extract purchased items from a shopping cart receipt,” or “rotate 3D model to match image,” for example.48 Instead of assigning simple repetitive tasks to a calculating machine, Mechanical Turk assigns them to individual humans who typically make a few cents on each HIT.49 They are, in a very literal sense, humans functioning as mere technical extensions of a computational culture machine. Within the system interface these people are anonymized and identified only by an alphanumeric code (figure 4.5), their identities defined by their performance records and a set of “qualifications” that elevate some workers to the title of “master worker” in tasks like photo moderation and categorization.50
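The unit-operation logic described above can be made concrete in a few lines of code. The sketch below is purely illustrative—the class names, worker ID, and reward figures are invented for this example and do not reflect Amazon’s actual Mechanical Turk API—but it captures the structure of the system: a batch of identical micro-tasks, an anonymized worker known only by an alphanumeric code, and a running tally of piece-rate payments.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative model of the Mechanical Turk unit operation: a requester posts
# a batch of identical "human intelligence tasks" (HITs), an anonymous worker
# completes each one for a few cents, and the platform records that worker
# only as an ID with a performance history. Names here are hypothetical.

@dataclass
class HIT:
    description: str                  # e.g., "describe this image in words"
    reward_cents: int                 # piece-rate payment per completed task
    result: Optional[str] = None
    worker_id: Optional[str] = None

@dataclass
class Worker:
    worker_id: str                    # anonymized alphanumeric code
    completed: int = 0
    earned_cents: int = 0

def assign_and_complete(hit: HIT, worker: Worker, answer: str) -> None:
    """Route one HIT to one human processor and record the micropayment."""
    hit.worker_id = worker.worker_id
    hit.result = answer
    worker.completed += 1
    worker.earned_cents += hit.reward_cents

# Usage: three identical image-description HITs at five cents apiece.
batch = [HIT("describe this image", reward_cents=5) for _ in range(3)]
w = Worker("A2X9QK7R")
for h in batch:
    assign_and_complete(h, w, "a silver ring, hard to tell from the photo")
print(w.completed, w.earned_cents)  # prints: 3 15
```

The design point worth noticing is that the worker appears in the data model only as an ID and two counters: exactly the reduction of identity to performance record that the chapter describes.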


Figure 4.5 The Amazon Mechanical Turk interface for managing workers.

The system draws its name from another seminal moment in the mythos of artificial intelligence, the “Turk” automaton that dazzled the courts of central Europe at its unveiling in 1770 by Wolfgang von Kempelen. It was a hoax controlled by a carefully concealed, all-too-human chess master within its cabinet of gears and display panels, but it was a powerful spectacle: a mechanical thinking machine that could defeat almost any opponent. The Turk was presented as a working machine designed for play, a calculating engine whose promise was only fulfilled two centuries later when IBM’s Deep Blue defeated chess world champion Garry Kasparov in 1997. The history of the original Turk illuminates the cultural force of Amazon’s name for this distributed human computation system and the rhetorical power of artificial artificial intelligence. In both 1770 and 1997, algorithmic calculations were harnessed in a very human way for a human problem, doing a form of work that is meaningless except in the human terms of a game or contest of wits. In both cases attendants wheeled out devices that were literal black boxes with highly constrained inputs and outputs, and their performance in highly publicized challenge matches was as much about the cultural figure of computation as it was about computing chess moves. These were “thinking machines” performing one of humanity’s most rigorous and widely established forms (or games) of analytical thought. Machines thinking—that was the spectacle, fraudulent in the first instance, vehemently contested (by Kasparov, at least) and later widely accepted in the second.


Figure 4.6 An engraving of the Turk from Karl Gottlieb von Windisch’s 1784 book Inanimate Reason.

The Turk’s spring-wound embrace of mechanical thought was not a complete novelty, though its intelligence was: automata had pervaded the cathedrals and religious festivals of Europe for at least two hundred years, according to historian Jessica Riskin.51 Before the Reformation and its stern separation of divinity and human artifact, congregations enjoyed the spectacle of machines that were “mechanical and divine.”52 Their performances were knowing ones, inspiring as much humor as devotion, but nevertheless the magic was there in works such as the Rood of Grace at Boxley Abbey, which greeted pilgrims annually in the sixteenth century with a Christ who moved and rolled his eyes. Human craftsmen and performers were essential to these performances, simulating responsiveness and awareness to effectively engage the crowds. Something of this knowing spectacle remained in Kasparov’s matches with Deep Blue, including his persistent efforts to deploy openings and play styles that would throw off IBM’s algorithms.53 The human engineer was never entirely hidden behind the mechanism, as IBM employees tweaked the system between every game. Perhaps the best example of this relationship was a highly scrutinized move near the end of game one, the truth of which was only revealed in 2012 when IBM researcher Murray Campbell was interviewed by the popular statistician Nate Silver. The move in question had sent Kasparov “into a tizzy” as it seemed to reflect the ambiguity and refinement of a human-level intelligence, and many have suggested it threw off the grandmaster’s concentration for the second game, which he proceeded to lose.54 In fact, as Campbell revealed, the move had been a bug, one the engineers corrected after the first match.55 Kasparov himself had made magic out of the algorithm, inventing a sophisticated cultural explanation for what was in the end a random computational artifact.

The original Turk and the great match with Kasparov projected the Turing test onto a chessboard, asking the existential question of whether we would be able to recognize intelligence as a kind of style or aesthetic of play. Deep Blue did not merely beat Kasparov; it transformed the entire chess world. Now young grandmasters can train against superior AI opponents, and a new space of collaborative play has emerged in the implementation gap, where “centaurs,” teams of humans and machines, consistently beat both grandmasters and pure algorithmic intelligences.56 The most effective aesthetic is one of augmentation, complementing human intuition with computational depth. The analogy holds in other areas of implementation as well. Uber is not simply another way to get home; it is upending the regulatory and cultural space of hired transportation, changing the grammar and vocabulary we all use to read the street. The algorithmic reinvention of work is transforming the literal and imaginary geographies of culture, a theme we will return to at the end of this volume.

The transition from Deep Blue’s victory in 1997 to Amazon’s Mechanical Turk marks the reversal of the computational instrument: now human brains are the black box, strung together into an ad hoc network of wetware servers for odd jobs. Mechanical Turk both validates the distinctive intelligence of the human mind (which can still outperform algorithmic intelligence on a wide variety of contextual challenges) and subjects it to algorithmic logic, creating an infrastructure of hyperefficient micropayments for context-free work. Dedicated “turkers” earn about $5 an hour, and many of them reside in the United States (with India making up the second largest contingent).57 But it’s disingenuous even to speak of workers in the typical meaning of that term, or an hourly wage, since the system’s atomization offers no stability, regularity, or persistence of particular forms of labor, leading to huge variance in the amount of time it takes to complete a particular task. Just as many computer servers in the cloud sit idle (in 2012, McKinsey & Company put the figure at 90 percent, measured by power consumption), most of these workers are not engaged most of the time.58 Mechanical Turk melds the logic of interface companies that seek to activate a standing reserve, making our lives more economically efficient by commercializing underutilized resources, with the algorithmic logic of a server farm.

That algorithmic logic of distributed, parallel computing means that the marketplace is not one for “workers” at all but for carefully delimited slices of human attention and ability: nobody hires a “Mechanical Turk”; rather, they purchase the performance of some number of repetitive tasks (e.g., 10,000 HITs looking at a particular image and describing it in words). It is piece-rate work, leading some researchers to argue that the system makes the computer the sewing machine of the twenty-first century.59 The analogy is misleading, however: a sewing machine with a trained operator can be hired out to the highest bidder, whereas the Mechanical Turk marketplace is designed to eliminate almost any kind of expertise or specialization among workers, and thereby any real bargaining power. More important, the personal computer is not the most crucial technology in the system: once again the interface—the Mechanical Turk platform and the human brains connected to it—is the essential piece. Cycles of human attention make up the core commodity of Mechanical Turk, with the users’ machines, the Amazon cloud, and the data processors who purchase these cycles together constituting a complete system of computation. Here the culture machine of Mechanical Turk and the limited role of computation within it are clearly visible: computer code assigns jobs and handles payments, but the real work is happening in biological processors.
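The arbitrage at work here is, at bottom, simple arithmetic. A brief sketch (with invented figures, not measured data) shows how a fixed piece rate combines with variable completion times to produce the unstable effective wages discussed above:

```python
# Back-of-the-envelope economics of piece-rate micro-task work.
# All numbers below are illustrative assumptions, not platform data.

def effective_hourly_wage(reward_cents: float, seconds_per_task: float) -> float:
    """Dollars per hour implied by a piece rate and an average completion time."""
    tasks_per_hour = 3600 / seconds_per_task
    return tasks_per_hour * reward_cents / 100

# A 5-cent image-description HIT completed in 36 seconds implies $5.00/hour;
# the same HIT at 90 seconds implies $2.00/hour. Variance in completion time
# translates directly into variance in effective wages, while the requester's
# cost per task stays fixed.
print(effective_hourly_wage(5, 36))  # prints: 5.0
print(effective_hourly_wage(5, 90))  # prints: 2.0

# The requester's side of the arbitrage: 10,000 such HITs cost a flat $500,
# regardless of how long each human processor takes.
batch_cost_dollars = 10_000 * 5 / 100
print(batch_cost_dollars)  # prints: 500.0
```

The asymmetry is the point: the buyer purchases a fixed quantity of completed tasks, while all the temporal risk sits with the anonymous worker.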

Just as the interface economy whisks away ambiguity, concealing its abstractions behind the facade of computation, Mechanical Turk is a kind of fine mesh for sifting through ambiguity. Here is the meditation of one blogger on the experience of being a turker:

[The typical jobs] are your transcription HITs, your decipher-this-horrible-handwriting HITs, your survey HITs. These are the tasks where translating human ambiguity into computer-style binary is the most frustrating, because you’re likely to arrive at the answer “I don’t know” and be forced to choose anyway. I remember a few angry-keyboard-mashing examples, like the time I was told to identify whether jewelry was gold or silver based on a single black-and-white picture. Impossible to tell! Or being asked if a person’s illegible scribbling looks more like “X” or “Y.” Neither, it looks like “Z!”60

Mechanical Turk is another system of arbitrage operating in the implementation gap. In fact, it quantifies and commoditizes that gap, turning it into a series of micro-tasks and judgments: incremental moments of abstraction and concretization. The grinding series of identical tasks it farms out can then be integrated, like individual frames in a full-motion film, into an illusion of continuous computation. What makes Mechanical Turk unusual is the way it puts the human back end of computation on display as a commercial service, applying the logic of the interface economy to the zone of implementation itself. As the quote above demonstrates, the humans who operate in that contested space must constantly negotiate between computational and cultural regimes of meaning.

Piecework Poetics

Across our various examples the interface layer is transforming not only the politics but the aesthetics of labor, imposing new, algorithmic contexts on central questions of identity, value, and success. The sea change that has already upended the lives of cab drivers and contract workers is also affecting doctors and lawyers. The smartphones so many of us carry are irrevocably blurring professional roles and boundaries for almost every participant in the global interface economy, redefining the affect of work in ways that we are only beginning to understand.

Mechanical Turk is a crucible for these transitions, an artificial market created specifically to commercialize the thesis that humans are important cogs in computational machines. The turkers at the heart of this system not only take on the challenge of endless micro-tasks managing ambiguity—they also take on the affective work of acting as a human element inside of a computational application. Conceptual artist Nick Thurston mined this seam of the implementation gap in Of the Subcontract, Or Principles of Poetic Right, a literary exploration of Mechanical Turk largely commissioned from turkers themselves. Of the Subcontract neatly inflects the platform’s central premise, asking its wetware servers to take on a task, poetic composition, where humans still hold a decided edge over algorithms.

The poems in the collection are powerful precisely because of their pseudo-automaton status. Each piece begins with a readout of the time spent on composition (typically a few minutes, timed to the second), the hourly rate of this particular worker, and an indication of which HIT or submission was printed in the book (e.g., the fourth of seven submissions, the first of two). The entire collection is prefaced by an introduction nominally authored by McKenzie Wark but actually ghostwritten by a Lahore-based freelancer, as the afterword reveals.

The collection is a particularly powerful aesthetic move because it simultaneously co-opts and questions the tyranny of computational time. Turkers—people competing for HITs at an average of $5 an hour—are probably not people who feel like they can afford to compose poetry. Using the Turk system to assign creativity as a micro-task performs a particular kind of affective absurdity.

This is artificial artificial literature. First, these humans are asked to emulate their humanity through the Amazon interface. Second, these are not poems authored through an individual impulse to create but as externally mandated piecework—a virtual factory for verse. The formatting of the printed page, with its attendant metadata about cost and composition time, emphasizes that double artificiality even as the poems themselves remain: no matter the conditions of their hyper-commoditized composition, they are works of writing by humans, conveying particular human ideas.

The book’s first section of poems, titled “Artificial Artificial Intelligence,” generates persistent flashes of the humanity lurking beneath or behind the system, like entry 0.24, “My Son”: “You are my chubby bubble cutie pie / I love you my son / For you are the only one.”61 Asking these turkers to write themselves into the interface of the system immediately brings other humans into view along with them. We catch glimpses of sons and daughters, missing parents and lost loves, all filtered through the anonymizing mechanism of Mechanical Turk and its Taylorist computations of labor and value. These are exactly the kinds of contextual links that the interface layer generally eliminates, creeping back in through the voice of the poet.

Many poems seem to respond to their prompts with the same flat, affectless tone as the Mechanical Turk system itself, offering up anodyne confections of cliché and truism, completing the task of composition in as little as twelve seconds. But for others a cautious sense of revelation, of emotional exposure, transforms the grim performance of poetry-by-the-cent into something beautiful. Beyond quietly asserting their humanity, these poets also persistently grapple with the question of identity in an algorithmic system, like entry 0.04, excerpted here:

You have put me in a box with a lock and a key.

I pick at the lock from time to time.

But it is not so simple to be free.

I am chained by the title you have put on me.

I so long to be me, to break loose.

But my fears keep me in that place.

That place where you forced me to be.

My greatest fear is of you not loving me.

Am I who I think I am?

Or am I who you say I am?62

In eight minutes and thirty seconds the author explores the uncertainty of identity in terms that seem to apply equally to a lover and to Amazon itself, describing the lock of imposed identity and title as a vise that thwarts not just movement but certainty itself.

The time stamp, the money stamp, and the anonymity of the authors all convey Of the Subcontract’s status as another kind of Turing test. These subcontracted authors are performing on at least two levels. On the first level, they are pulling off an impression of humanity under the constraints of a ticking clock and waiting HTML form field. But on the second, they are still performing as cogs within a computational system, repeating the same task multiple times and attempting to optimize their own processing efficiency to complete each poetic HIT as quickly as possible. And yet, beneath all these layers of abstracted, artificial production, moments of beauty emerge.

Of the Subcontract is a literary experiment uniting surrealism and authoritarianism—as if someone had collected a volume of poems from the denizens of Terry Gilliam’s darkly fantastical film Brazil. It shares roots with surrealist thinking and automatic writing, drawing, and other creative processes, with the sharp difference that this is not a playground for creative elites but a factory for the socioeconomically disadvantaged. The pleasure and responsibility of the creative experiment reside with Of the Subcontract’s real authors, Nick Thurston and his merry band, who commission poetry that begins as their little game and ends as work for anonymous others. This tension and discomfort are precisely what Thurston and his collaborators sought to achieve.

The resurgence of aesthetics between the cracks of such a pointedly cynical artistic experiment is presumably exactly what they were hoping for: a proof of the value of poetry written in the margins of a proof of its bankruptcy. Of the Subcontract accords aesthetics its traditional place at the intersection of other forms of value, allowing financial transactions, algorithmic calculations, and human computation to be transmuted into something with the official seal of artistic production. The political theater of crowdsourcing piecework poetry also becomes performance art of a different nature, a literary production that asserts the value not only of the critique but of the processual by-products generated by the critique. We end up admiring not just the artistic concept of the volume but the piecework poems themselves. In this way, Of the Subcontract performs an algorithmic critique of Mechanical Turk, relying on the system itself to process or run that critique. Our work as readers, then, is to examine the iterative process of commissioning and assembling these poems. The implementation, the ways that the work of these anonymous turkers was collected and contextualized, is an integral part of the whole poetic mechanism.

Experimental poet Darren Wershler strikes at the heart of this critical tangle in his afterword, which is worth quoting at length:

We have also read essays explaining that the Turk is in fact an elegant metaphor for the precarious condition of the worker in a globalized and networked milieu. And we have made a substantial amount of art that actually makes use of Amazon Mechanical Turk as a productive medium to demonstrate the same point, but in a way that is, you know, artier.

The point is not that the mechanism is empty, like some kind of neutral reproducer. The point is that it is a mechanism that already includes a spot for you—like the Law in Franz Kafka’s novel The Trial—whether that spot is in front of it as a player, inside it as the operator, behind it as the spectator being shown its misleading components, from afar as the critic describing and demystifying it by virtue of your criticism, or, increasingly, as the artist or writer (mis)using it in your project. The moment that you engage the setup as a problematic the machine springs into action.63

Wershler’s argument beautifully captures the reality of algorithms in the context of labor and cultural production more broadly: the system reconfigures not just the methods and outputs of production but the entire cultural frame, implicating all of us as collaborators. As Galloway puts it in The Interface Effect:

I dispute the ideological mystification that says that we are the free while the Chinese children are in chains, that our computers are a lifeline and their computers are a curse. This kind of obscenity must be thrown out. We are all gold farmers, and all the more paradoxical since most of us do it willingly and for no money at all.64

We are all “(mis)using” these systems in our lives, whether we consider that use cultural or aesthetic practice, private or professional communication, or even the critique of these systems. Hybrid labor between inextricably linked human and algorithmic culture machines is happening all around us, all the time. In the most banal, pervasive ways, we rely on algorithmic systems to perform and evaluate labor, from the search bar to Microsoft Word. In the final analysis, Of the Subcontract’s central narrative is not that of oppressed turkers articulating brief glimpses of their human condition, but of the place held for each of us in the computational gap.

Moral Machinery

Debates about the ethical impact of mechanization are anything but novel. The original chess-playing Turk, for example, inspired florid prose and rapt audiences during its long run of public appearances in the United States in the heyday of industrialization.65 The tenor of those debates is instructive today when we think of the interface layer and new algorithmic systems of behavior modification that are far less blunt than the brutal conditions of nineteenth-century mills and factories. Even then, many viewed the rise of automation as a force for ethical good, inspiring (or forcing) workers to conduct themselves with the same dedication as the machines they attended. This is what one British economist called “moral machinery” in 1835: a system of managerial interventions to enhance the industrial system’s natural tendencies toward order and productivity among human workers.66

As historian Stephen P. Rice argues, the spectacle of the Mechanical Turk modeled a new affect for its American audiences, performing a form of labor that blurred the line between machine and human, between algorithm and worker.

Launched into the scene of middle-class anxiety about worker self-control, the chess-player assumed the twin statuses of regulated machine and ideal mechanized worker. Viewers could locate in the chess-player the uniquely human traits of will and reason without having to remove those qualities too far from the mechanical traits of regularity and efficiency. Read as a regulated or “minded” machine, [the Turk] showed the new productive order in place.67

This double framing as human and machine closely echoes the interface of Amazon Mechanical Turk, where each HIT promises the taskmaster a response that is both mechanically reliable and reliably human. Of the Subcontract’s poems push that dualism to its limit, opening up a channel for poetic self-reflection from the very belly of the machine.

The nineteenth- and twenty-first-century Turk platforms both use aesthetics to produce certain attitudes or affects for their users and viewers. Both machines do a particular kind of work (playing chess, completing millions of distributed HITs), but they also perform another kind of labor when they reinforce the dualism between human and machine that Rice identifies. This is what political philosophers Michael Hardt and Antonio Negri call “affective labor,” or economic work that produces a physical and emotional output, classically exemplified by the retail dictate “service with a smile.”68 For the nineteenth-century Turk, the spectacle itself is the affective product: the figure of the Turk with his moving arm, the display cabinet full of functionless (but impressive) gears and machinery, and of course the labor of the man hidden inside the machine’s secret compartment all contribute to manufacturing a sense of wonder and excitement. In the twenty-first century, Amazon’s Mechanical Turk promotes notions of repetition and fungibility, implicitly encouraging its turkers as well as the purchasers of human computation to think of each worker as a simple number. The HITs themselves are ordered up in batches and, as we saw above, often disappear in seconds, encouraging turkers to be constantly vigilant for new opportunities. By relying on Amazon’s task distribution network and personal computers, the system also constrains the bodies and the time of its users (depending on the fluctuations of what piecework is available at any given time, and how well that work is compensated). The system produces HITs but also a kind of labor culture around those HITs.

That trace of a labor culture also appears in the apps and interfaces of the sharing economy. As the ads for Uber and Lyft suggest (figure 4.3, figure 4.4), a huge amount of energy in the interface economy goes toward the production of affect among consumers and providers of services. User feedback, emotional enjoyment, and a neoliberal ideal of independence define these systems at every level, from their logos to their feedback mechanisms. They are persuasive platforms designed to create an algorithmically mediated space of community. The computational layer between users and providers manages the details, calculating feedback scores, making particular users visible or invisible to one another, and generally managing the “experience” of using the system. These services build their reputation on transparency, arguing that the combination of user feedback and background checks serves to vet our drivers, house cleaners, errand-runners, and so forth in a more robust way than we would typically do on our own. But the systems retain control over the deployment of that transparency, choosing what information we see and maintaining control of this validation process behind algorithmic black boxes. In the end what we typically see are a small number of options that are all strongly recommended: the first step in a carefully choreographed user experience that depends as much on emotional outcomes as financial ones.

The emotional valence of financial transactions goes beyond affective labor, tracing its roots to Adam Smith’s The Theory of Moral Sentiments, an important counterpoint to his better-known arguments about capitalism in The Wealth of Nations. Smith argues in Moral Sentiments that social cohesion and the effective functioning of a marketplace depend on a logic of virtuous action that has imagination at its root. We can never know the lived experience of others, but social engagement depends on imagining that experience in its joy and pain. Empathy is a crucial component of all social intercourse, a feedback mechanism for adjusting our own behavior according to moral guideposts and our own judgments. Smith’s recognition that we can never truly know the experience of others—yet constantly strive to imagine it—places an important moral responsibility on each participant in cultural practice, one that parallels his libertarian political views. The foundation of a moral society, he argues, is the constant imaginative work we all do to map out the territory of empathy, creating a pragmatic sense of justice and shared experience that guides economic and social behavior.

As law professor Jedediah Purdy argues, the economic imposition of particular emotions and even identity positions on service workers brings something distinctively new to Smith’s model: “Mandatory smiles are part of an irony at the heart of capitalism. … Faking it is the new feudalism. It is the key to an economic order of emotional work that tells people who and how to be on the basis of where they fall in the social and economic hierarchy.”69 When we adapt the theory of moral sentiments to the age of the algorithm, we can see that a third party enters the transactional space between human actors. The expansion of the interface economy and its dependence on affective labor brings new genres of performance to life: we are very much part of the interface now, contributing to the aesthetic superstructure of computation with our bodies, our minds, and our affect. Five-star ratings all around! Yet we do this work as “centaurs” too, depending on the same interface platforms for essential affective cues. The march of “Uber for X” systems that turn various financial and noncommercial arrangements into services for hire brings with it the affective labor of the service economy, but with computation performing crucial pieces of the imaginative work of empathy.

Take Uber, for example. The company’s feedback system asks both riders and drivers to rate one another, creating a persistent database of user profiles that allows the company to identify its most compliant and troublesome workers and clients (who can then be passively or actively excluded from future transactions). This notion of consumer feedback depends on the capitalistic function of empathy as Smith defined it: creating an empathetic frame, a channel for feedback, encourages better interactions. Something as simple as knowing your driver’s name is a more empathetic contact than most of us have with a taxi driver. And of course this data can help companies identify problems in their business practices and help customers find the services and products they want. Feedback data allows us to extrapolate beyond the individual experience, aggregating acts of empathetic imagination to project a more holistic view of the future (e.g., forty-three people thought this driver was rude). But by creating the tool for extrapolation, these systems also replace the nuance and authorship of the imaginative act as Smith imagined it with a series of surveys. Quantifying empathy, often into a completely abstracted five-star scale, encourages us to leave the heavy lifting to the algorithms. Instead of doing the imaginative work ourselves, we outsource it to a culture machine built up of tailored Likert-scale questions and comment forms, black box ranking algorithms, and the algorithmically mediated crowd of strangers that interface companies seek to synthesize into a virtual community. Variations on this move have persisted for centuries, from the deployment of statistics like “four out of five doctors choose product X” to the empathetic appeals of advertising in general—but computational platforms take it to a deeper epistemological level.
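The reduction described here can be made concrete with a minimal sketch (hypothetical data and names throughout, not any actual platform’s system): a driver’s distinct human encounters collapse into a single aggregate score plus a few simple extrapolations.

```python
from statistics import mean

# Hypothetical five-star ratings a driver received, one per ride.
# Each number once stood for a distinct human encounter.
ratings = [5, 4, 5, 3, 5, 2, 5, 4]

# The platform's quantified "empathy" is a single mean score...
score = round(mean(ratings), 2)

# ...plus extrapolations like a count of dissatisfied riders,
# the algorithmic analogue of "forty-three people thought this driver was rude."
low_ratings = sum(1 for r in ratings if r <= 2)

print(score)        # 4.12
print(low_ratings)  # 1
```

The imaginative work Smith describes happens, if at all, in choosing which star to tap; everything downstream is arithmetic performed by the culture machine.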

As we saw above, the central keyword for the “sharing economy” is not imagination, but trust. Uber and its peers offer to take on various forms of cultural and imaginative work in exchange for our trust (and, of course, our money). The affective labor they ask their workers to engage in serves that larger trust, building up a brand of consistent, friendly service. We are asked to trust, most significantly, in the algorithm. Trust that the feedback and background checks will weed out any dangerous or merely rude drivers. Trust that the system will find the closest, best vehicle to whisk you on your way—that the cartoon map is real. Trust that the driver will be fairly compensated and that you will pay a fair price. In each of these claims, the imaginative work of empathy has been outsourced to the algorithm. We don’t need to work out how to catch a cab, assess the vehicle and its driver, calculate the tip, or engage in any of the other micro-practices and empathetic moments that define the taxi experience. They do not simply disappear. Instead, Uber asks us to perform substitute empathetic work, in algorithmic terms. Some of this work is explicitly requested or required: rate the driver, share the experience on social media for a future discount, recruit friends to the service to get another discount. Some of it is less obvious: the ads and the labor culture of the service ask us to validate the drivers as fellow entrepreneurs, individualists, and free agents. Different companies present different versions of that validation (Lyft encourages riders to sit in the front; Uber does not), but they all seek to build a synthetic community of practice held together by algorithms.

This can result in powerful new forms of community and social action, like the story of an Uber driver who raised money for a terminally ill passenger, in part by mobilizing the Uber community.70 But it also reroutes a central facet of economic life through algorithmic mediation, embedding the essential empathetic calculus in the culture machine. We are just beginning to see the impact of this move, a growing dependence on algorithms themselves to perform affective labor. The Easter eggs in Siri from chapter 2 provide one example of a machine performing emotional work for our benefit. The rapidly expanding field of quantified health and well-being offers a broader arena where algorithmic systems seek to collaboratively perform affective work with and through our bodies. At times it seems like we desperately seek channels to perform and experience this affective labor directly, perhaps because so much of it has been outsourced and formalized by algorithms, user interfaces, and computational back ends. Why else would it be so easy to make thousands of dollars with a Facebook app that allows you to send hugs to your friends? That hunger for emotional contact, for a space where we can imagine directly, marks another disparity between abstraction and implementation. The gulf between the imaginative empathy of human and machine actors in culture comes down to the construction of value. Just as Smith sought to put economic practice on a foundation of intersubjective, empathetic understanding, we are now struggling to define the fundamental structure of value in an algorithmic world.

Notes