3    THE VISUAL CULTURE OF IMAGE ENGINEERS (OR THE LENA IMAGE, PART 1)

The centerfold image of the November 1972 issue of Playboy magazine featured a young Swedish woman in a large, beige hat with an enormous, purple feather tassel. It appears that she is in an attic: a wicker bassinet containing a doll is visible in the background and a kerosene lamp sits over her shoulder. Her breast is exposed and reflected in a mirror on the right side of the image. The mirroring of the naked body is a generic feature of porn: it doubles the model’s flesh and often reveals what is otherwise hidden from the gaze of the camera/spectator. She is naked but for the hat, a scarf, and a pair of boots—a weird assemblage of clothes. She stares directly into the camera. Playboy said her name was Lenna Sjööblom, though we now know her name to be Lena Forsén (previously Söderberg). The centerfold appeared in Playboy at the peak of its popularity, and the November 1972 issue was (perhaps coincidentally) the highest-selling issue of Playboy ever.1

Having considered and talked about this image for the past decade, I still think that it is a rather odd picture. The hat and the feather tassel are incongruous with the scene; the image is cropped awkwardly. The whole thing has the appearance of someone who has escaped to an attic naked and put on whatever they found. That may be the point. It’s indelibly marked by the aesthetics of its time, including a Vaseline-smudged lens that is unmistakably an artifact of the 1970s. It remains surprising to me that this image has circulated as an example of a good image for nearly fifty years—but it has.

This centerfold would be an unremarkable footnote in the history of visual culture, photography, porn, and magazine publishing were it not for the fact that in 1973, someone at the University of Southern California (USC), in the Signal and Image Processing Institute (SIPI), scanned it to test new image compression and transmission techniques for use on ARPANET, an early computer network built by the US military and historically treated as the predecessor to the internet. Between the moment that it was digitized in 1973 and today, this particular image transformed into the “Lena image” or “Lenna image” (figure 3.1), a ubiquitous industry standard and, by anecdotal measure, the most popular digital test image of all time. The way that we look at images online was standardized by engineers and computer scientists, who often returned to the Lena image when they wanted to demonstrate new skills, new techniques, and new standards—it is woven into the fabric of our digital and visual cultures.

Figure 3.1

The Lena image in its test image form. This image is called Lena_std.tif and was obtained from the Signal and Image Processing Institute’s test image database. It is an excerpt of the November 1972 centerfold of Playboy magazine.

There are differing accounts of how a Playboy centerfold wound up on ARPANET in the earliest days of this network technology. The accounts agree that it took place in mid-1973 and that a SIPI engineer named Alexander Sawchuk and/or his graduate assistant, W. Scott Johnson, performed the digitization. From there, however, the precise details diverge. Version A of the origin story describes a deliberate choice to send someone out to buy a Playboy, which they chose because of its special qualities:

One team member ran out to the nearest magazine store and picked up the latest Playboy, the fateful Lena issue. The magazine was chosen because it was one of the few publications that had full-color, high-quality glossy photos—Hugh Hefner insisted on using only the best photography and paper stock to avoid having his product considered a low-end skin rag—and its centerfold was ideal because it was the right size. Photos were wrapped around the scanner’s cylindrical drum, which measured thirteen centimeters by thirteen centimeters. Folded to hide the “naughty bits,” the top third of the centerfold fit perfectly.2

In this version, the image was specifically chosen with foresight out of a desire for a high-quality image on good paper stock—an attempt, as it is remembered, to capture the qualities of Playboy as a printed artifact, which were specific to its status as porn. Although someone supposedly “ran” out to a store, the scene is relatively calm and deliberate. The narrator remarks that the object and the instrument had a natural affinity—the image “fit perfectly” on the scanner—just the right size to crop out the model’s breasts and scrub the image of its porniness. The image, in this version, was simply “folded” in a way that rendered it no longer illicit.

In another telling of this story, published in the IEEE Professional Communication Society Newsletter, the scene is much more frenzied. According to version B:

Sawchuk estimates that it was in June or July of 1973 when he along with a graduate student and the SIPI lab manager, was hurriedly searching the lab for a good image to scan for a colleague’s conference paper. They had tired of their stock of usual test images, dull stuff dating back to television standards work in the early 1960s. They wanted something glossy to ensure good output dynamic range, and they wanted a human face. Just then, somebody happened to walk in with a recent issue of Playboy. The engineers tore away the top third of the centerfold so they could wrap it around the drum of their Muirhead wirephoto scanner, which they had outfitted with analog-to-digital converters (one each for the red, green, and blue channels) and a Hewlett Packard 2100 minicomputer. The Muirhead had a fixed resolution of 100 lines per inch and the engineers wanted a 512 x 512 image, so they limited the scan to the top 5.12 inches of the picture, effectively cropping it at the subject’s shoulders.3

In this version, Sawchuk, his assistant, and the lab director used the magazine both out of necessity and an acute desire for a novel image. Instead of folding the image, they tore it. This narrative replaces the deliberate, modest folding used in the first version with a feeling of urgency. In this tale, too, there is an affinity between the scanner and the image: the engineers “wanted a 512 x 512 image,” which had the effect of cropping the image above the model’s breasts.
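The scanner arithmetic reported in version B can be checked directly. A minimal sketch, using only the figures given in the quoted account (100 lines per inch, a desired 512 x 512 image):

```python
# Illustrative check of the scan arithmetic described in version B.
# All figures come from the quoted account, not from measurement.

LINES_PER_INCH = 100   # fixed resolution of the Muirhead wirephoto scanner
TARGET_PIXELS = 512    # the engineers wanted a 512 x 512 image

# Height of centerfold that a 512-line scan covers at 100 lines per inch
scan_height_inches = TARGET_PIXELS / LINES_PER_INCH

print(scan_height_inches)  # the "top 5.12 inches of the picture"
```

The fixed resolution thus dictated the crop: a square 512-pixel scan could cover only the top 5.12 inches of the page, which happened to end at the model's shoulders.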

This is a fairly typical way that moments of innovation are remembered in both popular culture and in professional oral histories; moments of boredom and tedium are interrupted by frenzied improvisation. This hurried scene is characterized by the hectic pace of intense ingenuity, leaving the scanner, limited by its resolution, to do the hard work of cleaning the image of its “not safe for work” content. The irony, of course, is that by using the Playboy image, the engineers had already revealed that for them, the centerfold image was safe for their workplace: perhaps a colleague brought the Playboy to work, like a mystery novel, a crossword puzzle, or a newspaper to read on a lunch break.4

It is also apt that the Lena centerfold should follow this path to notoriety. On the formal level, Playboy was known as the first popular magazine with a centerfold in the United States. This is an important detail in the life story of an image often used to test the limits of data compression: the centerfold was already a compression technology that used the technique of folding to maximize the storage potential of stapled paper. The centerfold—as compression technology—employs the codec of a single gatefold to fit 50 percent more nude woman into the format of the magazine (figure 3.2). Compression allows data to travel, and the centerfold allowed the Lena image to be transported into the lab undisturbed, in the hands of an engineer.

Figure 3.2

Diagram of centerfold technology. Paper folds are marked by dashed lines, with the far-right third folding inside of the middle third. The centerfold uses a gatefold to compress porn into the standard dimensions and format of a magazine. Image: Dylan Mulvin.

These stories recall Eve Sedgwick’s exploration of “male homosocial desire” and the (sexual and nonsexual) ways that the coherence of American, heteronormative desire is mediated and maintained through shared objects of desire.5 By “homosocial desire,” Sedgwick means the ways that objects like centerfolds are sites of sexual attachment—artifacts through which groups (of men) can articulate their desire and bond with other people over their shared experience of desiring. The Lena image in its original context was not just any kind of private image, but a Playboy centerfold, an archetypal artifact of what Lauren Berlant calls the “zone of privacy” and a conspicuous symbol of a “national heterosexuality [that] ‘adult’ Americans generally seek to inhabit.”6

If the magazine was unremarkable in the USC lab, it is remarkable for this taken-for-grantedness. Even if we don’t know this scene firsthand, we can recognize it as a genre of “spectacular masculinity”: the garage, the closet, the shed, the “man cave”—marked by communal signifiers like the pinup calendar, the beer fridge, or a picture of a Corvette.7 It’s the way that American hetero desire left its mark on domestic and workplace architecture of the late twentieth century. And the emerging computer professions were no exception.8 A Playboy centerfold in a research lab in the early 1970s was not any mere piece of pop culture detritus. It was a rank icon of normative desire and the shifting popular mores surrounding the expression of sexuality in public life.

In each version of the origin story, agency for acquiring and possessing Playboy was redistributed and disavowed. Desire for the image was displaced and recoded as desire for the formal qualities of the picture and the material features of the magazine—singling out the paper stock, the glossiness of the image, its dynamic range, and its portrayal of a human face. The image, which is passive in this narrative (something to simply appear; something to be retrieved), is subjected to the close analysis and gaze of the technicians. But it’s the form, not the content, that they cop to wanting. This requires a significant suspension of disbelief that the content of the image (a naked woman) could be separated from its form (a glossy image on paper). It denies that the glossiness of a porn magazine is connected to the aesthetics of representing nudity. But it is altogether typical of the ways that proxies are cleaved from their origins. This is a clear attempt to clean the image, to render it as mere data and grist for a technical system.

Scholars of data are now well accustomed to noting the ways that data are never raw but always cooked, and never mere data but always material artifacts of social relations.9 Data sets are inescapably shaped by the contexts of their collection, storage, transmission, and interpretation.10 As part of a larger project to complicate stories about data and to rematerialize the digital, this chapter documents how an object of desire was framed as useful data and a useful proxy for images of the world out there. The Lena image served dual purposes for engineers: it supplied data to train algorithms and a template of a human face to act as a stand-in for other faces. This story traces the social life of images in the earliest days of network technologies and illustrates how image proxies are marked by the cultural milieus of their uses.11 By reconstructing the media practices of computer scientists in the early and adolescent periods of the internet, I excavate the norms and controversies surrounding image reproduction and the ways that test images—as porous proxies for the world—soak up the contexts of their use and reuse.

This chapter is organized around three contexts for studying proxies as porous materializations of data: the socioaesthetic context of test images used in the standardization of image technologies; the context of computer research in the 1960s and 1970s—including the regular exploitation of women’s bodies; and the institutional context of military-funded image research at USC. It closes with a discussion of conversions and how we can think about the transformation of images within the larger history of proxies. Through the investigation of these contexts, I undertake an analysis that follows how the Lena image came to be and came to be taken for granted—to see how a cropped image of a white woman from a porn magazine could settle into a canonical test image for a new, digital, and networked visual culture.

It is now axiomatic in the history of media technology to recognize the ways that sexuality and technology codevelop in cycles of innovation, adoption, fear, hope, stigma, experimentation, and desire.12 The internet is often singled out for exaggerating the effects of a “pornotroping” approach to information—one that renders and controls bodies as codified flesh.13 This chapter and the next will contribute to that historiography. But I also want to use the institutional and technological history of image processing to understand how image engineers, in addition to being skilled laborers embedded in a university and a foundational, technical institution, were processing their labor through their identity as consumers of porn.14 The intention, then, is to reckon with the ways that desire and control (and desire as control) shape technological development from the ground up.

>>>

The history of the Lena image links two forms of cultural work surrounding proxies: that of the women who have traditionally served as the models for test images, and that of the engineers, technicians, and scientists who leverage these images to build connections between their work environment, their disciplinary standpoints, and the coding of technologies. To appreciate the connection between these two forms of labor, joined as they are by a Playboy centerfold making its way into the SIPI labs, we first have to recognize that there is one obvious discrepancy between the two origin stories of the Lena image. In version A, a team member “ran out to the nearest magazine store” to buy the issue of Playboy. In version B, at the moment of technical need, “somebody happened to walk in with a recent issue of Playboy.” But we can probably dismiss version A’s timeline, as the centerfold is from the November 1972 issue, which wouldn’t have been on the shelves of a nearby store or newsstand in “June or July of 1973.” So something like version B is probably closer to the truth. But this version should give us pause. Objects are never simply lying around. Porn magazines do not happen to appear in a lab at just the right moment. Timeliness is a condition of social expectations—a blend of the material culture of our surroundings and the tempos of our labor.

Objects, as Sara Ahmed writes, do not “make an appearance.” Instead, arrivals take time. Objects “could even be described as the transformation of time into form.”15 To study the arrival of objects (or stories of their arrival) is to interrogate the contexts of their appearance, and how those contexts condition and shape what exactly arrives. We cannot understand how the Lena image came to appear on ARPANET in the early 1970s without understanding the welcome presence of the November 1972 issue of Playboy in the offices of SIPI. The conditions for the arrival of the Lena image had to be right. In this case, those conditions included the practiced surveillance of a woman’s body by the trained eyes of engineers, who themselves were devoted to the labor of training computers in the surveillance of images. To investigate the history of a proxy test image is not only to plumb the standards of visual culture, but also to reckon with the visual culture of engineers, their position in a larger circuit of culture, and the material culture of their workplaces.16

The power to name—let alone create—proxies can shape the default conditions of a knowledge infrastructure and the common connections shared by its participants. Sawchuk, who digitized the Lena image, would go on to serve as one of SIPI’s first directors, and SIPI itself would gain notoriety for its early work in image compression and analysis, as well as its database of digitized test images. If, following Marilyn Strathern, we consider culture to be “the way certain thoughts are used to think others,” then the Lena image, as much as any proxy, has served this purpose for nearly fifty years—acting as a lingua franca through which image engineers could understand one another’s labor and accomplishments.17 Although the Lena image began as a stand-in for the world of images, it soon became a stand-in for the lifeworld of engineers, a stand-in for the communal media of a mostly male profession, and a stand-in for a world of images of women that could be decompiled, measured, and analyzed.

More than just an analogy, the persistence of the Lena image exposes what Charles Goodwin has termed “professional vision.” For Goodwin, members of a profession shape events through the creation of objects of knowledge “that become the insignia of a profession’s craft: the theories, artifacts, and bodies of expertise that distinguish it from other professions.”18 Practitioners do this, Goodwin argues, through the coding of phenomena, which renders everyday events into recognizable objects of knowledge in the discourse of a profession; through the highlighting of phenomena and their features through practices of discursive marking; and by producing and articulating material representations—meaning that the stuff of representation becomes “the material and cognitive infrastructure” that makes theory possible.19 In the two narratives about the Lena image’s origins, we can already see this process at work, as SIPI engineers sought to code a centerfold as a digital image (their domain), highlight its formal features (and downplay its cultural ones), and produce and articulate their own process of material representation—in this case, articulating the use of the image to the materiality of digital transformation.20

What we recognize as styles and techniques of visual representation are inseparable from the uses of image proxies, which are used to train and evaluate representational skill. From painting and drawing to three-dimensional (3D) renders and machine learning databases, shared reference images have served as tools for training and comparing the results of graphic techniques and the skills of various creators.21 These shared reference points, in turn, serve as benchmarks for communities of practice. Chosen to stand in for the world out there, an image proxy becomes an artifact of a profession’s history, its coherence, and a signal of insider knowledge. Through these proxies, we learn how to see, how to judge, how to classify, and how to trace our belonging in a culture.

LENA/“LENA”/“LENNA”

There are at least three different, overlapping figures to consider in the Lena image—three different codings of the image. There is Lena Forsén, a Swedish person; there is Lenna Sjööblom, the name that appears in Playboy; and there is the Lenna/Lena image, a cropped, scanned, and digitized copy of the centerfold, whose varied spelling in the digital imaging literature indicates some of the awkward liminality of the image itself. We should be uncomfortable completely separating these three figures, as doing so threatens to undercut the humanity of the person whose body and labor are on display in every version of the image. However, as the history of these images and their different circulations show, in order to understand how the Lena image operates as both a proxy and a token of professional vision, it is necessary to understand how the Lena image and the model, Lenna Sjööblom, were effectively separated—how a person can be separated from her representation and how a test image is coded as a test image instead of as a centerfold.22

What really separates the Lena image from the centerfold is a massive act of erasure and a concerted act of data hygiene: cropped just above the model’s bare breasts, the test image elides the illicit content of the original, leaving only her face and the reflection of the woman in the mirror. But the act of concealment that recodes the image and transforms it from a lurid centerfold into a decontextualized headshot can only ever be partial. The image is still marked by the soft-focus styling of 1970s magazine representations of women, and its original status as a centerfold codes the imaginary spaces beyond the cropping, as the unseen naked body haunts the excerpt. This decontextualization is also the primary act of highlighting (to use Goodwin’s terminology) that transforms the Lena image into a workable proxy. The act of cropping the image above the bare breasts, whether by folding or tearing, cleansed it of its original context and transported it from a private image of desire into a testable surface.

In the context of the image’s creation, we can situate clipping, cropping, tearing, and folding as ways of transforming the image and framing a vision process that would make it a viable test object. This reframing allowed the image to travel outside the immediate context of the SIPI lab. Just as the Joint Photographic Experts Group (JPEG) or MP3 standards reformat image or sound data to travel in a compressed form, centerfolds need to be reformatted as scientific objects.23 The history of digital image compression needs to be understood through the manual labor of shaping image standards, including their test data, and the many acts of formatting and cleaning “dirty” test data for professional use.

With the growth of computer vision research driven by massive databases, there are potentially millions of test and training images, often obtained from social media and the World Wide Web. Each image has an origin story and could serve as a case study in how the material of a vernacular life can be transformed into a test object in scientific and technical research. But the Lena image is a privileged case. Not only is it excerpted from porn and likely the most frequently used test image of its kind, with an inordinately long lifespan; it also stands out from other test images because of the sentimentality espoused by engineers toward the image, as well as toward the woman it portrays. But even if it is a privileged case, it is far from alone. The Lena image is one in a long line of images of white women used by engineers and technicians to set the contours of so-called normality within image standardization.

TEST IMAGES: A HISTORY OF FEMINIZED WHITENESS

Now let us approach the moment of digitization in the SIPI labs as a conjuncture: a meeting of contexts that brought together a moment in American popular culture, a set of homosocial rituals, common-sense thinking, practices surrounding the use of images, and scientific knowledge-making. The first node in this conjuncture, then, is the history of test images used in the standardization of visual culture. This history demonstrates that the use of the Lena image was entirely in keeping with existing practices of calibrating visual standards to white women’s skin as a prototype. And the history of test images weds the datafication of images and the visual culture of engineers, braiding together a material and aesthetic assemblage for producing standardized image technologies.

A well-used and -circulated test image is a proxy: a known quantity, a fixed point, and an invariant for testing the variables of image technologies. Test images are objects for trying things out, measuring the skills of students, and gauging the success of new techniques and technologies of image reproduction and manipulation.24 Test images are worked upon when they are transformed, compressed, warped, masked, analyzed, identified, decompiled, and recompiled. But whatever is done to a test image must be measurable. In the parlance of digital image processing, test images are considered “data”:

Testing different methods on the same data makes it possible to compare their performance both in compression efficiency and in speed. The need for standard test data has also been felt in the field of image compression, and there currently exist collections of still images commonly used by researchers and implementers in this field.25

In researching how scientific and technical processes choose stand-ins, the terms “test data” and “training data” frequently appear. For instance, training data can be used to hone a facial recognition algorithm’s predictive assumptions by using a corpus of facial images. Test data then present the algorithm with fresh data to see if the training was successful. For this reason, test data must not have appeared in the training set: proving that your facial recognition algorithm works on the same data set twice would be redundant and would offer no predictive value about its success in the real world. The definition of “success” will be negotiated by a range of stakeholders who can contest what kinds of successes and failures are acceptable and which may be noxious.
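The separation the passage describes can be sketched in a few lines. This is a minimal illustration, not any real facial recognition pipeline; the “images” are hypothetical filenames standing in for a corpus:

```python
import random

# Minimal sketch of the train/test separation described above: test data
# must be disjoint from training data, or evaluation proves nothing new.
# The "images" here are stand-in identifiers, not real face data.

images = [f"face_{i:03d}.png" for i in range(100)]  # hypothetical corpus

rng = random.Random(42)      # seeded so the split is reproducible
shuffled = images[:]
rng.shuffle(shuffled)

train_set = set(shuffled[:80])  # used to fit the model's assumptions
test_set = set(shuffled[80:])   # held out to estimate real-world success

# The defining property: no image appears in both sets.
assert train_set.isdisjoint(test_set)
```

The disjointness assertion at the end is the whole point: an algorithm scored on images it has already seen tells us nothing about the as-yet-unknown images it will encounter in use.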

A piece of test-proctoring software, used for administering university exams remotely, flags people of color as “unverifiable” and denies them access to their schoolwork—a situation for which students must seek redress. The technology, driven by artificial intelligence (AI), asked these students to “shine more light” on their faces.26 A millimeter wave-scanning machine, used in airport security checkpoints and trained on a strict, binarized categorization of genders, registers statistically “anomalous” cases as suspicious, singling out some people for further inspection and scrutiny.27 The result is greater friction, more work, more uncertainty, and less security for people who do not cleanly register in opaque, data-driven systems that are increasingly embedded within the civil infrastructures of everyday life. In each case, the likely culprits are the training and test data used to develop these technologies: data that circumscribed the normative dimensions of how these technologies could be used.

In visual culture and visual technologies, using the same test images repeatedly and consistently across techniques enables a process of technoaesthetic benchmarking, in which practitioners can weigh the costs of bandwidth or storage against the question “Does it look good enough?” or “Does it work well enough?” Benchmarking works only if we can say (in quantity or quality) how a new version compares to the original, and without a test image, difference isn’t measurable. The Lena image itself offers a recognizable face, a reflective surface, and a complex feather tassel, features often cited as the fixed points that computer scientists and engineers use to track and index the success of their transformations. Test images, then, become canvases for crafting, marking, and capturing differences. As such, test images often (but not always) feature inconspicuous subject matter, clear divisions of space, and a variety of pictorial features.
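One common way this comparison is quantified is peak signal-to-noise ratio (PSNR), which scores a processed image against a fixed reference. A minimal sketch, using toy 8-bit pixel values rather than any real test image:

```python
import math

# Minimal sketch of technoaesthetic benchmarking: a fixed reference image
# lets us quantify how far a processed version strays from the original.
# PSNR is one standard such measure; the tiny 8-bit "images" below are
# toy data, not the Lena image.

def psnr(reference, candidate, peak=255):
    """Peak signal-to-noise ratio; higher means closer to the reference."""
    mse = sum((r - c) ** 2 for r, c in zip(reference, candidate)) / len(reference)
    if mse == 0:
        return float("inf")  # identical to the reference
    return 10 * math.log10(peak ** 2 / mse)

original = [52, 55, 61, 66, 70, 61, 64, 73]    # reference pixel values
compressed = [52, 56, 60, 67, 69, 62, 63, 73]  # a lossy reconstruction

print(round(psnr(original, compressed), 1))
```

The number is only meaningful relative to the reference: without the same fixed test image on both sides of the comparison, “better” and “worse” have no shared scale.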

My favorite test image is used in 3D object recognition, and it features a shoe, a landline telephone, and a box of miniature biscotti. It gets the job done.28 The thinking goes that if an image technique works well on a test image or a series of test images, then it is likely to work well on future, yet-unknown images. This is possible only if a given image—like the Lena image or a shoe and a box of biscotti—is treated as a credible sample of the world of possible “natural images” (i.e., the world of rich and varied images from the vernacular world). This requires seeing the Lena image as a stand-in and imagining, however provisionally, that the way it responds to transformation and analysis will correspond with the world of as-yet-unknown images. Here, the proxy status of the Lena image is leveraged to make a wager: if a processing technique works on this woman’s face, it will probably work on pictures of other faces too.

As much as we might try, it is not possible to unbraid the cultural dimensions of proxies from their material and formal dimensions. Instead, proxies must be approached as porous and leaky amalgams of their sociomaterial histories—they must be approached, in other words, as culture.29 The uses of the Lena image, beginning with the production of a glossy centerfold for Playboy and continuing through its digitization and circulation among engineers, are as much a part of the image as its formal representation, distribution of features, and color values.

>>>

Femininity and whiteness haunt the history of test images. Throughout the twentieth century, images of young white women were used by engineers, technicians, and consumers to develop image standards, test that those standards were implemented correctly, and maintain their equipment. When technologies are created to more faithfully reproduce whiteness, those coded as nonwhite (especially the skin of those culturally coded as Black and Brown) are rendered less legible in image media and are subject to the compounding inequalities of intersectional prejudices, produced and exaggerated by technology.30 In recent years, and because of the work of civil rights, anti-racist, and abolitionist activists, greater attention is now paid to the failures of representation that result from using biased training data. Train your algorithms on too many images of pale-skinned people, and they will struggle to properly recognize less pale faces, flagging those faces as problems for the system—a problem that is multiplied when such technologies are disproportionately used to police and incarcerate racialized populations.31

The failure of image technologies to render or register nonwhite skin has become a focus of activists’ demands for more just image technologies. However, activists and critics have also been clear that merely using more “inclusive” training and testing data is not a sufficient response to the violences and oppressions of carceral technology. And while these concerns are magnified by the unprecedented scales of new technologies and the low level of interpretability of many algorithms, the warped representation of skin is an endemic issue in the development of both digital and analog image technologies.

As Simone Browne describes, “prototypical whiteness” operates by treating whiteness as a normative starting point and coding “darkness” as an exception.32 The structuring power of prototypical whiteness means that some bodies are coded as legible and others as problematic. This is the ideological manner in which race becomes a “problem to be solved,” to which, ironically, technology is offered as a solution.33 Browne is building on the work of Lewis Gordon, who writes that “whites’ existence is treated as self-justified whereas Blacks’ existence is treated as requiring justification.”34 Whiteness is a standard, Gordon argues, a default that can be taken for granted; adjustments to this default are exceptions and require their own explanations and justifications. But it’s a process that always refers back to an inescapable, normative whiteness that structures both the standard and the exception.

Prototypical whiteness is a facet of the history of image technologies and a general “culture of light,” in Richard Dyer’s terms, that binds the history of pictorial representation with the history of race and colonialism.35 As Dyer states, “white power secures its dominance by seeming not to be anything in particular”36—e.g., the automated proctoring software that asks you to “shine more light” on your face. The whiteness at work in test images is not an essential identity, but rather a social category that treats “white” as the unmarked and default condition of image technologies. Prototypical whiteness works in concert with other normate templates that code bodies along the axes of race, gender, sexuality, able-bodiedness, and age. Hence, whiteness is both a technological construction that encodes some bodies as more legible than others and a coordinate on a graph of social difference. In both valences it acts as a cultural adhesive, connecting aesthetic norms and technological constraints.

Because we cannot unbraid the social and the material, we have to understand the ways that they reinforce each other, in this case in the continued production and encoding of bodily difference.37 Prototypical whiteness is baked into the history of visual media, but it extends to other imaging technologies as well, where different flesh tones and luminosities are treated unequally. Researchers have shown that self-driving cars exhibit a “predictive inequity” in detecting pedestrians of varying skin tones;38 that pulse oximeters—which provide vital information about a person’s pulse and blood-oxygen levels—provide less accurate readings on darker skin;39 and that fitness trackers report less accurate heart rates for people with higher levels of melanin.40 Each of these technologies presents the possibility of worse health outcomes or death, in part because of a testing and calibration system based on prototypical whiteness.41 By being treated as prototypical, default, invisible, and taken for granted, whiteness escapes marking and, in relief, defines what it means to be marked as ethnic, different, impaired, queer, or generally Other. In the technological construction of prototypical whiteness, otherness becomes the special case that must be explained and adjusted for, while simultaneously serving as the exception that proves the rule of whiteness’s normativity. This means that “whiteness,” as a cultural position and a datafied coding of flesh, is not simply captured and reproduced by image technologies; rather, image technologies also work within a sociomaterial system to code and crystallize nonwhiteness as difference.

>>>

Whiteness is most portable, as a benchmark for image technologies, when it is yoked with gendered representation and a normative femininity. Here the history of images of white women threads together a history of being looked at and consumed through measurement. In recent years, several researchers have taken up the history of test images to chart how engineers, scientists, technicians, and a range of standard-setters have encoded this history of raced and gendered representation within the basic infrastructures of visual culture.42

Genevieve Yue and Mary Ann Doane have each written about “China Girls,” which, despite the orientalized name, were white women used in the calibration of film reels, from the 1920s to the 1990s. China Girls were short filmstrips, clipped from a young woman’s screen test and stitched to the beginning of freshly developed film reels as tests for technicians to use in calibration. The strips would be played before the newly processed film, and through these side-by-side comparisons, technicians ensured that the film was developed correctly. In this way, they worked as a fail-safe check against error. There is no clear account of the origins of the term “China Girl,” and the professional use of the term apparently predates any appearance in print.43 Despite the difficulties in precisely locating the origins of the term, the orientalist emphasis of the name privileges, as Yue writes, “a woman’s subordinate, submissive behavior, qualities that would be consistent with the technological function the image serves.”44 The search for these qualities is echoed throughout the history of test images in visual culture, as well as in the long-standing uses of images of women as test objects, in which whiteness and a normative femininity are used to set the conditions of new image media and maintain their default settings.

“Shirley images” or “Shirley cards” have served a similar function to China Girls, but they are used in color calibration and skin-color balance in photography.45 Named for an early model, Shirley cards act as color bars for flesh tones. As Lorna Roth writes, skin color balance is a process in which a “woman wearing a colorful, high-contrast dress is used as a basis for measuring and calibrating the skin tones on the photograph being printed.”46 Shirley cards and China Girls are striking in the ways that they try to suture together a technical apparatus and a woman’s face and body. They forecast the practices that these technologies might be used for—they imagine, however partially, the kinds of faces that might appear on film and photography stock.

Figure 3.3 (seen earlier in this section) shows a typical Kodak Shirley card from the early 1970s. It portrays a woman in upper-class garb surrounded by pillows in primary colors. Against the stark contrast of her black-and-white clothing, the image provides clearly delineated blocks of color that can be tested and compared with other images. Figure 3.4, on the other hand, is a Pixl test image produced by a Danish company, reproduced around the web (I first found it on the website of a Russian ink supplier), and used for color calibration of photo printers, acting as an unofficial Shirley card. The image features the faces of women, a pair of isolated lips, a pile of meat, a well-manicured park, a luxury car, a watch, and a naked pair of buttocks. These Pixl images circulate on message boards as calibration tools, and even though the assemblage sometimes changes (the car is updated, for instance) the women’s faces and the buttocks stay constant. It’s an incredible amalgam of stuff, all marked by a kind of distilled desire. Where early Shirley images dressed models in high-class finery to test image media, the Pixl image drops the pretense; women and the artifacts of conspicuous leisure all encircle a final image: a giant crevasse.

Figure 3.3

Kodak Shirley Card (1974) portraying a woman positioned between three cushions (in the original color image, the cushions appear clockwise from right: red, yellow, and blue) and wearing a fur stole (white) with gloves (black). Original image: Kodak; photograph: From the collection of Hermann Zschiegner.

Figure 3.4

The Pixl test image for 2009. Used for standard Red Green Blue (sRGB) calibration. Courtesy of Thomas Holm and Pixl Aps.

Television standards were also built on test images with aesthetic values similar to those of Shirley cards. For most of its history, American color television was based on something called the NTSC standard (named after the National Television System Committee). This standard was adopted by dozens of other countries and lasted for over five decades as a truly hegemonic standard of visual culture.47 It’s fair to describe the NTSC standard as one of the most pervasive and longest-lasting moving image standards of the twentieth century. Curiously, though, this moving image standard was almost completely based on still images. The engineers who built the standard used twenty-seven test images and a single filmstrip. A close look at their test images (figure 3.5) shows that they depict scenes from an idyllic, pastoral life while portraying exclusively white skin.48 They show, among others, scenes of people boating, playing table tennis, lounging on hay, and leaning on a single-propeller airplane. Although China Girls and Shirley cards predated the NTSC, the prototypical use of whiteness as a default in image media was extended through the standardization of television.

Figure 3.5

NTSC test image, “Boat-Ashore Pair” from Donald G. Fink and NTSC (1955), Color Television Standards: Selected Papers and Records.

Like the many test images that came before it, the so-called first Photoshopped image, taken and used by one of Photoshop’s inventors, John Knoll, also features the half-nude body of a woman with pale skin, sitting on the beach, her back to the camera. The image is called “Jennifer in Paradise”; the Jennifer in question is Knoll’s girlfriend, and paradise is Bora Bora. Like the USC engineers with the Lena image, Knoll narrates his selection of the image through the combination of its formal features, its transformability, and its affective charge. “It was a good image to do demos with,” he recalls. “It was pleasing to look at and there were a whole bunch of things you could do with that image technically.”49 The image was just one of several that were used to demonstrate the capacities of Photoshop, but like other test images, it has taken on an iconic status as the ur-text of the technology.

>>>

From early film and television to the present, image technologies have been tuned to the prototype of white women’s skin, used as an instrument of infrastructural calibration. This means that whiteness moves through these standards with ease, whereas darker skin creates friction for image standards. People with darker skin have historically been portrayed in less detail, with less accuracy, and in aesthetically marginalized ways. This is due to the amalgam of the ways that image technologies are tested, calibrated, and standardized, as well as the network of technologies, practices, and cultural labor that surround image reproduction. Kodak film and photography stock notoriously failed to reproduce nonwhite skin—a failure attributed to the assumed whiteness of its users and the film emulsion used in producing it and the ways that cinematographers photographed scenes and the ways that make-up artists were trained and the way that lighting professionals lit faces.50 It’s this entire circuit of people, practices, and trained know-how, calibrated through image proxies, that further entrenches whiteness as a norm of visual representation. As Dyer writes of the history of photography and cinematography,

The assumption that the normal face is a white face runs through most published advice given on photo- and cinematography. This is carried above all by illustrations which invariably use a white face, except on those rare occasions when they are discussing the “problem” of dark-skinned people.51

Treating whiteness as a default meant that cases when conventional lighting techniques didn’t work or some skin wouldn’t register on film required exceptional solutions—these moments turned those bodies into so-called problems that exceeded the default operating conditions of the technology. Whereas Dyer documents this process in the history of film, photography, and art, we can see its traces clearly extended in the history of test images. By basing their sample of the outside world on a prototypical whiteness, engineers embedded these failures of imagination and consideration in technologies developed and calibrated according to a strictly limited cultural viewpoint.

As companies like Kodak accrued more evidence that their technologies failed to work outside of a narrow band of prototypical whiteness, they increasingly sought to remedy the problem by redesigning film standards and lighting techniques that could reproduce a wider range of skin tones.52 These adjustments were frequently coded in racialized terms. As Lorna Roth reports, one Kodak executive, in personal correspondence lauding the capacities of a new film stock called Kodak Gold Max, praised its ability to “photograph the details of a dark horse in low light.” As Roth writes, “I take this to be a coded message, informing the public that this is ‘the right film’ for photographing ‘peoples of colour.’”53

If we follow Roth’s reading of this correspondence as a coded message, then we can see it as an attempt to conceal the existing, racialized problems with Kodak film stock through a comparatively ridiculous scenario—there were presumably fewer complaints regarding the incapacity of Kodak film to photograph horses in low light than its incapacity to reproduce some people’s faces and skin. This is not only a dehumanizing equivalence between (dark) horses and Black or Brown human bodies; it also denies the political demand for just representation and the dignity of having one’s body faithfully captured on film.

In addition to new film stocks, film and photography companies started producing new “multiracial” Shirley cards. Figure 3.6 displays one example, called “Musicians,” that was produced by Kodak and provided to consumers by an Australian photo printer. As part of a $400,000 AUD Kodak Photo CD and Pro Photo printer from the early 1990s, the “Musicians” image could be used by anyone who wanted to calibrate their own at-home monitor.54 But on the internet “Musicians” has traveled widely and can now easily be found reformatted on message boards, hobbyist sites, and commercial printers’ sites, which indicates the many makeshift ways in which proxies float through social networks, becoming recognizable through use and reuse as quasi-standard stand-ins. In this image, the whiteness and high-class garb of earlier Shirley cards are swapped out. Instead, the scene that it portrays is an emphatically ethnicized one, in which a contrast of skin tones is paired with essentializing cultural stereotypes.

Figure 3.6

A Kodak Shirley card called “Musicians” (1993). Image: Kodak. Courtesy of David Myers.

Although all the models in “Musicians” hold instruments and wear a headdress of some sort, the image attempts to create another kind of striking delineation. Like the stark opposition of white, black, and primary colors in figure 3.3, here the accessorizing of the models separates them into (still-feminized) ethnic types based on an equation of skin tone with culturally predetermined modes of dress. For the publics of test images (photo-printers and photographers), these multiracial Shirley cards reentrench the prototypicality of Euro-American whiteness by clearly marking these remedial test images as exceptions to the rule. They situate the so-called problem of race as a problem of identity and ethnic tradition, thereby reifying the default, white Shirley as its own ethnic type.

The “solution” to the regular failures of image technologies once again materializes in the cultural, proxy labor of feminized models, here called upon to stand in for a performed diversity—a consumable otherness—that is conspicuously marked by their contrast to the unmarked whiteness of earlier Shirleys. Solutions like that presented by “Musicians” to the histories of failure within image technologies expose the limited potential of inclusion as a remedy to unjust representation—and demonstrate how inclusion itself can reinforce the representational power of the already-dominant.55 Images like “Musicians” expose the global reach of image technologies and standards, but they do so by presenting a visualized menu of women, and evoking the commodification of otherness, in which “ethnicity becomes spice, seasoning that can liven up the dull dish that is mainstream white culture.”56

Prototypical whiteness is a problem for technical reasons—it breaks the intended outcomes of a technology meant to portray people faithfully—but more important, it is also unjust and sets a default against which everything else must be treated as exceptional. Defaults encode the normal way that a technology is meant to work; when these defaults are tuned to normate bodily templates that are ableist, sexist, and racist, they force the vast majority of people who don’t fit those templates to adjust their behavior to fit the defaults. This is a recurrent theme in the politics of proxies. The Middletown and Decatur studies discussed in chapter 1 inserted whiteness as a default social position in measuring American experience. We see the technological construction of prototypical whiteness at work in stark relief in the history of test images, where whiteness has consistently and repeatedly mutated the conditions of possibility for new technologies and their capacities of representation.

The fact that white women’s bodies—often half-clothed, naked, or in a swimsuit—are employed throughout the process of creating and maintaining a standard demonstrates how proxies are not only used as a way of modeling the world in the laboratory setting. These images suture a standard together throughout the process of its development, use, and ongoing maintenance. In this sense, the SIPI engineers could be confident that their use of the Lena image would cohere with the larger world of image standards, which had also been tuned to similar images of white women. Of the images discussed here, from China Girls to the “Jennifer in Paradise” image to Shirley cards, some might be used by professionals trained in computer science, image engineering, or photo processing; others might be used by amateurs and hobbyists. But the corpus braids together the sociomaterial process that forms a consistent visual culture.57

THE CASE OF COMPUTER VISION: THE PROFESSIONAL CONTEXT OF THE LENA IMAGE

We’ve seen that the Lena image—in its portrayal of a white woman’s face and body—is consistent with the use of test images in other twentieth-century image technologies, from photography to film to television. On the one hand, the history of these images corroborates the notion that the Lena image did not just appear as an unprecedented kind of image in the engineering labs of USC. On the other hand, the digitization of a new test image was a more novel event. Although the first digital image was scanned in 1957, most test images at SIPI were reused from film and television.58 Digital image processing was mostly new and experimental, and its practitioners hoped that it would solve many of the challenges of compression, analysis, and transmission presented by existing video and image technologies. Moreover, image technologies have often served as useful demonstration sites for the potential of new computing technologies. And this was equally the case during the period that the Lena image appeared, as many of the early and foundational experiments in AI were built with the aim of teaching a machine to recognize images.59 As one practitioner noted in a history of this formative period in digital image work, “Almost as soon as digital computers became available, it was realized that they could be used to process and extract information from digitized images.”60

The history of SIPI runs parallel to both the rise of computer science and the establishment of the internet—in the form of its predecessor, ARPANET. ARPANET existed from 1966 to 1990 as a joint scientific and military project investigating the possibilities of distributed computer networks and packet-switching. It was funded by the US Department of Defense’s (DoD) Advanced Research Projects Agency (ARPA—hence, ARPANET), and led by one office in particular, the Information Processing Techniques Office (IPTO). The IPTO helped establish computer science as a discipline and provided the direction and funding for what would become the internet.61 That background uses a lot of acronyms to say one thing: the Lena image appeared during the earliest days of networked computing, when the technology was still in flux and its uses undetermined.

During ARPANET’s formative years, the research was led by Lawrence Roberts, who headed up the IPTO. Roberts, an electrical engineer, is renowned in the industry for his role in supervising the development of ARPANET and his early work on e-mail. But he had also cut his teeth as an engineer working on image processing in his graduate work at the Massachusetts Institute of Technology (MIT). The terms “Roberts gradient” and “Roberts Cross” are derived from his work and remain key concepts in computer vision and digital image processing. Strikingly, his master’s thesis, “Picture Coding Using Pseudo-Random Noise,” used several Playboy images in demonstrating a technique for sending television images over a digital network. The research also appeared as an article, including the Playboy images, in the journal IRE Transactions on Information Theory in 1962 (and is still available online).62 It continues to be widely cited in research and patent applications up to the present day.
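The Roberts cross is simple enough to sketch in a few lines. The following is an illustrative implementation, not Roberts's original code: a standard textbook formulation of the operator, written here in Python with NumPy, that approximates the gradient magnitude at each pixel from two diagonal differences over a 2×2 neighborhood.

```python
import numpy as np

def roberts_cross(image):
    """Gradient-magnitude edge map from the two 2x2 Roberts kernels,
    [[1, 0], [0, -1]] and [[0, 1], [-1, 0]]: each response is a
    diagonal difference across a 2x2 neighborhood."""
    img = image.astype(float)
    gx = img[:-1, :-1] - img[1:, 1:]   # response to [[1, 0], [0, -1]]
    gy = img[:-1, 1:] - img[1:, :-1]   # response to [[0, 1], [-1, 0]]
    return np.hypot(gx, gy)           # combined gradient magnitude

# A simple test pattern: a bright square on a dark field.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
edges = roberts_cross(img)  # responds only along the square's border
```

Run on a flat region, the operator returns zeros; along an intensity step it responds strongly, which is what makes it useful for edge detection.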

It was later revealed that the model in the Playboy images used by Roberts was sixteen when she posed for her nude photos—so she was only a child.63 As with the Lena image, Roberts cropped the nude Playboy images when he transformed them into test images; but unlike those who digitized the Lena image, he attributed the images to Playboy. Despite the revelation that his foundational study employed excerpts of underage pornography, no attempt was ever made to redact Roberts’s thesis, and the issue has never been addressed or even mentioned by Roberts or by any article that cites his research (as far as I have been able to find). His research was anthologized by SIPI’s founding director, William Pratt, who listed it in 1967 in an early bibliography of work on image compression.64 Additionally, Lawrence Roberts’s doctoral work was widely cited by the engineers at SIPI during the same years that the Lena image first appeared in their research.65

The appearance of Playboy at SIPI was not just predictable but a typical and indelible mark of the profession’s values and cultural outlook. It is significant that the leader of the ARPANET project, who was responsible for funding much of the research at SIPI, was himself a trailblazing researcher in digital image technology and had used Playboy as his own test material. Roberts’s study had already confirmed an important feature of a very new and experimental field: digital image processing would continue the pattern established in other image technologies by using images of women as test objects for the demonstration of professional skills and technical feats.

>>>

Within many of the most prominent institutions involved in the history of computing and networking, young engineers were practicing digitizing, analyzing, and transmitting images of nude women. Beyond USC and Lawrence Roberts’s work at MIT, engineers at Stanford had made the communally sanctioned objectification of women and the sexualization of image analysis part of professional practice. The Stanford Artificial Intelligence Lab (SAIL) is among the earliest and most storied centers of AI research in the postwar period. Established in the early 1960s by John McCarthy, who is credited with (among many other major achievements in the field) coining the term “artificial intelligence” (AI), it pioneered research in language processing and robotics. SAIL was folded into Stanford’s computer science department during the “AI Winter” of the 1980s and 1990s, a period of depressed investment and interest in AI research.

In 1991, past members of SAIL circulated a remembrance of the early years of the lab, titled “TAKE ME, I’M YOURS: The Autobiography of SAIL.” The remembrance is told from the perspective of a SAIL computer, a PDP-6, and focuses on notable moments in the lab’s history. At one point in the message, PDP-6 discusses its sexuality (equating time-sharing with promiscuity) and reminisces about a lab project in which students solicited a woman to be part of a film in which she would have sex with a computer. Having interviewed two volunteers and rejected one for being “too inhibited,” they conspired to film the other volunteer sexualizing the computer while other members of the lab secretly watched on a recently installed closed-circuit television (CCTV). As PDP-6 described it:

As you know, we timesharing computers are multi sexual––we get it on with dozens of people simultaneously. One of the more unusual interactions that I had was hatched by some students who were taking a course in abnormal psychology and needed a term project. They decided to make a film about a woman making it with a computer, so they advertised in the Stanford Daily for an “uninhibited female.” That was in the liberated early 70s and they got two applicants. Based on an interview, however, they decided that one of them was too inhibited.

They set up a filming session by telling the principal bureaucrat, Les Earnest, that I was going down for maintenance at midnight. As soon as he left, however, their budding starlet shed her clothes and began fondling my tape drives––as you know most filmmakers use the cliché of the rotating tape drives because they are some of my few visually moving parts.

Other students who were in on this conspiracy remained in other parts of my building, but I catered to their voyeuristic interests by turning one of my television cameras on the action so that they could see it all on their display terminals

After a number of boring shots of this young lady hanging on to me while I rotated, the filmmakers set up another shot using one of my experimental fingers. It consisted of an inflatable rubber widget that had the peculiar property that it curled when it was pressurized. I leave to your imagination how this implement was used in the film. Incidentally, the students reportedly received an “A” for their work.66

The “experimental fingers” mentioned were likely part of two SAIL projects on robotic arms—either the Stanford Hydraulic Arm or another implement called the “Orm” (Swedish for “snake”), which “featured 28 rubber sacks sandwiched between steel plates. By inflating various combinations of sacks, the arm would move.”67 Les Earnest, the SAIL lab manager mentioned in the reminiscence, is also the likely author of this document. The image stills from this episode are still available on a website devoted to the history of SAIL.68

What should we make of this episode? Some might treat it as a story about students in the Bay Area in the early 1970s engaging in a sexually provocative stunt for a psychology assignment, using their research into robotics and AI as the basis of a movie not atypical of the B-movies of the era. But there is another reading of this episode that we can see beneath the glibness of the PDP e-mail. We know that the images of the woman’s nude body were recorded by a PDP-10 in 4-bit grayscale and saved for (at least) the next fifty years. We know nothing about whether the woman consented to the lab’s clandestine observers, the recording of her image to disk, the storage of that image in their database, or its continued availability online. As such, this episode is one of many in the burgeoning field of computer science and engineering in which the bodies of women were instrumentalized in the demonstration of technology; here, it is not the Orm or the hydraulic arm that should draw our attention, but the CCTV, networked to computer monitors throughout the lab. We know that this episode took place on March 8, 1971, shortly after the terminals were equipped with the ability to receive television signals.

Thus, this e-mail, far from a mere remembrance of the glory days of AI research, is a reminiscence about the novel CCTV system and the secret filming of a naked woman that the system enabled. These moments of self-narration tie together multiple kinds of instrumentalization—that of new technology and that of a woman’s body. The episode highlights the ways that men articulate their technical achievements to homosocial desires. Like the narratives of the Lena image’s origins, resourcefulness in the lab was sexualized, and the feat of putting an image of an unwitting, naked woman onto the computer network was treated as both a cultural and a technical accomplishment.

Robin Lynch has recorded other similar episodes from this period. For instance, at Bell Labs in 1964, Kenneth Knowlton and Leon Harmon had a renowned dancer, Deborah Hay, pose for a nude photo, which they printed as a 12-foot-long bitmap mural that they posted on the office door of their manager. They were admonished for the prank but, much as with the Lena image, found themselves celebrated later when the image made its way into art exhibitions and reminiscences about the interconnected histories of art and computing. The image, known as Nude or Studies in Perception I, is often treated as “the first computer-made nude portrait.” As Lynch makes clear, however, every step of the process of producing Nude involved a deliberate suturing together of the performance of Hay, the instruments of their computers and scanners, and the visual reference points of classical nude portraiture.69

The professional context for the creation of the Lena image was, therefore, the well-practiced use of nude women as test subjects and the pervasive sexualization of digital image production as an emergent technical field. Through repetition and citation, these practices of objectification fostered homosocial bonds that connected the sexualized examination of women’s bodies to the professional measurement of their features. Digital image processing was a nascent discipline, but throughout many of its earliest and most prominent institutions, the instrumentalization of women’s bodies was a means of demonstrating the potential of new methods of seeing.

TANK, WOMAN, TERRITORY: THE INSTITUTIONAL CONTEXT OF SIPI

The Playboy image that USC engineers placed on the analog-to-digital scanner was not a random excerpt from the world of pop culture. Rather, the centerfold was a conspicuous sample of the SIPI engineers’ cultural milieu and a sign of the porous boundaries of the lab’s environment. This milieu was characterized by the kinds of image work already happening at USC, the precedents set by existing test images that used white women as prototypes, and an espoused desire to create a new kind of test image that reflected, through proxy logic, the cultural and technical aspirations of the SIPI engineers.

According to the institute’s own description of its history, “SIPI was one of the first research organizations in the world dedicated to image processing.”70 It was established in 1971 as the Image Processing Institute (IPI) using funding provided by a contract from the DoD and the IPTO, the organization headed up by Lawrence Roberts, and the office leading the ARPANET project.71 Prior to its foundation, many electrical engineers at USC had already worked with several branches of the military, conducting image processing work contracted by the US Air Force, the Army Research Office, and the Jet Propulsion Laboratory at the National Aeronautics and Space Administration (NASA)—and indeed, much of their research concerned more efficient ways of sending images from the Moon back to Earth. As William Pratt writes, the initial work at SIPI began “on a very modest scale, but the program increased in size and scope with the attendant international interest in the field.”72

According to SIPI’s current director, Richard Leahy, “Much of the early work at SIPI was on transform coding, now the basis of the [Joint Photographic Experts Group (JPEG) and Moving Picture Experts Group (MPEG)] standards for still and video image compression and transmission over the internet.”73 The institute, in other words, aligns its early and field-defining research into image coding with some of the most pervasive and recognizable standards for digital images today. Reports from the 1960s attest to Leahy’s claim. A representative report by the lab founders William Pratt and Harry Andrews describes the “classic problem” of digital image coding as “the search for a coding method which will minimize the number of code symbols required to describe an image.”74 To this end, SIPI engineers developed what they describe as a novel means of image communication: “whereby the two dimensional Fourier transform of an image is transmitted over a channel rather than the image itself.”75 When Pratt and Andrews refer to “the image itself,” they have in mind a situation like NASA’s Moon Surveyor missions, where the goal was to make the transmission of images from the Moon back to Earth more efficient. Transform coding changed how this happened. Instead of sending analog television signals from the Moon, images could be sampled, turned into bits, and reconstituted from the data back on Earth.
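The idea in that quotation, sending a transform of the image over the channel rather than the image itself, can be illustrated with a short sketch. The example below is my own toy illustration in Python with NumPy (not SIPI's actual coder): it takes the two-dimensional Fourier transform of a synthetic image, keeps only the largest coefficients as the "transmitted" data, and reconstructs the image with the inverse transform.

```python
import numpy as np

def transform_code(image, keep_fraction=0.05):
    """Toy transform coding: 'transmit' only the largest Fourier
    coefficients and reconstruct the image from them."""
    coeffs = np.fft.fft2(image)
    # Zero all but the largest-magnitude coefficients.
    threshold = np.quantile(np.abs(coeffs), 1 - keep_fraction)
    kept = np.where(np.abs(coeffs) >= threshold, coeffs, 0)
    # The receiver applies the inverse transform to the sparse data.
    return np.fft.ifft2(kept).real

# A smooth synthetic "image" whose energy sits in a few frequencies.
n = 64
x, y = np.meshgrid(np.arange(n) / n, np.arange(n) / n)
img = np.sin(2 * np.pi * x) + np.cos(2 * np.pi * y)

recon = transform_code(img)
max_error = np.abs(img - recon).max()  # tiny for this smooth image
```

For smooth images, most of the energy sits in a handful of coefficients, so the reconstruction is nearly exact; the JPEG standard that Leahy mentions applies the same logic with a discrete cosine transform over small blocks.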

In reports from the early 1970s, SIPI’s leadership explicated the work that the institute was meant to be doing for the DoD’s ARPA program. The reports that are contemporaneous with the Lena image refer to the institute’s work as, generally, “Image Processing Research,” which included every step from processing, transmitting, displaying, and analyzing to detecting and identifying images in digital form.76 Their research goals also indicated the ways that image processing could support the aims of the DoD, if only in the abstract. In particular, image transmission from the battlefield, combined with image analysis and detection, could be used to distinguish between enemies and nonenemies. It is worth highlighting that this research was taking place in the middle of the Vietnam War and at the height of the Cold War, and that these military priorities were reflected in the test images used in SIPI reports.

Ivan Sutherland, an influential computer scientist, pioneer of computer graphics, and another early director of the IPTO, recalled the use of test images during this period in work that he called “artificial intelligence” research:

[When] I was in ARPA, the Army had this set of tank and non-tank images, and one of the problems was, could you recognize tanks in aerial photographs? There was this wonderful set of tank and nontank images—I think there were a hundred images. Some of the tanks were half under a tree, and some of them were recognizable mostly because of the tracks, the trail that they leave behind. For 20 or 25 years, there has been the hope that some artificial intelligence program or vision program would be able to recognize tanks reliably.77

I love this passage for a bunch of reasons—I especially love the idea of the exclusive categories of “tank” and “nontank” images. But Sutherland’s memories of this data set are also revealing. They indicate some of the test images that SIPI engineers might be expected to use—and indeed, tank images and aerial photography reappear regularly in their research. His comments also signal the larger goals of image processing work in this period and its connection to early AI research.

Much of the early digital image processing research began as an attempt at character recognition—the idea of using a computer to process images that could be automatically “read” without help from a human.78 The goal at SIPI was to combine this work with the ambitious project of also transmitting compressed images, potentially across large territories or even from space. Whether the “tank” and “nontank” images that Sutherland refers to are the same ones that appear in SIPI reports is unclear. But the test images used at SIPI certainly speak to the interests of the institute’s government and military funders. The research was often caught between clearly military-oriented applications such as aerial surveillance and the more pedestrian uses of digital images for sending pictures of people. This awkward blending is visible in the test images from SIPI at that time. Figure 3.7 shows one example, a triptych where a white woman’s face is sandwiched between an image of a tank and an aerial surveillance image.

Figure 3.7

An artist’s rendering of a triptych band of test images from SIPI: a tank, “Girl,” and an aerial surveillance image of a territory surrounding water. It appears to be a port or naval base. This is an interpretation of the test image triptych found in William K. Pratt, USCIPI Report #660, 51. Image: R. R. Mulvin.

The image of the woman in this triptych was in wide circulation long before the Lena image and is often simply called “Girl.” It is actually a frame from an earlier (1966) test film produced by the Society of Motion Picture and Television Engineers (SMPTE). Frames from the SMPTE’s test film appear throughout SIPI’s tests, including other images of the same woman and a “Man” image.79 The test images at SIPI combined multiple image technologies—stitching together a new makeshift standard from the proxies of earlier technologies and the emerging applications of digital processing.

Enter, finally, the Lena image. The first time that the Lena image appears in the published research at SIPI, it is used in three different studies—suggesting its adoption as a common proxy across the lab.80 Although it was scanned in 1973, the image did not appear in a published SIPI report until 1975–1976.81 The first three published studies that use the image in this report are typical of SIPI research in this period, with the aim of combining the digital coding and transmission of images with the analysis and identification of their picture elements. In short, the work continues the project of combining image coding with AI and the hope that digital processing could lead to a system to automatically detect and identify picture content (like tanks!).

In addition to the sudden and widespread appearance of the Lena image in SIPI research, the lab’s work draws on a fairly stable set of test images, including the triptych of the tank, woman, and aerial surveillance images, often running the same techniques on both the triptych and the Lena image. Among the areas of research using these images were edge detection and salient feature extraction. Edge detection is a way of measuring discontinuities in image brightness, which allows you to outline the edges of shapes. If you were going to distinguish tanks from nontanks, edge detection would be a first step. Salient feature extraction is a technique for reducing the amount of information that you need to identify and describe a data set: how much of a face do you (or a computer) need to see to know that you are looking at a face and not a tank? These are both ways of looking at images and of training computers to look at images, and each transforms their formal components (e.g., luminosity, prominent features) into measurable data—rendering images of people and space as controllable material. The Lena image, like the images of tanks, other women, and territory, was among the material through which visual culture was becoming digitized and measurable. The desire for a “good image” meant a desire for something that could be predictably controlled.
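A toy version of edge detection, in the sense just described, makes the idea concrete. This sketch is my own illustration rather than SIPI's method, and the brightness threshold is an arbitrary assumption: it simply flags every pixel where brightness changes faster than the threshold.

```python
import numpy as np

def detect_edges(image, threshold):
    """Flag pixels where brightness changes sharply: a crude
    measure of the discontinuities that outline a shape."""
    gy, gx = np.gradient(image.astype(float))  # brightness change per axis
    magnitude = np.hypot(gx, gy)               # overall rate of change
    return magnitude > threshold

# A toy scene: a bright rectangular "vehicle" on a dark field.
scene = np.zeros((10, 10))
scene[3:7, 2:8] = 10.0
edges = detect_edges(scene, threshold=2.0)
# The flagged pixels trace the rectangle's outline, not its flat interior.
```

Thresholding the gradient magnitude is the simplest possible detector; it outlines the bright block against the dark field, which is the first step toward telling a tank-shaped region from its surroundings.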

To put a fine point on it, the test images being used at SIPI capture how the nascent field of image processing collapsed images of territories, feminized human faces, and enemy tanks into the same grammar of control. By applying the same techniques of image analysis to these three kinds of test image, side by side, techniques like edge detection and feature extraction were meant to transform images into measurable, identifiable data. Engineers were working with a stockpile of images oriented to military research goals and recycled from film and television standards. But in the genesis of the Lena image, they chose to use a new proxy for the world of images, and they chose a Playboy photograph as their stand-in.

Lawrence Roberts had already shown Playboy to be a viable source of test material, and existing test images like China Girls and Shirley cards echoed the notion that images of white women could be useful stand-ins. Choosing an image from which to extract features requires a familiarity with the features that one wishes to extract. It means being able to choose and desire an image that one can consume as an object. Tasked with training a computer to understand salient images, a homosocial and (almost) universally male group of engineers selected an artifact of American mainstream porn to train computers in a new science of recognition.

>>>

As historians of computing and networking have shown, much of the early history of the internet is concealed through secrecy, a lack of publicity surrounding the IPTO, and a lack of interest in its operations.82 This lack of traces reflects my own experience with this era as well. Janet Abbate’s history of the internet is a valuable resource, but it doesn’t mention SIPI.83 The piecemeal documentation of early ARPANET work makes it difficult to understand this period outside of the dominant narratives provided by its main architects in interviews and existing histories. Those narratives are characterized by the frequent claim that a series of high-risk decisions about packet switching, time-sharing, and distributed and open networks created the internet as we know it. It is difficult to test these claims with a limited set of documentary evidence from ARPA, the IPTO, and other involved institutions.

What do we make of SIPI’s work on ARPANET, in the midst of the Vietnam War? Looking to the historiography of the internet shows a denial of a connection between the two. Abbate writes:

One potential source of tension that does not seem to have arisen within the ARPANET community was the involvement of university researchers—many of them students—in a military project during the height of the Vietnam War. It helped that the network technology was not inherently destructive and had no immediate defense application.84

It is worth investigating the claim that both practitioners and historians of computing treat work on ARPANET as “not inherently destructive.” First, if this were entirely the case, then we might expect that work on ARPANET would be immune to the backlash against military research in universities during the Vietnam War. However, as Bob Kahn (one of IPTO’s directors in this period) says, the IPTO’s budget was suppressed in the mid-1970s due to the “Vietnam Syndrome,”85 meaning someone (if only the DoD) thought that the work wasn’t shielded from the war or from its blowback.

Second, to argue that “the network technology was not inherently destructive” is to take a narrow view of network technology. For starters, multiple members of the IPTO research team point to the IPTO’s involvement in crafting command-and-control technologies, with direct application to destructive activities.86 This includes the image processing work going on at SIPI and the use of AI to distinguish between tanks and nontanks—with direct implications for choosing bombing targets more efficiently.

Third, if we understand the work at SIPI as part of a larger trajectory of research into surveillance and camouflage—a historical struggle between hiding and detecting—then it is harder still to exceptionalize it as nondestructive.87 In the Vietnam War era, image processing became the newest technique of beating the enemy’s camouflage strategies; this destructive reality is materialized in the database of test images, including tank and nontank images, the Lena image, images of other women’s faces, and images of territories. The fantasy of digital image processing was one that combined an entire vision process, from the moment of observation through transmission, and up to the moment of identification and analysis. The dream is that to see is to know and to know is to control. Test images outline this process by standing in for the people, places, and things that could be seen, known, and controlled.

CONVERSIONS

The history of test images is one in which the cultural labor of models, acting as stand-ins, is leveraged by engineers, scientists, and technicians, who exercise their power to delegate stand-ins for the world of images out there. Test images function as a stable set of pregiven data for image professionals to use to demonstrate their aptitude and skills. In addition to being a pregiven set of data, test images are the basis of commonality and community; they embody a sameness that enables the measurement of difference.

To become an industry standard and a pregiven set of data, the Lena image underwent three kinds of conversion: from paper to pixels, from analog to digital, and from the standards of a soft-core porn magazine into an image standard. When asked, SIPI engineers say that they converted the Lena image from a centerfold into a digital image as a response to their work environment and a desire for a dynamic image with distinct formal properties on “good paper stock.”88 They characterize their work environment as boring and their labor as repetitive, and they claim to be “desperate for a new test image”89 and “tired of their stock of usual test images, dull stuff dating back to television standards work in the early 1960s.”90

We now know that their existing images were either recycled from previous standards or dictated by their research goals. As a matter of labor, the engineers responded to the repetitive use of their existing set of images with sheer boredom, and the Lena image appeared as a potential answer to that boredom. A welcome break from things like tank and surveyor images, it was a chance to make a new kind of test image. The Lena image was not a haphazard fluke of engineering: it was the result of a concerted effort to crystallize the existing and familiar standards of porn’s pictorial style in a new technology.

The Lena image is frequently lauded for its formal features—often in the process of disavowing desire for the woman it portrays. In the Jargon File (a glossary of computing terms first compiled and hosted on ARPANET in 1975), the image is described as having “interesting properties—its complex feathers, shadows, [and] smooth (but not flat) surfaces,” and these properties are regularly cited in technical justifications for the image’s digitization and its continued use today.91 The implicit argument goes as follows: it is not that the Lena image was simply a centerfold; it was also a particularly testable image that gave engineers a set of problems to solve, including complex surfaces, reflections, and overlapping textures. This logic repeats as new image techniques are modeled on the Lena image, even when those techniques were developed using a much larger set of test images.92

But the story of the Lena image’s appearance in the SIPI lab is about so much more. When someone is said to have “walked in with a recent issue of Playboy,” what we’re getting is in fact a story of the material portability of compressed data and the ways that data can travel. The history of the Lena image’s conversion, then, is a series of stories about how an image is transformed and converted in order to enable its circulation and to make it portable. If no one compresses the Lena centerfold by folding it inside the magazine, it can’t be carried around an office by hand; if it isn’t compressed digitally, it can’t be transmitted over a network; and if it isn’t cropped of its explicitly sexualized content, it can’t be used as a scientific instrument. It is all these acts, not just one in isolation, that lifted the Lena image from the private sphere of desire into the sphere of professional image analysis. Objects have to be made portable, but some objects can be moved more easily than others. It’s necessary to question why the Lena image moved so easily into the lab environment, through a new digital network, and into the pages of disciplinary journals. The ease with which it flowed from private domain to research network demonstrates that it already possessed some of the necessary affordances to move between environments as a ready-at-hand proxy for the world of images.

At the advent of networked computing, in a moment when engineers at USC were testing the potential of sending images as data, they chose to encode Playboy’s aesthetic template within their new medium. The centerfold was the most conspicuous section of an iconic magazine, chosen at a moment in which American porn was going mainstream. Not only does the Lena image appear in the highest-selling issue of Playboy ever, but that issue appeared only a few months after the release of Deep Throat, one of the first hard-core porn films to be seen by a significant portion of the American public.93 And, in 1973, the same year that USC engineers digitized the Lena image, the US Supreme Court delivered a decision in Miller v. California that rewrote the limits of acceptable pornography by redefining the meaning of obscenity. The court said that while obscenity had been defined as something “utterly without socially redeeming value,” it would henceforth be defined as anything lacking “serious literary, artistic, political, or scientific value.”94 The ruling in Miller actually broadened what could be considered obscene and was widely viewed as a reaction to the recent popularization of mainstream porn. The new definition would narrow what was acceptable and widen the ambit of the state to censor sexualized texts.

But the implications of this timing are significant. In the year of the Supreme Court’s ruling in Miller, the engineers at SIPI inadvertently helped Playboy overcome the new standard for obscenity by incorporating the magazine’s centerfold into the knowledge infrastructure of their profession; by doing so, they relicensed the use of porn—already established by Lawrence Roberts—and demonstrated its potential scientific value by converting it to a test image. By inscribing Playboy’s aesthetic standard into the prototype of the internet, SIPI engineers highlighted the image as a professional object of study and encoded the practice of reading pornographic images as a professional commitment.

The history of test images from photography, film, and television through to digital image processing shows that the instrumentalization of women’s bodies was part and parcel of creating a new image standard. The Lena image continues this patterned, historical recurrence and reencodes a feminized whiteness as the prototype of image technologies. In the 1970s, with an emerging prototype of what would become the internet, computer scientists and image engineers were hard at work perfecting the most efficient and dependable ways of sending a now-cleansed bit of porn over a computer network.

The origins of digital image processing were not inevitable. While the Lena image ascended as a constant against which the variabilities of different image techniques were tested, it was always superseded by a more forceful constant: a model of professional vision conditioned by the prototypical whiteness of test images and a field shaped by the pursuit of controlling space and bodies through optical capture. The researchers at SIPI often focused on new techniques for managing warfare, analyzing space, and classifying enemies. They frequently ran this line of study alongside the segmentation, identification, and analysis of women’s faces and bodies. Digital image processing is a continuation of a longer history that weds militarization and the control of women’s bodies, and the practices of professional vision traced by the Lena image force us to see these two forms of optical control as inseparable.95

To talk about the politics of the Lena image is to, by extension, talk about image standards like JPEG and MPEG that its use helped establish. The politics of test images compel us to ask, “Who is seeing?” “What is this a representation of?” “Who is it for?” and “Who gets to use it?” Image standards are shaped by testing regimes and the practices of professional vision, which are in turn shaped by the cultural milieus where professionals work. In chapter 4, as we follow how the Lena image circulated outside USC, the image’s history traces the contours of a new discipline. The Lena image was exchanged, cited, lauded, canonized, challenged, resuscitated, and finally abandoned, as its circulation continued to expose who had the power to choose proxy images, how those images were used, and which contexts those images sought to represent.

Notes

  1.     1.    See “The Lenna Story,” http://www.lenna.org/.

  2.     2.    This is the story as told to Peter Nowak in Sex, Bombs, and Burgers: How War, Pornography, and Fast Food Have Shaped Modern Technology (Guilford, CT: Rowman & Littlefield, 2011), 173 (emphasis added).

  3.     3.    Jamie Hutchinson, “Culture, Communication, and an Information Age Madonna,” IEEE Professional Communication Society Newsletter 45, no. 3 (2001): 1 (emphasis added).

  4.     4.    Susanna Paasonen, Kylie Jarrett, and Ben Light, NSFW: Sex, Humor, and Risk in Social Media (Cambridge, MA: MIT Press, 2019).

  5.     5.    Eve Kosofsky Sedgwick, Between Men: English Literature and Male Homosocial Desire (New York: Columbia University Press, 1985).

  6.     6.    Lauren Berlant, The Queen of America Goes to Washington City: Essays on Sex and Citizenship (Durham, NC: Duke University Press, 1997), 59.

  7.     7.    Here, I am adapting Simidele Dosekun’s “spectacular femininity” to highlight the conspicuous ways that heterosexual masculinity is performed and transformed into a spectacle for the consumption of other men. Simidele Dosekun, Fashioning Postfeminism: Spectacular Femininity and Transnational Culture (Urbana: University of Illinois Press, 2020).

  8.     8.    Nathan Ensmenger, The Computer Boys Take Over (Cambridge, MA: MIT Press, 2010); Nathan Ensmenger, “‘Beards, Sandals, and Other Signs of Rugged Individualism’: Masculine Culture within the Computing Professions,” Osiris 30, no. 1 (2015): 38–65.

  9.     9.    Jacqueline Wernimont, Numbered Lives: Life and Death in Quantum Media (Cambridge, MA: MIT Press, 2019); Paul Dourish, The Stuff of Bits: An Essay on the Materialities of Information (Cambridge, MA: MIT Press, 2017); Matthew Kirschenbaum, Mechanisms: New Media and the Forensic Imagination (Cambridge, MA: MIT Press, 2008).

  10.   10.    Lisa Gitelman and Virginia Jackson, “Introduction,” in “Raw Data” Is an Oxymoron, ed. Lisa Gitelman (Cambridge, MA: MIT Press, 2013), 3.

  11.   11.    Kathryn Henderson, “The Visual Culture of Engineers,” Sociological Review 42, no. 1 (1994): 196–218.

  12.   12.    Li Cornfeld, “Babes in Tech Land: Expo Labor as Capitalist Technology’s Erotic Body” Feminist Media Studies 18, no. 2 (2018): 205–220; Cait McKinney, Information Activism: A Queer History of Lesbian Media Technologies (Durham, NC: Duke University Press, 2020); Amy Adele Hasinoff, Sexting Panic: Rethinking Criminalization, Privacy, and Consent (Urbana: University of Illinois Press, 2015); Sharif Mowlabocus, Gaydar Culture (London: Ashgate/Routledge, 2010); Kate O’Riordan and David J Phillips, eds., Queer Online: Media Technology and Sexuality (Bern, Switzerland: Peter Lang, 2007).

  13.   13.    Hortense J. Spillers, “Mama’s Baby, Papa’s Maybe: An American Grammar Book,” Diacritics 17, no. 2 (1987): 67; Wendy Hui Kyong Chun, Control and Freedom: Power and Paranoia in the Age of Fiber Optics (Cambridge, MA: MIT Press, 2006); Safiya Umoja Noble, Algorithms of Oppression (New York: NYU Press, 2018); Lisa Nakamura, Digitizing Race: Visual Cultures of the Internet (Minneapolis: University of Minnesota Press, 2008).

  14.   14.    On the materiality of gendered power and its intersection with professionalization, see Cynthia Cockburn, “The Material of Male Power,” Feminist Review 9, no. 1 (1981): 41–58.

  15.   15.    Sara Ahmed, Queer Phenomenology: Orientations, Objects, Others (Durham, NC: Duke University Press, 2006), 40 (emphasis added).

  16.   16.    Paul Du Gay, Stuart Hall, Linda Janes, Anders Koed Madsen, Hugh Mackay, and Keith Negus, Doing Cultural Studies: The Story of the Sony Walkman (London: SAGE, 2013); Lorraine Daston, ed., Things That Talk (Brooklyn: Zone Books, 2004).

  17.   17.    Marilyn Strathern, Reproducing the Future: Anthropology, Kinship, and the New Reproductive Technologies (London: Routledge, 1992), 33.

  18.   18.    Charles Goodwin, “Professional Vision,” American Anthropologist 96, no. 3 (1994): 606.

  19.   19.    Goodwin, “Professional Vision,” 626.

  20.   20.    Goodwin’s work has traveled far and wide in the study of the embodied practices of professional vision. See, for instance, the work of Janet Vertesi, who documents the construction of professional vision within the Mars Rover program. She clarifies that “professional vision” is not only a matter of what one does with one’s eyes; rather, it is a range of interrelated practices through which meaning is created through learned skill and embodied technique. Seeing Like a Rover: How Robots, Teams, and Images Craft Knowledge of Mars (Chicago: University of Chicago Press, 2015).

  21.   21.    Jacob Gaboury, “Image Objects: An Archaeology of 3D Computer Graphics, 1965–1979,” PhD dissertation, New York University, 2015; Ann-Sophie Lehmann, “Taking the Lid off the Utah Teapot towards a Material Analysis of Computer Graphics,” Zeitschrift für Medien-und Kulturforschung, no. 1 (2012): 169–184; Michael Baxandall, Painting and Experience in Fifteenth-Century Italy: A Primer in the Social History of Pictorial Style (Oxford: Oxford University Press, 1988).

  22.   22.    Here is an elegant example of how this happens: The Lena image has two Wikipedia pages, one for the centerfold and one for the test image. Depending on the day and the backstage debates among editors, the image will sometimes appear on the test image page (because it is fair use) but not on the centerfold page. See https://en.wikipedia.org/wiki/Lenna.

  23.   23.    Jonathan Sterne MP3: The Meaning of a Format (Durham, NC: Duke University Press, 2012); Dourish, The Stuff of Bits.

  24.   24.    Dylan Mulvin and Jonathan Sterne, “Scenes from an Imaginary Country: Test Images and the American Color Television Standard,” Television & New Media 17, no. 1 (2016): 21–43.

  25.   25.    David Salomon, Data Compression: The Complete Reference (London: Springer, 2007), 517.

  26.   26.    Shea Swauger, “Software That Monitors Students during tests Perpetuates Inequality and Violates Their Privacy,” MIT Technology Review (August 7, 2020), https://www.technologyreview.com/2020/08/07/1006132/software-algorithms-proctoring-online-tests-ai-ethics/.

  27.   27.    The experience, and explanation for this conflict, is described at-length by Sasha Constanza-Chock in a description of “traveling while trans.” Sasha Costanza-Chock, Design Justice: Community-Led Practices to Build the Worlds We Need (Cambridge, MA: MIT Press, 2020).

  28.   28.    David G. Lowe “Object Recognition from Local Scale-Invariant Features,” Proceedings of the Seventh IEEE International Conference on Computer Vision, 2 (1999): 1150–1157.

  29.   29.    Nick Seaver, “Algorithms as Culture: Some Tactics for the Ethnography of Algorithmic Systems,” Big Data & Society 4, no. 2 (2017): https://doi.org/10.1177/2053951717738104.

  30.   30.    Ruha Benjamin, Race after Technology: Abolitionist Tools for the New Jim Code (Cambridge, UK: Polity, 2019).

  31.   31.    Simone Browne, Dark Matters (Durham, NC: Duke University Press, 2015); Benjamin, Race after Technology; Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Conference on Fairness, Accountability and Transparency, PMLR 81 (2018): 77–91. In addition, see the work of the Algorithmic Justice League (https://www.ajl.org).

  32.   32.    Browne, Dark Matters.

  33.   33.    Stuart Hall, “The Whites of Their Eyes: Racist Ideologies and the Media,” in Gender, Race, and Class in Media, eds. Gail Dines and Jean M. Humez (Thousand Oaks, CA: SAGE, 1995), 18–22; Meredith Broussard, Artificial Unintelligence: How Computers Misunderstand the World (Cambridge, MA: MIT Press, 2018); Nakamura, Digitizing Race.

  34.   34.    Lewis Gordon, “Is the Human a Teleological Suspension of Man?” in After Man, towards the Human: Critical Essays on Sylvia Wynter, ed. Anthony Bogues (Kingston, Jamaica: Ian Randle, 2006), 242.

  35.   35.    Richard Dyer, White: Essays on Race and Culture (London: Routledge, 1997), 103.

  36.   36.    Richard Dyer, “White,” Screen 29 (Fall 1988): 44.

  37.   37.    Nakamura, Digitizing Race.

  38.   38.    Benjamin Wilson, Judy Hoffman, and Jamie Morgenstern, “Predictive Inequity in Object Detection,” arXiv preprint:1902.11097 (2019), 2–3.

  39.   39.    John R. Feiner, John W. Severinghaus, and Philip E. Bickler, “Dark Skin Decreases the Accuracy of Pulse Oximeters at Low Oxygen Saturation: The Effects of Oximeter Probe Type and Gender,” Anesthesia & Analgesia 105, no. 6 (2007): S18–S23.

  40.   40.    Anna, C. Shcherbina, et al., “Accuracy in Wrist-Worn, Sensor-Based Measurements of Heart Rate and Energy Expenditure in a Diverse Cohort,” Journal of Personalized Medicine 7, no. 2 (2017): 3–14.

  41.   41.    For more discussion of the racial coding of technology and its violent outcomes, see Benjamin, Race after Technology.

  42.   42.    Lorna Roth, “Looking at Shirley, the Ultimate Norm: Colour Balance, Image Technologies, and Cognitive Equity,” Canadian Journal of Communication 34, no. 1 (2009): 111–136; Mary Ann Doane, “Screening the Avant-Garde Face,” in The Question of Gender, eds. Judith Butler and Elizabeth Weed (Bloomington: Indiana University Press, 2011): 206–229; Genevieve Yue, “The China Girl on the Margins of Film,” October 153 (2015): 96–116; Mulvin and Sterne, “Scenes from an Imaginary Country”; Susan Murray, Bright Signals (Durham, NC: Duke University Press, 2018).

  43.   43.    Yue, “The China Girl.”

  44.   44.    Yue, “The China Girl,” 99.

  45.   45.    Coincidentally, Lena Forsén worked as a Shirley model as well as posing for Playboy. Linda Kinstler, “Finding Lena, the Patron Saint of JPEGs,” Wired (January 31, 2019), https://www.wired.com/story/finding-lena-the-patron-saint-of-jpegs/.

  46.   46.    Roth, “Looking at Shirley,” 112.

  47.   47.    Murray, Bright Signals; Jonathan Sterne and Dylan Mulvin, “The Low Acuity for Blue: Perceptual Technics and American Color Television,” Journal of Visual Culture 13, no. 2 (2014): 118–138.

  48.   48.    Mulvin and Sterne, “Scenes from an Imaginary Country.”

  49.   49.    The image, along with this quote, can be found in Gordon Comstock, “Jennifer in Paradise: The Story of the First Photoshopped Image,” The Guardian (June 13, 2014), https://www.theguardian.com/artanddesign/photography-blog/2014/jun/13/photoshop-first-image-jennifer-in-paradise-photography-artefact-knoll-dullaart.

  50.   50.    See also Philip W. Sewell, Television in the Age of Radio: Modernity, Imagination, and the Making of a Medium (New Brunswick, NJ: Rutgers University Press, 2014); Brian Winston, Technologies of Seeing: Photography, Cinematography and Television (London: British Film Institute, 1997); Murray, Bright Signals.

  51.   51.    Dyer, White, 94.

  52.   52.    Roth, Looking at Shirley; Dyer, White.

  53.   53.    Roth, Looking at Shirley, 121–122 (emphasis in original).

  54.   54.    Personal correspondence from David Myers.

  55.   55.    Anna Lauren Hoffman, “Terms of Inclusion: Data, Discourse, Violence,” New Media & Society (September 2020), https://doi.org/10.1177/1461444820958725.

  56.   56.    bell hooks, “Eating the Other: Desire and Resistance,” in Media and Cultural Studies: Keyworks, eds. Meenakshi Gigi Durham and Douglas Kellner (Malden, MA: Wiley, 2012), 308.

  57.   57.    The images discussed here only scratch the surface of the ways that images are used as standards and stand-ins, which could be extended to include everything from ambiguous images used in psychological testing to the stock image industry; see Peter Galison, “Image of Self,” in Things That Talk (Brooklyn: Zone Books), 257–294; and Paul Frosh, The Image Factory (Oxford, UK: Berg Publishers, 2003), respectively.

  58.   58.    See A Century of Excellence in Measurements, Standards, and Technology: A Chronicle of Selected NBS/NIST Publications 1901–2000, ed. David R. Lide (Washington, DC: US Department of Congress, 2001).

  59.   59.    Frank Rosenblatt, “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain,” Psychological Review 65, no. 6 (1958): 386–408.

  60.   60.    Azriel Rosenfeld, “From Image Analysis to Computer Vision: An Annotated Bibliography, 1955–1979,” Computer Vision and Image Understanding 84, no. 2 (2001): 298.

  61.   61.    Janet Abbate, Inventing the Internet (Cambridge, MA: MIT Press, 1999), 36; Lisa Gitelman, Always Already New (Cambridge, MA: MIT Press, 2006); Fred Turner, From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism (Chicago: University of Chicago Press, 2006).

  62.   62.    Lawrence G. Roberts, “Picture Coding Using Pseudo-Random Noise,” IEEE Transactions on Information Theory 8, no. 2 (1962): 145–154.

  63.   63.    Because she was a child when she posed for these images, I am not naming her in this work or reproducing the images here.

  64.   64.    William K. Pratt, “A Bibliography on Television Bandwidth Reduction Studies,” IEEE Transactions on Information Theory 13, no. 1 (1967): 114–115.

  65.   65.    Lawrence G. Roberts, Machine Perception of Three-Dimensional Solids (New York: Garland Publishing, 1980).

  66.   66.    “TAKE ME, I’M YOURS: The Autobiography of SAIL,” http://infolab.stanford.edu/pub/voy/museum/pictures/AIlab/SailFarewell.html. With thanks to Nathan Ensmenger for making me aware of this letter’s existence.

  67.    Chris Garcia, “Robots Are a Few of My Favorite Things,” Computer History Museum, June 17, 2015, https://computerhistory.org/blog/robots-are-a-few-of-my-favorite-things-by-chris-garcia/?key=robots-are-a-few-of-my-favorite-things-by-chris-garcia.

  68.    Because this reminiscence suggests that the model did not consent to being monitored by other men throughout the lab on a CCTV system, I am choosing not to provide a link to the images here.

  69.    Robin Lynch, “Man Scans: The Matter of Expertise in Art and Technology Histories,” RACAR (Spring 2021, forthcoming).

  70.    “About SIPI,” https://minghsiehece.usc.edu/groups-and-institutes/sipi/about/.

  71.    The IPTO was founded in 1962, at which point ARPA became a prime funder of computer science in the United States. SIPI, then called “USC-IPI,” was funded by Contract number F08606-72-C-0008, Order number 1706 with ARPA’s IPTO. As Amy Slaton and Janet Abbate note, the ARPANET node at USC, housed at the nearby Information Sciences Institute, was the “biggest and most heavily used ARPANET site.” Amy Slaton and Janet Abbate, “The Hidden Lives of Standards: Technical Prescriptions and the Transformation of Work in America,” in Technologies of Power, eds. Michael Thad Allen and Gabrielle Hecht (Cambridge, MA: MIT Press, 2001), 131.

  72.    William K. Pratt, Digital Image Processing: PIKS Scientific Inside (Hoboken, NJ: Wiley-Interscience, 2007), xvii.

  73.    “About SIPI.”

  74.    William K. Pratt and Harry C. Andrews, Transform Processing and Coding of Images (Los Angeles: SIPI, 1969), 1.

  75.    Pratt and Andrews, Transform Processing.

  76.    William K. Pratt, USCEE Report #411: Semi-annual Technical Report Covering Research Activity during the Period 3 August 1971 to 29 February 1972 (Los Angeles: SIPI, February 1972), i.

  77.    Ivan Sutherland, “Oral History Interview with Ivan Sutherland” (Minneapolis: Charles Babbage Institute, University of Minnesota Digital Conservancy, 1989), http://purl.umn.edu/107642 (emphasis added).

  78.    A Century of Excellence in Measurements.

  79.    The “Girl” image still appears in recent editions of Pratt’s Digital Image Processing and Introduction to Digital Image Processing (Boca Raton, FL: Taylor & Francis, 2014).

  80.    William Pratt, USCIPI Report #660: Semi-annual Technical Report Covering Research Activity during the Period 1 September 1975 to 31 March 1976 (Los Angeles: SIPI, March 1976).

  81.    This is as far as I can determine from reading every digitized SIPI report. It’s possible that there are unavailable reports that would predate this publication.

  82.    Gitelman, Always Already New, 97; on the historiography of computing and the dominance of a “Silicon Valley Mythology,” see Joy Lisi Rankin, A People’s History of Computing in the United States (Cambridge, MA: Harvard University Press, 2018), 2.

  83.    Abbate, Inventing the Internet.

  84.    Abbate, Inventing the Internet, 175 (emphasis added).

  85.    Robert E. Kahn, “Oral History Interview with Robert E. Kahn” (Minneapolis: Charles Babbage Institute, University of Minnesota Digital Conservancy, 1989), http://purl.umn.edu/107380.

  86.    As Abbate notes in Inventing the Internet, IPTO leadership was very adept at construing a military justification for their work by often recharacterizing what was “research” and what was “development,” depending on what they thought Congress wanted to hear.

  87.    Hanna Rose Shell, Hide and Seek: Camouflage, Photography, and the Media of Reconnaissance (New York: Zone Books, 2012).

  88.    Nowak, Sex, 173.

  89.    Nowak, Sex, 173.

  90.    Hutchinson, “Culture, Communication, and an Information Age Madonna,” 1.

  91.    The full entry is located at “Lenna,” The Jargon File, http://www.catb.org/jargon/html/L/lenna.html.

  92.    For instance, in 2012, researchers in Singapore received widespread attention for printing a copy of the Lena image that measured only 50 micrometers across, the smallest image ever printed. “Playboy Centrefold Photo Shrunk to Width of Human Hair,” http://www.bbc.com/news/technology-19260550.

  93.    Linda Williams, Hard Core: Power, Pleasure, and the “Frenzy of the Visible” (Berkeley: University of California Press, 1989).

  94.    US Supreme Court, Miller v. California, 413 U.S. 15 (June 21, 1973).

  95.    Cynthia Enloe, Maneuvers: The International Politics of Militarizing Women’s Lives (Berkeley: University of California Press, 2000).