3
The Porous Psyche
Brain watching … has made your mind, inner thoughts, political opinions, frustrations (including the sexual), aspirations—what we commonly call personality—the raw material of a humming, seemingly insatiable American industry.
—MARTIN GROSS,
The Brain Watchers, 1962
In 1958, with Joseph McCarthy’s red-baiting a fresh memory, the political journalist and former Communist sympathizer Richard Rovere reflected on the state of his fellow citizens’ privacy. In a wide-ranging essay for the American Scholar, Rovere called attention to wiretapping, bugging, and uses of state power that accompanied an age of heightened national security. But he also cataloged a surprisingly varied and seemingly more trivial set of intrusions to which Americans were subject: television cameras that tracked shoppers in grocery stores; on-the-job inquiries into employees’ drinking habits; the prying of behavioral scientists but also of neighbors; the work of professional social workers as well as volunteer organizations; even the sights and sounds of passersby. Invoking Louis Brandeis, both his 1890 essay and his dissent in the 1928 wiretapping case, Rovere called the “right to be let alone” unique in that “it can be denied us by the powerless as well as by the powerful—by a teen-ager with a portable radio as well as by a servant of the law armed with a subpoena.”
Rovere reflected that the latter, official kind of privacy violation might well be reined in by legislation or public policy. But the other sort was more nettlesome, tied as it was to “the growing size and complexity of our society” and involving rights of speech, press, and inquiry. Even if legal abuses—easy to conjure up in 1958—were curbed, it would leave “all those invasions that are the work not of the police power, but of other public authorities and of a multitude of private ones.” What exactly was the nature of these “private” invasions? Rovere ticked off an illustrative list: “A newspaper reporter asks an impertinent personal question; the prospective employer of a friend wishes to know whether the friend has a happy sex life; a motivational researcher wishes to know what we have against Brand X deodorant; a magazine wishing to lure more advertisers asks us to fill out a questionnaire on our social, financial and intellectual status.” Transgressions of the intimate realm, that is, were as much the work of the society and the citizenry as the state. Far from trivial, the persistent prying of a knowing society profoundly shaped the degree to which individuals could move through that society undisturbed and undisclosed. Rovere concluded, “My privacy can be invaded by a ringing telephone as well as by a tapped one. It can be invaded by an insistent community that seeks to shame me into getting up off my haunches to do something for the P.T.A. or town improvement or the American Civil Liberties Union.” The right to be let alone, he declared, “is a right I may cherish and from time to time invoke, but it is not a right favored by the conditions of the life I lead.”1 That the ACLU, the leading defender of civil liberties in the United States, appeared on this list indicates just how all-encompassing the invasions of citizens’ solitude could appear by the late 1950s.
Rovere’s meditations capture a paradox of the early Cold War era. Potential threats to citizens’ privacy from the national security state, in construction since the turn of the century and fortified by military conflicts around the globe, were real and well known. The tools of espionage and surveillance that Woodrow Wilson had seized during World War I were now state of the art, a “sub rosa matrix that honeycombed U.S. society with active informers, secretive civilian organizations, and government counterintelligence agencies.”2 A new kind of twilight conflict with the Soviet Union, coming immediately on the heels of World War II, meant that the nation remained, seemingly permanently, poised for war.3 The “culture of secrecy” that developed on both sides of the superpower divide altered the relationship of the U.S. government to its own people.4 Although the state kept more secrets in this era than in the past—cloaking a range of national security actions, including the atomic weapons program—it increasingly distrusted them in its citizens.5 As had been the case in Wilson’s day, vigilance in protecting a “free society” was turned inward as well as outward. By the 1950s, the House Un-American Activities Committee hearings, federal and state loyalty programs, the Smith and McCarran Internal Security Acts, the tapping of citizens’ phones, and extensive FBI dossier keeping were ample evidence of a government empowered to conduct domestic surveillance.6 Authorities’ attempt to root out subversives affected the personal and professional lives of suspected Communists, but also of progressives, labor union members, sexual minorities, and civil rights activists, along with their families and associates.7
Arguably, the perils posed to individual privacy by the U.S. state and its agencies ought to have overshadowed all others. Yet the focus of much public commentary was elsewhere. A vocal segment of the population turned its attention instead to the subtle pressures on the person flowing from modern social organization and, indeed, the surrounding culture itself: “policemen” to be sure, but also “prying acquaintances, sociological field workers, and psychoanalysts.”8 For these observers, the imagined threat to citizens’ sovereignty and solitude was neither the Cold War enemy nor the domestic state forged in its image. It was, rather, dominant American values and modes of living. And it brought into focus a host of daily trespasses by private citizens—whether marketers, teachers, employers, or neighbors. Together, they comprised, in critic Myron Brenton’s memorable phrase, “Big Brother in his civilian clothes.”9
Why, in an age of alarming infringements on civil liberties, should so many have worried about matters as mundane as a ringing telephone, an “impertinent” question on a job application, or an “insistent community”? Certainly, government surveillance, if more visible than it had been in earlier decades, was simultaneously more difficult to challenge in the cautious political climate of the 1950s. Too, most citizens did not consider themselves to be direct targets of national security measures and, as a consequence, worried little about their implications. More critical than either of these factors, however, was what was experienced as a sea change in the prospects for personal autonomy in the decades following World War II. Citizens’ entanglement with the institutions, gatekeepers, and norms of their society seemed indicative of a peculiarly modern form of unfreedom—a coercion that flowed as much from the encouraging tones of experts as the explicit control of official authorities.
Across the decade of the 1950s and into the first half of the 1960s in the United States, we can track a blossoming concern with the vanishing boundary between the self and the social world. It was a concern at once abstract and palpable, hard to pin down yet clearly felt. Social critics may have spied it first, but other Americans identified it in their own fashion. Whereas public discussions about privacy had up to that point focused on those prying into citizens’ affairs, in the 1950s they fastened on probes into the personal interior: the mind, emotions, thoughts, and psyche. This was not the late nineteenth-century concern about damage to reputation or even to “personality.” Nor was it a concern about the state administering the outer traces of individual identity, as in earlier twentieth-century controversies over fingerprinting and numbering. The puzzle of postwar privacy, as well as Cold War-era individualism, was that the person herself seemed porous, her perimeter unfixed, her very being improperly inhabited by the larger society. This was a new stage in Americans’ thinking about the known citizen, and it would make something akin to psychological privacy both an urgent problem and an elusive goal.
Incursions into the Interior
Threats to the “inviolate personality” had of course been the catalyst for modern privacy claims. The actions of intrusive journalists, advertisers, and photographers in the late nineteenth century were not, however, imagined actually to infiltrate that personality or to be capable of invading one’s psyche. Brandeis would later speculate that “advances in psychic and related sciences may bring means of exploring unexpressed beliefs, thoughts and emotions.”10 But this was not a live fear for Americans until the mid-twentieth century, when suddenly, it seemed, a host of parties sought to know citizens more thoroughly, inside and out, both for their own benefit and the good of the society. Loyalty boards—the infamous House Un-American Activities Committee (HUAC) and its state-level analogues—were the most prominent of these, their probes into citizens’ associations, beliefs, and histories attracting wide publicity. But the same impulse stimulated the growth of psychoanalysis, aptitude and personality testing, and motivational research in the postwar period. These practices did not simply attempt to render individuals intelligible; they attempted to get inside people’s heads. Citizens too sought a deeper knowledge of their psychological interiors in this era. But the prospect of one’s “inviolate personality” being tested, dissected, and revealed by an outside party could be unnerving. As such, the postwar person faced invasions right at the core of the self.
The incursions of a yet-more-knowing society accumulated in disparate precincts of American life during the late 1940s and 1950s. Only at the close of this period would they add up to a clear consensus of privacy imperiled. Complicating and perhaps delaying this analysis was the fact that citizens had themselves invited the intruders in. Postwar practices in business management and residential building, in selling goods and schooling children, hinged on more intense scrutiny of citizens’ values, beliefs, and behaviors. Yet—typically framed as benefiting consumers or employers or parents—they were installed without much public notice or protest. Likewise, middle-class Americans by the 1950s avidly pursued self-knowledge through therapy sessions and advice columns. Only gradually did they come to worry about the use of such material in others’ hands, grasping that psychological inquiry could be a route both for discovering the self and for infringing on it. One critic posed this as the chief dilemma of the “modern man,” a figure who allegedly thrilled to scientific “probes into the mind,” yet also resented them in the name of “his own little-remaining privacy.”11
A quickening sensibility around psychological privacy has been difficult to recognize due to the strong Cold War frame that dominates characterizations of the era. The postwar United States has been portrayed as indelibly imprinted by the gloomy geopolitics of the day, suffused by atomic fear and anticommunist hysteria.12 Its political culture has, perhaps too easily, been depicted as a hothouse of conformity, generated on the one hand by the Soviet threat and on the other by the strictures of the homegrown security state. Backyard fallout shelters and civilian defense measures, anti-fluoridation campaigns and renewed investments in the nuclear family, have all been treated as displacements of the superpower struggle.13 This has not left much room for the political and even philosophical questions that citizens posed, more insistently as the years passed, about the nature of their “interior” privacy.
Certainly, the terms of the Cold War surfaced regularly in postwar public life. The geopolitical struggle animated partisan logics, nurturing postwar conservatism and centrist liberalism. Yet historians have perhaps overstated the centrality of totalitarianism, the Soviet enemy, and even Communist subversion to public consciousness. Contemporary sociologist Samuel Stouffer instead emphasized how remote these issues were to the lives of most Americans. “The internal Communist threat,” he explained, “is not directly felt as personal. It is something one reads about and talks about and even sometimes gets angry about.” But he dismissed as “nonsense” the “picture of the average American as a person with the jitters, trembling lest he find a Red under the bed.” Stouffer’s detailed examination of a national cross-section of respondents, conducted at the height of the Red Scare in 1954, could not contain its surprise at how little Americans took Cold War concerns as their own “in spite of all the headlines and radio and television stimuli.” Published the following year, his Communism, Conformity, and Civil Liberties found that, unlike elite opinion leaders, the fraction of ordinary citizens voicing distress about the Communist threat was, “even by the most generous interpretation of occasionally ambiguous responses, less than 1%!”14
Even if Stouffer seriously underestimated or missed Americans’ unarticulated fears about Communism, it is fair to say that the issues that preoccupied postwar foreign policy elites, pundits, and politicians were not uppermost on the minds of most citizens. Still, the Cold War helped give words to citizens’ quite personal disquiet about their own society and the place of the private person in it. In official understandings, after all, the United States and the Soviet bloc were not simply enemies but polar opposites, with one of the primary points of contrast the value placed on privacy. Unlike Soviet citizens, Americans were known to honor the private sphere. This was the foundation of their rights as liberal-democratic subjects and a domain where the government, theoretically anyway, did not reach. The sovereignty of the private home and the belief that the individual citizen stood apart from, and before, the state were founding orthodoxies. Communists, by contrast, were thought to sacrifice their personal lives and their “most private selves” to “party discipline, including their decisions about love and marriage, childbearing and child rearing.”15 Already infiltrated by the state, Soviet citizens possessed no separate, inner, private realm to speak of.16
Yet, by the mid-1950s, a certain isomorphism between the contemporary United States and USSR informed many discussions of American society, with life in the Soviet Union regularly invoked as a mirror to developments at home. The effect sometimes was to sharpen the contrast. At other times a surprising commonality between “us” and the enemy gestured to parallel forces at work in the two societies.17 Some operated at the level of the state—and pointed directly to the links between a knowing society and an authoritarian one. From the vantage point of those charged with the nation’s security, the risks inherent in A-bombs and subversive activity explained the need to know, test, and vet people as thoroughly as possible. But the inquisitorial procedures of the House Un-American Activities Committee, the use of political informants, the state controls over information and the press, and the policing of dissent, all in the name of staunching the Communist threat, led some to ask: Was the United States approximating its totalitarian foe in the effort to contain it? As the Republican senator John Bricker put it, “I cannot believe that the road to freedom is one which requires us to adopt the methods of our potential enemies.”18 Similar sorts of blurring could be found in the daily “conditions of life,” to borrow Rovere’s phrase: the felt lack of privacy from one’s fellow countrymen on both sides of the Iron Curtain, as well as the discomfitingly similar techniques of the Communist “brainwasher” and the Madison Avenue “persuader.” As such, heavily freighted geopolitical categories helped to pinpoint unease with developments reshaping postwar America.
A knowing society provoked this unease even as it ministered to it. The proceedings of loyalty boards ran roughshod over some citizens’ liberties in order to reassure others that they were safe and secure. The powerful norms of middle-class suburban and corporate life offered guidance and belonging to those willing to live by them, but punished non-conformists. The interventions of experts in the applied human sciences were welcomed by some and considered harmful meddling by others. All of these developments raised questions about the proper boundary between the private citizen and the surrounding culture. Together, they undercut prized assumptions about the space for freely chosen action in American life. Soviet citizens, in thrall to ideology or state socialism, were thought to have forfeited this space, whether willingly or unwillingly. Americans appeared to be trading it away unthinkingly, ceding precious dimensions of their private existence for the sake of individual comfort and security—or, put more generously, economic and social progress.
Underlying this concern about the collapsing border between self and society was the intimate entwining of psychological discourse and public culture at mid-century, what one scholar calls a “watershed in the history of the exposure of Americans to psychological practice.”19 Psychologists had been recruited into the war effort in large numbers, and their confident advance out of the clinic continued during the Cold War. The rising star of the behavioral sciences was apparent in the military funding for studies of hypnosis and interpersonal influence, as well as the techniques of counterinsurgency and psy-ops.20 But psychology’s applications extended far beyond statecraft. With the passage of the National Mental Health Act of 1946 and individual therapy reaching new heights of popularity in the mid-1950s, its explanatory capacity was expanding dramatically.21 Therapeutic language coursed through public culture, even giving a name to the era. The “Age of Anxiety” was both the title that W. H. Auden gave to a book-length poem published in 1947 and a phrase that the historian Arthur Schlesinger used to title the first chapter of his defense of New Deal-style liberalism in an age of communism and fascism, The Vital Center (1949).22 Psychological frameworks were energetically applied to problems ranging from the persistence of racism to the roots of McCarthyism. Behavioral scientists’ embrace of psychoanalysis and the unconscious “as a tool for deciphering political behavior” meant that Communist Party membership could itself be figured as a mental disorder, as were garden-variety forms of political dissent.23
A host of professionals wielding expertise in human behavior and motivation were the tangible sign of this new order: psychotherapists and psychologists, but also marketers, advertisers, motivational researchers, corporate managers, personnel officers, school counselors, and personality testers. Formally trained in psychological science or not, they borrowed liberally from its insights. So did their clients and consumers. Despite its reliance on intimate intrusions, psychological expertise was not an unwelcome intruder in the postwar United States. On the contrary, middle-class Americans in the 1950s sought out psychological self-knowledge in great numbers, divulging their anxieties and their secrets to marriage counselors and psychotherapists. Those undergoing new techniques of family therapy allowed experts into their most private relationships, some even via one-way mirrors or filmed observation.24 Major newspapers discovered that advice columns with titles like “The Worry Clinic” and “Let’s Explore Your Mind” were indispensable to circulation, propelling a small industry of 10- to 15-cent self-help pamphlets.25 Citizens conversed in a newfound psychological key, influenced by popular Freudian thought that treated self-disclosure as an important pathway to self-knowledge.26
Even this, perhaps, understates the extent to which psychological practices had permeated American culture by mid-century. Life magazine could proclaim in a new series of 1957 that U.S. citizens lived—willingly, it seemed—in “the Age of Psychology,” the “science of human behavior” having thoroughly revolutionized daily life. As its first installment had it, psychological insights now informed the magazine advertisements and road signs people saw, the advice on family and marital relations they imbibed, the decisions corporations made, the evaluations school counselors issued, and even the news stories journalists chose to broadcast. Although psychologists had once been interested primarily in building their apparatus of facts and theories, the Life essay claimed, the majority now attempted to apply their knowledge “to help people live happier and more efficient lives.” Fully half of the 16,000 members of the American Psychological Association, the author noted, had moved out of teaching and into other realms: “as personnel men and efficiency experts for industry; as vocational counselors in colleges, the Veterans Administration and the U.S. Employment Service; as designers of tests for the Army; as counselors on children’s problems in public and private schools; as pollsters of public opinion.”27
This burgeoning corps of professionals was widely seen as a response to the “stresses placed upon the individual in an industrialized, urban environment.”28 The conjunction of expansive psychological expertise and national security imperatives in the postwar decades would however reinvigorate and politicize public debates about the known citizen. Mentally and emotionally secure individuals, some argued, were the building blocks of a democratic society, and therefore of central interest to the state.29 Private selves might properly be public concerns. But heightened devotion to the psychological self also led to efforts to identify the conditions supporting its healthy development. As historian Jamie Cohen-Cole has shown, expert prescriptions about the importance of tolerance, autonomy, and creativity came to constitute the ideal citizen and modal American in this period.30 This raised the stakes on the matter of psychological freedom. State or societal forces that impinged on that freedom—including attempts to know or sway the inner person—could come in for fierce condemnation as undemocratic, even totalitarian, in these years. Official propaganda and subtle social norms could both be reimagined as trespassing on an invisible yet essential personal boundary.
As citizens took up questions of internal freedom and psychic privacy, they turned their attention to an unlikely set of Cold War invaders: those modern experts who had aided and abetted the growth of a knowing society. The new status of the psychological sciences in American life, many recognized, could both reinforce and undermine individuality. What implicated its experts in debates over the boundaries of privacy was their capacious sense of what was knowable about the person, as well as what might be done with that knowledge. Whether in the pursuit of science, schooling, or sales, professionals in diverse fields claimed new efficacy in locating inner truths. Insofar as those truths called for interventions—and they generally did—expert knowledge about human motivation and behavior embedded itself in far-flung corners of postwar society. Citizens who actively contemplated their communities, their workplaces, their schools, and even their leisure time noticed the novel ways that scientific techniques and professional authority were being brought to bear on daily life. As they did, they discovered fresh dangers in old practices: the selling of products but also the assessment of schoolchildren and the conduct of social research. By the 1960s, “psychological surveillance” or “psychological espionage” was a live category, encompassing activities as disparate as market research, personality tests, lie detectors, opinion polling, subliminal suggestion, truth drugs, polygraphs, and “brain signal reading,” as well as “telemetry” and “mind control.” Others placed psychiatric evaluations and psychoanalysis itself in this category.31
The culture of experts and American society itself seemed to press more heavily on the person in the postwar years, at once knowing more and demanding more of the individual. Citizens’ qualms about this state of affairs—and in some cases, their resistance to it—accelerated between the close of World War II and the dawn of the 1960s. They occasionally lodged their complaints in a Cold War syntax, through charges of a creeping American-style totalitarianism. But more frequently, they articulated the dilemmas of their social order, both its norms and its knowingness, in the language of privacy. The new premium placed on psychological knowledge by society and citizen alike was at the root of this debate. It explains why a personality test could be as worrisome as communist intrigue, the vigilance of neighbors as unsettling as that of the national security state. Public concern about interiors breached—domestic spaces but also individual minds or psyches—was pervasive by the later 1950s. To grasp its contours requires a tour of the postwar middle-class world: the suburban home, the consumer marketplace, the public school, and the white-collar workplace.
Through the Picture Window
The outrages committed under the watch of legitimate governments during World War II virtually ensured that the private sphere would become a dominant concern of modern publics. George Orwell’s novel Nineteen Eighty-Four, published in 1949, supplied the imagery: Big Brother, the all-seeing eye of a totalitarian state, bent on obliterating the spaces for individual freedom and conscience.32 Where to seek cover from an overweening state and society? The place that many looked to first in the postwar era was that old redoubt, the single-family home.
In the postwar years, Hannah Arendt, the German émigré author of The Origins of Totalitarianism, located freedom in exactly this form. “The four walls of one’s private property,” she wrote, “offer the only reliable hiding place from the common public world, not only from everything that goes on in it but also from its very publicity, from being seen and being heard.”33 That Arendt, a formidable theorist not just of authoritarianism but also of the social invasion of the self, could valorize private property in this fashion perhaps bore the marks of her new homeland. But it was a sensibility that stretched across the industrialized democracies, a response to Hitler’s and Mussolini’s depredations, as well as the physical dislocations of world war. In West Germany alone, two million homes were destroyed, three million homes were damaged, and three million people made homeless, making “the urgent pursuit of privacy … inseparable from the dream of having a home of one’s own.” Survivors described the Nazi period as a world without privacy—indeed “without walls”—and where “police and militia were seemingly everywhere.” The “withdrawal into privacy” afterward, writes historian Paul Betts, was a conscious effort to “reimpose a strict line between self and society.”34 Even those who had not suffered at the hands of dictatorship placed new emphasis on the spaces within modern society that provided shelter from the state. British public housing reformers in these years, for example, underscored citizens’ “space rights, including the right to a plot of land.”35
Americans did not suffer the same deprivations as did other combatants in World War II. Yet they too were “homeward bound” in the postwar years, infusing home ownership with new meaning after years of Depression hardship and wartime constraint.36 The war had been sold to Americans in part as a battle to protect private interests, most especially the private family.37 During the early 1940s, advertisers and builders enticed citizens with detailed visions of plans for the modern family home that would be within their grasp once the fighting was over.38 The importance of a walled-off domestic sphere to American understandings of privacy went much deeper and further than this, however—all the way back to the nation’s founding myths. A scholar notes that “one of the most significant though underappreciated points of stability in privacy discourse” in the United States “has been the projection of privacy onto the home.”39 The dynamics of the Cold War era would both nourish and heighten this tendency. Following Arendt into the domestic sanctuary, we can begin to appreciate the ideological significance of the postwar home, as well as emerging fissures in its foundation.
Americans lived in many different kinds of homes at mid-century. But for commentators, the focal point for assessing the quality of private life in these years was a very specific configuration: white, middle-class, owner-occupied suburbia.40 A number of New Deal programs had already “institutionalized the suburban vision” before the war, ranging from the Home Owners Loan Corporation to greenbelt towns, where families with working wives were explicitly barred. The Federal Housing Administration, as one scholar summarizes, favored new construction over rehabilitation, the periphery over the central city, and segregation over integration, as did the Veterans Administration’s housing plan for returning servicemen.41 With the return of prosperity, new American suburbs were developed rapidly in the postwar decades, a product of pent-up wartime demand, federal housing policy, the new interstate highway system, and the baby boom.42 This was no neutral demographic fact, but rather an orchestrated reorganization of the population along lines of race, class, and sexuality. The new communities promised security in the white middle class, and thus ethnic and economic assimilation, for some. Others, African Americans and other nonwhites but also unmarried and homosexual Americans, were carefully filtered out.43 The flight to the suburbs was thus a private act full of political resonances, with important consequences for racial and gender politics.
Later critics would often equate suburbanization with retrograde motion, a cultural protest against the progress of desegregation and the entrance of women into the workforce.44 Even more prominently, ever since Lewis Mumford penned in 1934 his description of American suburbia as the “collective effort to live a private life,” analysts have associated these communities with a particular ethos, a “preference for the private over the public.”45 For mid-century sociologists as well as the historians who followed their lead, the postwar suburban explosion was proof positive of the embrace of private life by an expanding middle class. White Americans who could manage it seemingly flocked to the new communities to escape crowded urban domiciles and blue-collar and working-class urban neighborhoods—these spaces’ intensive multigenerational bonds along with their brew of racial and ethnic tensions.46 After the war, the economic boom and easy credit allowed them to put their money where their values were: that is, in residential, familial privacy.
Contemporary evidence lends some support to this analysis, if not in such clear-cut terms. The enormous market of potential home buyers beginning in the mid-1940s led developers to survey, and hew more closely to, consumer preferences than they had before the war. This effort allows us to glimpse what builders thought American home buyers wanted. One such study undertaken in 1949–1950 explained that builders were “coming to believe they must have some fundamental knowledge of the particular requirements and desires of people who are to be expected to buy their houses.”47 It found that the inadequacy of the buyer’s current housing, growth in family size, and job relocation were the main triggers for suburban house hunting, with the financial benefits of owning over renting playing a role too. Nine percent of those surveyed, however, did express a desire for “independence or privacy.” The study also revealed that Americans voted with their feet (or moving vans, at any rate) in prioritizing “good neighborhoods.” Although this formulation obscured more than it clarified, for many buyers “it was prerequisite that the new home must be ‘away from the center of town,’ ” indicating perhaps a wish for a measure of solitude or quiet. Most revealing, three out of ten, having already bought, declared themselves dissatisfied with the size of their lot, wishing it were wider so as to offer “more elbow room” from their next-door neighbors.48 Some Americans, at least, craved physical and perhaps also social distance from others.
Buyers may have offered up a range of motives for relocating to suburbia. But postwar planners and policy makers argued in one voice for the virtues of private life as enclosed in the single-family dwelling. As the longtime editor of the magazine House Beautiful, Elizabeth Gordon, avowed in 1953, “The modern American house—the good modern house … provides privacy for the family from the community, and privacy for individuals of the family from each other.” There was a politics to this notion of domesticity that diverged from the ideals of the Victorian household and its “aggregate,” patriarchal privacy. A postwar house with sufficient privacy was believed to promote “democratic living” by fostering the individuality of each of the family’s members: not just that of the man of the house but of his wife and children too.49 Individual bedrooms for teenagers would, for instance, first become normative for middle-class families in the 1950s. Spaces for retreat within the home were the fruit of prosperity and smaller family sizes following a brief uptick after the war. That a measure of physical but also psychic privacy was critical to one’s emotional health—and perhaps even the health of American liberal democracy—was also the growing consensus of child development experts.50
State and private-sector policies conspired to shore up this message, and not only through the stream of federal monies subsidizing white suburbanites’ mortgages. “Bedroom privacy” in particular was written into public housing codes starting in the 1930s and then literally built into the new suburbs. Endorsed by psychological experts as well as middle-class reformers dismayed by the sexual mores on display in crowded urban streets and apartments, the private adult bedroom and, increasingly, the private bathroom, became standard in postwar Levittowns.51 Architects sought to ensure privacy within the house, placing stairs and hallways such that intimate quarters were not in easy reach of the home’s more public areas: outsiders were thereby channeled away from areas reserved for “sleeping,” “excreting,” and “love making.” In a major departure from prewar dwellings, even low-budget homes usually came with a second lavatory or half-bathroom for guests.52 Builders too, through careful attention to window placement and materials that muffled sound, sought to ensure privacy between adjacent “family units.” Privacy, “one of the most widely discussed aspects of postwar residential architecture,” became an advertisement for and asset of suburban life, a saleable feature and part of a property’s price tag.53
Paradoxically, another part of that price tag was intrusion into one’s private affairs. Achieving the carefully structured homogeneity of postwar suburbia required digging deeply into the lives of potential residents. In perhaps only this respect, the new communities mirrored the other track of residential construction underwritten by the state in this period: the urban housing projects increasingly filled by low-income African American residents. The 1937 federal housing program that established those projects was both means- and morals-based. With public housing reserved for only the most deserving and respectable, eligibility came through intensive examination of personal character and habits. Applicants “underwent rigorous and lengthy interviews” in order that the program could know whom to weed out, namely the unsteady, the unemployed, and those not properly schooled in middle-class aspirations. Writes historian Rhonda Williams, “Personal worthiness and good housekeeping—in addition to low incomes, substandard housing, and traditional family forms—were indispensable attributes for securing residency.”54 In parallel fashion, developers, builders, lenders, and realtors with a stake in property values found common cause in populating the suburbs with those considered to be the most desirable, reliable residents: white married couples with children (or those planning on having children, and thus requiring extra bedrooms). To that end, potential homeowners were carefully scrutinized for telltale signs of non-normative sexuality, less-than-harmonious marriages, job trouble, and other “unstable family conditions,” each of which signaled potential foreclosure or financial risk.55
Even for those who made it past that gauntlet, the assurance of individual and familial privacy could falter. A truly secluded life in the new communities was elusive and registered as such—perhaps precisely because so much had been promised. The very design features meant to encourage livability, comfort, and informality, notably the big picture windows or “window walls” of suburban ranch houses, almost immediately raised privacy concerns.56 The brainstorm of a glass manufacturer, picture windows debuted in American homes in the early 1930s and gained cachet over the next decade for both their aesthetic and market value (a “view” being a new item added to appraisal forms in the 1940s real-estate market). Standardized in postwar construction, these windows, however, quickly evoked complaints about living in a goldfish bowl. If residents could easily peer out, after all, neighbors could just as easily peer in. The market responded in its way, one company advertising its blinds as “windows that peeping Toms can’t see through.”57 And home magazines obsessed over fences, trellises, and screens that would artfully block neighbors’ views. An entire issue of House Beautiful in 1960 focused on “Landscaping and Privacy,” the editor demanding (the answer obvious): “Is Privacy Your Right or a Stolen Pleasure?”58 Builders seem to have paid heed. Over time, suburban construction was modified so that “fewer and smaller windows appeared on the street façade,” coincident with peepholes and intercom systems that allowed those inside to screen those at the home’s exterior.59
Even walls and landscaping did not promise impermeability, however. Adding to the portrait of the not-so-private suburban home was Americans’ heightened awareness of the ways that invisible technologies could invade the domestic sphere. Beginning in the 1940s, a host of popular television programs about spies and other federal agents entered suburban homes, captivating viewers with new-fangled gadgets.60 Americans would soon realize that domestic espionage was not just the stuff of entertaining diversions. While surveillance itself—either the desire or the practice—was not new, mid-century commentators became alarmed by “the marriage of advanced scientific technology” to “classic surveillance methods.”61 A national discussion of electronic spying was in full flower by the mid-1950s, with extensive press coverage of its breakthroughs in penetrating private spaces.
Some of these capacities, notably wiretapping, were quite old, having lurked in the background of American life since the late nineteenth century.62 But in the 1950s, the open secret that various parties—the police, the FBI, and, increasingly, private detectives—were listening in on some citizens’ rather less open secrets gained fresh attention. By one account, seventy-eight magazine articles on electronic eavesdropping or the use of concealed microphones were published between 1930 and 1955.63 Several experts who convened at mid-decade to consider the state of the law noted “considerable public awareness about wiretapping”—including the belief that the practice was rampant, its practitioners “wildly tapping every phone within their reach.” They also cited rumors of fantastic new technologies, including “a super-sonic ray which can be beamed at a wall or window to retrieve voice–sound vibrations” from within a building.64 This last was a reference to wartime technologies that were being repurposed by domestic “snoopers,” who were learning how to break and enter virtually, without leaving a trace.
Surveillance did not need to be high tech to cause consternation. The sudden uptick in the use of private investigators in this era was of special concern. Whether hired by suspicious spouses, employers, or credit and insurance companies, these investigators relied on the fact that neighbors were often the guardians of each other’s secrets. Suburbia was a treasure trove of information for those with a financial stake in personal “character” and habits, knowledge of which permitted finer discrimination among clients.65 And so agents roamed residential neighborhoods in search of peers who would talk. For example, a routine insurance report on a California man, after characterizing his family life and neighborhood, noted that the applicant “is known to drink wine and other intoxicants moderately, but has never been known to drink to excess. He has not been seen driving while under the influence of intoxicants. He is well regarded by his neighbors and there is no criticism of his habits and morals.”66 Personal details of this sort had long been coveted by creditors and insurers.67 But reliance on neighbors as “informants” was newly troublesome, simultaneously undermining suburbia’s promise of familial and individual privacy and conjuring up the tactics of a police state. As one critic mused, “Imagine all the varieties of hell that would be raised if a government agency relied on the blanket use of neighbors to obtain information.”68 Yet the practice was perfectly legal, and the private outfits employing it seemed to have no trouble securing cooperation with their inquiries.

3.1. Americans at mid-century were becoming aware of both the prevalence of wiretapping and the porousness of their homes.
Still more banal practices, from telephone calls to direct-mail advertising, infiltrated the postwar home. It was no coincidence that the sociologist Edward Shils, in a meditation on the elusiveness of contemporary privacy, plucked many of his examples from the domestic realm:
A religious zealot insinuates himself across the threshold of a dwelling and then refuses to leave. The telephoning solicitor of a commercial custom or the telephone interviewer, who, having got the subscriber to “answer” the ringing telephone, presses his listener to take some form of action or to answer certain questions, approximates a coercive entry into a private space. The clutter of postal advertising that falls through one’s letter slot is clearly a coercive, if minor, intrusion. The noise of one’s neighbor’s television set that comes in through one’s walls or windows is coercive, even if not so intended by the neighbor.69
The affront to privacy caused by advertisements, surveyors, or televisions may sound trifling. But it was a persistent complaint. Technologies that had once dazzled the homeowner, like the telephone, were by the 1950s just as often regarded as irritations or intrusions. “It goes without saying that no invention, not even the doorbell or the mailbox, is as effective as the telephone in penetrating the inner recesses of our homes,” charged critic Myron Brenton. Writing in an age before answering machines or call screening, he viewed the phone as an agent of special disruption: “The salesman who rings the doorbell may be ignored; the advertising circular that comes through the mails may be tossed unopened into the wastebasket; but it takes an iron constitution and a will made of unearthly stuff to disregard the persistent ringing of the telephone.”70 The fact that the telephone and the doorstep alike had become platforms for salesmen and opinion pollsters—a means, that is, for the external world to perforate domestic tranquility—was especially decried. One market researcher pinned increasing “public confusion, annoyance, and distrust of field interviewers” on the fact that sellers posing as “legitimate” market surveyors had overtaken the field.71 But it was not clear that Americans made such fine distinctions: they simply disliked uninvited encounters in their homes.
Nor was Shils’s mention of noise an isolated complaint. The sensory intrusions of strangers and neighbors were one facet of the problem, but so was the clatter within houses captive to new-fangled household appliances like blenders, garbage disposals, vacuum cleaners, air conditioning units, and clothes dryers—prompting one writer to call the kitchen “the noise center of the modern home.”72 Other sounds of modern life, from highway traffic to jet planes and supersonic booms, could “penetrate houses and become the unsuspected cause of such ills as dizziness and fatigue.”73 Experts increasingly remarked on the “difficult to show” and yet significant “subjective effects of noise on individual and societal mental well-being.”74
Betty Friedan, author of The Feminine Mystique, would soon critique the suburban residence from another angle: for its erosions of privacy within the family. This was particularly a problem for middle-class married women, who lived more fully within the home than either their spouses or children, who regularly departed for the office and for school. As critic August Heckscher put it, in the new suburbs, “women never quite withdraw into these homes, and yet never entirely emerge from them.”75 Designers had structured kitchens and the new “family rooms,” including their typical placement overlooking a back yard, to facilitate mothers’ watch over their children. Women could thus “run the house without ever leaving the kitchen.”76 In Friedan’s analysis, however, the “open plan” design of the suburban home, which did away with walls and doors, enabled the surveillance of women as well. Never truly alone in this “private” space, a woman “could forget her own identity in these noisy open-plan houses.”77 No mere design flaw, the open plan exposed the regulating functions even of supposedly secluded space, the way the promise of privacy could be thwarted by other social imperatives. The home, actually or symbolically, could not offer shelter from the press of modern society for the simple reason that it was part of it, as anxieties about the status of one’s appliances and keeping up with one’s neighbors would soon reveal.
In the 1950s, there were many signs that the private home was not the sanctuary it was held out to be. And yet postwar suburbia without a doubt offered more opportunities for retreat and more cordoned-off spaces than did prewar housing. Siblings, boarders, and stray aunts and cousins were no longer bunked in with others as they were in tenement neighborhoods. Live-in servants were not privy to family secrets as they were in urban bourgeois dwellings in the nineteenth century. Broader yards meant that neighbors did not accost each other right on the doorstep. The historian John Demos, prompted to reflect on contemporaries’ cries of declining privacy by his study of colonial American family life, protested. He declared, “We in our homes of the mid-twentieth century have more privacy, more actual living space per capita, than any previous generation in history.”78 It was also true, though, that despite the fact that “most postwar Americans lived more privately than ever before,” as another scholar puts it, “evidence indicates that they worried far more about it.”79
Why did they worry so much? Americans had written about efforts to secure domestic privacy and had also moved to suburbs since the early nineteenth century.80 It was only in the Cold War era that this quest was transformed from a practical struggle into a sign of a culture in trouble. Some Americans may have sought out suburbia as a haven for the development of private life and personality, but sociologists, novelists, and cultural critics did not believe that they had found it. Indeed, these commentators suspected that suburbanites did not truly want privacy, or know what to do with it—itself a sign of how thoroughly their private selves had been compromised.
Such observers found in the new suburban communities something like a natural experiment. The swiftness of their development, their sudden mixing of unfamiliar residents, and the emerging patterns of a new style of living all cried out for investigation. One of social scientists’ first discoveries was that if American suburbia was a product of postwar citizens’ aspirations to privacy, it also exposed the ambivalence of that project. Like the picture windows that permitted a view both out of and into the suburban home, citizens claimed to value their privacy even as they exhibited themselves, and their houses, in new ways. Indeed, the picture window became a fixation of cultural critics, who saw it as a vehicle for putting the family and its increasingly conspicuous consumption on display. One novelist berated it as a “vast and empty eye” staring at its identical counterpart across the street.81 For another critic, “the picture window, serving in the typical housing development more as a means for having others look in than for letting the owner look out,” stood “as a perfect symbol of the confusion of realms.”82 The place that was supposed to serve as a refuge from society—“the only reliable hiding place from the common public world,” in Arendt’s words—turned out to be infiltrated by it through and through. This was not simply the consequence of new technologies, investigative practices, or design choices. It was a product of residents’ own desires to showcase their status through consumer goods and gadgets and chase after the approval of their peers.83
The social relations that took place in suburban living rooms and across backyard fences became a cottage industry for postwar social scientists. Most famously, sociologist William H. Whyte’s 1956 study of the community of Park Forest analyzed the micropolitics of “the social ethic”: the vigilance with which suburbanites watched their neighbors and the creeping totalitarianism—to return to Arendt’s territory—of the peer group. As Whyte saw it, suburbanites were never actually alone. Even when physically solitary, they were always shadowed by the community and its coercive social expectations. “Fact one” about suburban privacy was that there wasn’t much, wrote Whyte. “In Park Forest not even the apartment is a redoubt; people don’t bother to knock and they come and go furiously. The lack of privacy, furthermore, is retroactive.” Whyte illustrated this with the poignant words of one of his informants: “ ‘They ask you all sorts of questions about what you were doing’ … ‘Who was it that stopped in last night? Who were those people from Chicago last week? You’re never alone, even when you think you are.’ ”84
The observer of these patterns could not himself escape the prying eyes of neighbors. Whyte noted that “one of the occupational hazards of interviewing is the causing of talk, and I am afraid my presence seriously embarrassed some housewives in several suburbs.” As the sociologist ruefully recounted, “In one of the instances I later learned about, a husband arrived home to be greeted by a phone call. ‘You don’t know who I am,’ a woman’s voice announced, ‘but there’s something you ought to know. A man stopped by your house this afternoon and was with your wife three hours.’ ” Whyte concluded that “even the most outgoing” found the neighborly life of suburbia exhausting and sought occasional respite through a complex set of social codes. “To gain privacy, one has to do something,” he explained. One man Whyte interviewed disclosed for instance that “he moves his chair to the front rather than the court side of his apartment to show he doesn’t want to be disturbed.” But the sociologist noted that “there is an important corollary of such efforts at privacy—people feel a little guilty about making them.” As Whyte judged it, rather than being prized by suburbanites, privacy itself had become “clandestine” and thus suspect.85 The surprise of his study wasn’t that there was less opportunity for privacy in the postwar suburbs than its residents had experienced before, but that they resisted so strenuously the privacy their new environment offered.
Community norms had typically been understood as a force for good, undergirding social order and ensuring cultural continuity. But in 1950, David Riesman’s sociological study, The Lonely Crowd—which claimed that modern Americans had become “other-directed,” having lost their internal nineteenth-century compass—led the bestseller list.86 Sloan Wilson’s bestselling 1955 novel, The Man in the Gray Flannel Suit, which dramatized a white-collar father and husband’s existential concerns about whether to be true to himself or the corporation, was made into a well-regarded Hollywood film the next year.87 Social norms—or, more pejoratively, social conformities—were becoming a fraught topic. For Riesman the central problem facing postwar citizens was “other people” and the subtle force of their judgments.88 A host of other analysts saw in suburbia the encapsulation of a deeply invasive culture, its demands all but suffocating postwar selves.89 Songwriter Malvina Reynolds could later count on public familiarity with the image of conformist suburbia in her hit, “Little Boxes,” recorded by Pete Seeger in 1963, which derided the “ticky tacky houses all in a row.” Picture windows and nosy neighbors, in this critique, were not trivial. They stood in for insidious social forces that subtly entered the person, adjusting and adapting him to reigning norms. What was worse was that the residents of suburbia did not chafe at such invasions, but seemed to welcome them.
Appropriately enough in the “Age of Psychology,” some therapists would connect the dots between the new mode of suburban living and dilemmas of mental health. Psychotherapist Sidney Jourard, author of The Transparent Self (1964), was one of them. Individual psychological and spiritual well-being, he wrote, required “private places,” inviolable by others except by express invitation. Yet such refuges—where “a person can simply be rather than be respectable,” a critical distinction in the watchful neighborhoods of suburbia—were in short supply. Jourard argued that “in present-day America, architecture and living arrangements are such as to make it extremely difficult for people to find inviolate privacy either for solitude or for unobserved time spent in the company of another person.” Like Betty Friedan, he singled out houses built on the open plan, where “inhabitants are seldom out of sight or earshot of one another.” Escaping the observation of others thus became a “desperate, futile, and costly quest,” making a “prison or a dormitory out of one’s daily living arrangement.” Jourard was especially concerned about the way public pressures forced individuals into social scripts, confining the individual to “his usual roles.” These were not the conclusions of a mainstream practitioner, as was suggested by Jourard’s admiration for the Beats, the poets and writers who colorfully cast off society’s restrictions in the mid-1950s. Yet his prescription for “public mental health”—“socially acceptable check-out places to which people could go whenever they found their daily existence dispiriting”—was perhaps the logical conclusion of anxious postwar debates over Americans’ privacy, seemingly thoroughly defeated by neighbors and norms.90

3.2. The vigilance with which suburbanites watched each other was the subject of social criticism as well as humor by the mid-1950s.
Suburban walls had seemed a solution to the predicament of privacy in modern American society. But more and more observers were coming to the conclusion that the private home only shielded the real problem: its inhabitants. It would be the consensus of commentators that it was not the interiors of homes, but the interiors of individuals, that posed the most profound challenge to the postwar ideal of domestic privacy. It was not the physical structures of American society but rather its psychological structures that were to blame for a status-driven, conformist population with too little regard for privacy. This meant that even if access to solitude and seclusion was in some literal sense expanding for those in the middle and aspiring middle classes, postwar Americans could easily be convinced that it was shrinking.
Mediated Minds
Criticisms of suburbanites’ inability to enjoy or respect privacy suggested that something more fundamental even than the failings of the home or the community was at stake. Perhaps the postwar person was to blame, lacking some essential capacity for integrity or boundedness. This was an analysis that took hold in the literature on suburbia, but resonated far beyond it. The susceptibility of individual psyches to outside influence is an unmistakable theme of the era’s popular culture, from George Orwell’s dystopian novel of 1949 to the science fiction film Invasion of the Body Snatchers (1956), in which aliens took over the bodies of individual humans by assimilating their personalities. Real-life scenarios issuing from the world of media and marketing appeared to echo these plots rather too closely. Although anxiety about propaganda had a longer lineage in American culture, it moved to the very center of privacy debates at mid-century.91 Postwar experts’ application of ever more subtle techniques of probing and persuasion impinged on what some feared were the only remaining oases of seclusion in modern society: individuals’ innermost thoughts and beliefs.
The fragility of the human psyche—and more pointedly, the American psyche—in the face of social pressures found sensational focus in the debate over brainwashing in the 1950s.92 Indeed, brainwashing (and its filmic representation in 1962’s The Manchurian Candidate) has stood as a kind of perfect Cold War fantasy, coupling as it did the enemy’s duplicity and Americans’ worrisome lack of control over their own inner resources.93 The term first appeared publicly in 1950, three months into the Korean War, launched by a journalist and undercover CIA agent, Edward Hunter, who hoped to alert U.S. citizens to the Chinese Communists’ program of “thought reform.”94 The POW scandal during the Korean War—in which not only numerous U.S. soldiers made damning confessions but also twenty-one American servicemen refused repatriation—brought brainwashing squarely into public view. The scandal, which triggered a congressional investigation in 1958, kicked up a firestorm over the mettle of American troops and the nature of their betrayal: Were they willing collaborators or subject to forces beyond their control?95 Although the POW issue would fade, brainwashing’s presence in public life lingered. Traveling under the names of coercive persuasion and “menticide”—brainwashing’s “pretentious twin”—the concept circulated far beyond its origins, becoming a touchstone for fears about vulnerable American interiors.96
As most behavioral researchers then recognized, brainwashing was less Communist plot than species of science fiction (a fiction that the CIA, nevertheless, did its best to make operational).97 Like its fictional counterparts, it never referred only or primarily to external threats. Instead, brainwashing’s peculiar durability in postwar culture came from the ways it reflected back on American consumer society and its own forms of “programming”: propaganda and persuasion in the form of advertising and public relations. Brainwashing became believable in some sense only because it lined up with more routine practices taking hold in the general society.98 Indeed, the questions that brainwashing provoked made room for a sustained critique of American capitalist enterprise in an age usually noted for its free-market orthodoxy and political quietism.99
That brainwashing was useful shorthand for debating “thought control” within American society is bolstered by the large gap between popular and scientific views of the phenomenon. Nearly all military scientists, behavioral researchers, and communications theorists downplayed the novelty and efficacy of techniques going by that name. They lamented the term’s capture by journalists and supposed victims, debunking “an all-powerful, irresistible, unfathomable, magical method of achieving total control over the human mind.”100 Yet, the concept exerted a stubborn hold. The term was only occasionally used precisely, one critic observed: that is, to refer to Chinese Communist practices. More often it was deployed as a “diffuse term of abuse to refer to any persuasive attempt one dislikes.”101 There were plenty of such attempts to choose from at mid-century: the fields of public relations and political consulting, the work of market researchers and advertisers, and the calculated allure of popular entertainments. Many, for example, took it for granted that “information management—a polite term for media manipulation—was inevitable in all modern political regimes,” the United States and the USSR alike.102 All societies, that is, employed the best instruments on hand to convince and compel.
Social scientists tended to have a more skeptical view of the powers of suggestion than did popular commentators. Behavioral scientists were well aware of the barriers to mass-media manipulation and other popularly hyped propaganda techniques. The limits of persuasion and the notion of the individual as a selective rather than passive receiver of messages were hallmarks of mid-century communications theory.103 Still, as had cinema and radio before the war, TV in particular attracted a good many social scientists interested in its social effects.104 An earlier generation of scholars had inquired into how new entertainments remade the “use of leisure.”105 But the fundamentally psychological tilt of postwar scholarship opened up the possibility of media’s deeper and longer-lasting mark on its audiences. A 1949 study found, for example, that “particularly to children, television is not something intruding upon already established patterns, but is an accepted fact in their lives, present virtually from the beginning.” Researchers speculated that TV was “adding a completely new dimension to the experience of these children,” and turned to the question of “how the medium is changing the habits, attitudes, and values of individuals and family groups.”106 Even if television was thought to bring families together in suburban living rooms, who could know the limits of its sway over suggestible viewers?
Public uptake of theories of mass persuasion meant that what Americans chose to do in their “free” time could appear considerably less free in the postwar years than it had previously. This was especially apparent in worries over the influence of the new television sets, ubiquitous by 1950. The central place that TV occupied in suburban living rooms triggered fears of uninvited ideas and images beaming themselves into Americans’ homes and minds.107 The rock ’n’ roll music entering teenagers’ bedrooms through transistor radios prompted similar questions from parents.108 So did graphic reading material that delivered sex and violence to youth. Public debates over comic books, ignited by the writings of psychologist Fredric Wertham, made evident the complex privacy concerns raised by mass culture.109 Media, unbidden, could infiltrate the mind and personality of the susceptible consumer. And in some cases, the very privacy allotted to postwar middle-class youth, now often housed in their own separate bedrooms, facilitated the incursion.
Even more obviously, new techniques of marketing and advertising appeared to trouble the boundaries between private and public, the inner person and external social forces. How exactly, some citizens wondered, did advertisements work on individual consumers? And what could—and should—experts know about purchasers and audiences in order to sell? Modern market research practices, especially researchers’ boasts about burrowing into consumers’ minds with the help of psychological insights, provided plenty of fodder for such questioning at the height of the affluent society.110
Most important was the self-consciously scientific approach of what was called “motivation research” or MR. It was often attributed to Ernest Dichter, an Austrian Jew who emigrated to the United States in 1938 as a refugee from fascism and who founded the Institute for Motivational Research eight years later.111 As a 1960 tract explained, motivational research was simply psychological, psychiatric, sociological, and anthropological knowledge applied to the consumer in order to glean “what induces people to react favorably or unfavorably to various products and sales appeals.” The field was gripped by the question not of “what” but of “why”—particularly, the author noted, “in those instances where the consumer himself may not know or may be unwilling to give an accurate answer.”112 The problem, as marketers saw it, was that asking people about their opinions or habits often yielded misleading results, producing “rationalization or evasion.” This was not a sign of dishonesty; rather, it was because individuals did not know their own minds, and it was “in an attempt to surmount this obstacle that more subtle research techniques have been tried.”113 The insight was a hallmark of popular Freudian thought, which had become mainstream in U.S. culture by the 1950s.114 MR represented marketers’ embrace of Freud, an attempt to get around the problem of consumers’ lack of awareness of their true motivations.115
A 1948 study of “motivational analysis” could still deal in direct—or what would later be termed “surface”—questions to the respondent, such as whether the beer purchaser enjoyed a hoppy flavor.116 Across the next decade, however, captured by the promise of psychoanalysis along with the rest of American society, marketers turned to a new quarry: the consumer’s unconscious. If people’s choices flowed from “influences at work below the level of the conscious mind,” it followed that those influences would be discovered only by delving down to that level.117 Practitioners were not shy about borrowing from clinical psychiatry and its focus on “biologically and socially unacceptable motivational considerations such as sex, toilet-training and nursing,” not to mention “selfishness, greed, envy, prestige, sadism, love of violence” and other hidden impulses. Moreover, the new-fangled marketer aimed to diagnose such motives “without the consumer being aware that this is being done.”118 Projective techniques such as sentence-completion exercises, word association problems, Rorschach inkblots, and assessments like the Thematic Apperception Test—in which subjects were offered standardized images of people in ambiguous scenes to write stories about—especially fascinated mid-century marketing professionals. The Rorschach, for example, was considered a kind of “X ray” of the psyche that was highly useful because “only those well versed in the technical literature and in Freudian symbolism could convincingly fake responses.”119 The “depth interview,” although never well defined, was still another calling card of the modern marketer.120
All of these techniques sought to know purchasers at a more fundamental level than their own conscious reflections allowed: indeed, better than the purchasers could know themselves. By giving consumers a “chance to project their views” onto a standard series of images, it was believed that they would “reveal some hidden motivations that influence their buying behavior.” Likewise, sentence-completion and rapid association tasks told the researcher “a little more about the emotional values and tensions” of respondents, exposing not the directly accessible parts of their thinking but what noted market researcher Lawrence Lockley called the “side lights” of the mind. The “penetrative and exploratory capacities” of such psychological techniques made them the preferred route for “prying information out of the unconscious minds of respondents.”121
By the mid-1950s, it was clear to those in the field that a new and shiny, psychologically inflected version of motivational research was making both “a great splash in the market research world” and a large dent in corporate budgets.122 Although debates persisted about MR’s worth, there is no doubt that it transformed the business of sales. By one account, it was the primary technique applied “to the problems of selling insurance, to the riddle of brand preference for bread, to consumer attitudes toward broadloom carpeting.” Preferences about coffee, canned soup, household cleaners, automobiles, men’s shoes, and soft drinks were all probed via these methods, as were the mysteries of why some people traveled by air and others turned to photography as a hobby.123 MR’s vogue in fact gave rise to warnings that psychological techniques were being employed indiscriminately, improperly, and by those with “inadequate training” to peer inside people’s heads.124
It is not difficult to understand why those concerned about the borders of the American psyche would look askance at such practices. Some chafed against the newly intimate relationship between sellers and buyers. They objected not only to the highly personal questions that market researchers asked but also to the psychological material they became privy to thereby. Others responded angrily to marketers’ boasts that they knew things even about those they had never probed, reading into specific purchases a psychological significance. A columnist for the Atlanta Constitution charged in 1958, for instance, that it was “high time for someone to protest further invasion of the private lives of millions of Americans” by that “newcomer to Madison avenue,” the MR expert. His complaint was not about psychological investigation in its appropriate context. The professional psychologist, he noted, was “devoted to helping individuals unmask their conflicts and frustrations in the privacy of a mental health clinic.” But the motivation researcher sought “to unclothe all of our most personal and deeply-hidden desires” for his own gain. What consumers selected—from the color of toilet paper to the model of car—telegraphed their inner secrets to experts, the “psychological urges underneath their buying.”125 One’s personality and even one’s “innermost feelings” were thereby improperly exposed to others, “rolling stark naked down the street for one and all to see.”126
In other words, while clinical psychologists gained the consent and trust of the humans they “worked” on, motivational researchers gazed, knew, and revealed without permission. They sounded a person’s depths less for the insight than the profits it would bring. Coopted by the capitalist market, therapeutic tools could thus be used to sway rather than heal. By the very act of examining someone’s insides, an expert could know—as well as act on and perhaps fundamentally alter—that person. This could be a desirable prospect for an individual seeking change. But what if that desire originated on the outside, from the expert looking in? Motivational researchers in this way compromised the self-knowledge that therapy offered, turning an age of psychology into an age of psychological exploitation.
Critiques of such methods of selling gained a wide audience in the later 1950s. The warm reception of Vance Packard’s The Hidden Persuaders—by ordinary readers if not marketing professionals—was indicative of heightened concern about the integrity of the inner self.127 Packard’s 1957 book, which lambasted the “black arts” of publicity and advertising, spent a full eighteen weeks on the bestseller list.128 Its dust jacket proclaimed, “This book is your eye opener, your guide to the Age of Manipulation,” and in its pages readers were invited to see themselves as pawns in marketers’ psychological games. They were by turns intrigued and offended by the techniques Packard so ruthlessly exposed. But many recognized the ubiquity of attempts to “get inside” their heads, including one Kentucky housewife infuriated by the MR experts who, “with dollar signs instead of hearts,” assumed they could hoodwink shoppers through their shadowy techniques in the supermarket.129 This protest was not simply about being revealed and thus embarrassed or exposed, not simply a denunciation of “publicity” in the nineteenth-century sense. It was also a concern about being changed by being known—perhaps without even being aware of the forces at work on one’s interior.
A practice that Packard decried, subliminal advertising, inspired fierce public controversy on precisely these grounds. Psychologist and market researcher James Vicary’s 1957 experiment to stimulate the consumption of popcorn and soda might have gone unnoticed. But his announcement that he had successfully done so by repeatedly inserting the words “Eat Popcorn” and “Drink Coca-Cola” into a movie in 1/3,000-second installments—“long enough for the subconscious to pick up, but too short for the viewer to be aware of it”—unleashed a major media blitz. As it turned out, Vicary’s impressive results (he claimed an 18.1 percent increase in Coke sales and a stunning 57.8 percent increase in popcorn sales) were fraudulent. Under the glare of publicity, it became evident that he had perhaps not even conducted such an experiment, as the manager of the movie theater in question claimed. At any rate, no one, including Vicary, could replicate the results, and the man who had caused the furor would later downgrade his finding to a “gimmick” with too little data behind it to be meaningful.130 Meanwhile, however, public outrage about “invisible” advertisements and their attack on moviegoers’ minds triggered congressional hearings.131 The media frenzy sent Vicary into disrepute and then nearly into hiding: the psychologist cancelled all public appearances and even got an unlisted phone number.132
Much like other dramatic accounts of mass persuasion during the Cold War era, subliminal advertising’s suggestive powers would be readily cut down to size. But the vociferous reaction to Vicary says much about growing public sensitivity to the ways experts—and a mass-mediated culture—seemed to know and act on individual minds. The person’s buried thoughts and motives, some worried, were being made transparent precisely in order to be controlled. In this understanding, a known psyche could never be fully autonomous. The consequence of selves becoming visible was that they became not just less private, but less stable. Or perhaps there was no real inner self at all, an emerging view of social psychologists and sociologists in this era.133 Either way, an anxiety about the plasticity of American interiors pervaded the entire realm of social thought on “conformity,” whether it stemmed from suburban norms or advertising maxims.134
Frustration with such “omnipresent intrusions on our privacy” could even prod some commentators into sympathy with the Russians. A columnist for the Los Angeles Times defended the Soviet premier Nikita Khrushchev’s complaints about the U.S. radio channel, Voice of America, and “the meddling efforts of western diplomats to incite unrest and rebellion in the Soviet zone of influence” in just these terms. Musing that “somehow this is one of the extremely human things” the Soviet leader had articulated, the journalist argued that “people everywhere were fed up with” such meddling. Americans, he added, could hardly complain about Russian propaganda “when we perpetrate the same nuisances ourselves.” In his estimation, what was wrong with both societies (with a nod to the unfolding struggle over integration in Little Rock, as well as reported interference in labor union elections) was the impulse toward “intervention” as against “the right to personal, regional and national privacy.”135 The writer thus characterized U.S. psychological propaganda as an improper invasion of the Soviet enemy’s sovereignty rather than a critical tool for winning hearts and minds in the Cold War, an analysis that is difficult to parse without an appreciation for the burgeoning debate over psychological privacy and the known citizen at home.
The Surveilled Student Psyche
If the neighborhood and the world of goods both looked more invasive of Americans’ interiors in the 1950s than they had before, so too did the very training ground of youth: the public school. This was because the institutions tasked with imparting knowledge also sought—in the name of improving teaching, as well as managing the social relations of the classroom—to better know their charges. Psychology’s march into the schools across earlier decades had paved the way.
“Do you have fewer friends than other children? Should you mind your folks even when they are wrong? Do you wish you could live in some other home? Do you feel that no one at home loves you?” These were the queries that infuriated a parent enough to dash off a letter to the Los Angeles Times in 1957. The questions were drawn from the California Test of Personality, an instrument sanctioned by the state department of education and administered to children in kindergarten through third grade “without the knowledge or consent of the parents.” Against the belief of school psychologists that the school had the right “to obtain as much information as possible” from students to further the learning process, this writer viewed such questions as a clear trespass against the “privacy of the home.”136 As like-minded critics saw it, prying into students’ psyches encroached on the home in two distinct ways. First, in assessing, analyzing, and guiding the character of the child, it trod on parents’ traditional domain. Second, personality tests revealed information that otherwise would not have been disclosed: individual interests and capabilities, perhaps, but also large caches of information about a student’s domestic and internal life. It did not go unnoticed that the personal affairs of adults—the schoolchild’s parents, most prominently—could be caught in the same net. As such, the school joined the home and marketplace as a site for debating postwar privacy.
Testing had come under fire before. During World War I, a controversy raged over intelligence testing in the Army and what it revealed about the average mental state of enlisted men, with an array of commentators disparaging the test’s utility and validity.137 The debate had a different flavor in the post–World War II era. It would focus less on a particular instrument and more on the expansive place and power of testing—and especially psychological testing—in American society. The scale of such testing was indeed vast. In 1960 alone, according to the New York Times, approximately 130,000,000 psychological tests were administered to U.S. students. This was above and beyond intelligence and aptitude tests, amounting to an average of nearly three tests per student from first grade to graduate school.138 A series of popular magazines took up the question in the late 1950s, asking if testing was “overdone” or overrated, perhaps even counterproductive. Was it possible that testing primarily served the function of pigeonholing? What sorts of valuable characteristics and traits did the tests not test?139 A leading exposé in 1962 described a veritable “tyranny of testing.”140 The question of the test taker’s privacy arrived late to that discussion, but it did arrive. Could such instruments, some wondered, probe too far into students’ lives?
Two years after that angry letter in the Los Angeles Times, the Houston Independent School District, one of the largest in the nation, summarily ordered that the answer sheets to a series of socio-psychometric tests be burned. Although the instruments in question had been approved by school officials—and diligently filled out by 5,000 ninth graders—parents’ complaints to trustees concerning the “content and purpose of the tests” sealed their fate in ashes. Journalists aired some of the offending items, which once again made clear the prominence of popular Freudianism in American culture: “I enjoy soaking in the bathtub”; “Sometimes I tell dirty jokes when I would rather not”; “Dad always seems too busy to pal around with me.” It did not seem to matter that some of the questions had been adapted from the Texas Cooperative Youth Study of 1956—a survey, puzzled a psychologist, that had been administered “without parental objection” just a few years earlier.141 Concluding that “the public relations of psychometricians is in a sad state,” she recommended that in the future testing “be preceded by a public ‘warm up.’ ” Psychologists could not assume that “their ethic is shared by the people they study,” she lamented. Where the “student of behavior” understood his actions to be serving science and the community, his misguided subject saw only “intrusion on my privacy.”142
Bonfires did not sweep the United States in the wake of this instance of test burning. But nor was the Houston school board’s action an oddity. A year or so later, a similar debate flared in semi-rural Columbia County, New York, over a community psychological research program that had been in place since 1955. The specific study focused on aggression in third-grade children, involving classroom “games and tests” with a team of psychologists, as well as parental interviews. Despite carefully laying the groundwork with teachers and local leaders (a strategy that included information meetings, consultations, speaking engagements, and news items, as well as cocktail parties), a campaign was launched against the research by the local American Legion. Irate parents in response demanded a meeting with school authorities and the researchers, ultimately engaging a lawyer “to see what action could be taken to prevent this study from going any further.” Although resistance came primarily from a few vocal parents (themselves suspected of being proxies for organized interests), the decision was made to suspend testing until an investigation could take place; parents could also request to have their own children’s records destroyed.143
These and other incidents revealed a percolating anger over intrusive psychological testing in schools, and perhaps expert knowledge more generally. Much of the initial resistance came from right-wing opponents of mental health measures.144 Controversies over school psychological instruments were often spearheaded by suburban women just beginning to flex their organizational muscles in a grassroots conservative movement.145 Psychology was “a suspect science” in their view because it seemed to undermine family, Christianity, individualism, and patriotism. It also had been invoked to consign those same conservatives to the paranoid fringe of American society. In the 1950s, the “mental health establishment” would become a favorite punching bag for the Far Right, encompassing “research psychologists, therapists, psychiatrists, community mental health workers, guidance counselors, government bureaucrats, and anyone else who advocated a progressive interventionist vision for psychological expertise in society,” often on behalf of minorities or disadvantaged groups.146 In these protests, privacy was often a screen for other, more pressing ideological agendas. Schools in this period were swept into a number of key partisan conflicts: over the legitimacy of mandatory education, local control over school systems, the prerogatives of parents vis-à-vis educators, and, the most contentious of all, desegregation. Those arrayed against “Communist influence” and liberal threats to family values discovered “privacy” to be a valuable rhetorical resource in these fights.
Researchers in the Columbia County case, although aware of this constituency, did not, however, consider conservatives their central problem. Most parents did not believe that “members of the research were partaking in a communist plot to implant alien, subversive ideas in the minds of innocent children,” they reported. On the other hand, “many thought that we were indeed invading their privacy,” sure that experts were there “to tell them how to raise their children.” Others who objected to the aggression study did not seem to trust the researchers’ promise of anonymity and confidentiality. Another set, reflected one of the psychologists involved, simply “thought we were prying into what is none of our business.” The fact that the same upstate New York community had recently been subjected to other scrutiny—a study conducted by the American Cancer Society on personal health habits, and a family life study probing the causes of juvenile delinquency—may have cemented this point of view.147
Such protests began as partisan affairs, the work of local anticommunist campaigns. But they went mainstream by the 1960s, as critiques of expert culture jumped political lines. Whereas earlier liberals had placed great faith in social scientific as well as military experts, a more radical position on the dangers of bureaucratized knowledge was gaining ground.148 As the psychologist and concerned onlooker Michael Amrine explained, psychological testing and mental health services initially had been a lightning rod for “the extreme right wing of American politics.” But soon enough these had become matters of concern “to the other side of the political spectrum, not to mention those in the middle of the road.” By the time he wrote, in 1966, Amrine lamented that “tests and testers are … attacked by the right and the left, from outside psychology and from inside.”149 In 1964 the conservative spokesman Russell Kirk protested in the National Review that psychological tests “may force children to incriminate themselves, cast aspersions upon their parents, and expose their sensibilities to any snoop.”150 Nearly identical sentiments were voiced that same year by liberal critic Myron Brenton. He castigated such tests as “designed to lay bare each student’s psyche and expose the most secret parts of his personality”: the child’s worries, fears, family relations, dating habits, feelings about sex, and the like. As did those on the Right, he objected in part to the school’s takeover of “highly personal matters” that were “once considered the exclusive province of parents.”151
Critiques sometimes led to action. One instance was the halting in 1963 of a questionnaire “inflicted upon the incoming freshmen” at a community college in Montgomery County, Maryland, that surveyed students’ attitudes, ambitions, and family backgrounds. It was condemned on the grounds of invading students’ and parents’ privacy.152 Monroe Freedman, a liberal law professor at nearby Georgetown, cheered the decision. He argued that the manner in which students and their families “live their private lives, and think their private thoughts, is none of the business of the Educational Testing Service.” He added, “There is a point at which the inviolability of the candidate’s personality is more precious than the completeness of the testers’ dossiers.”153 The critic Jacques Barzun also registered the change in attitudes toward testing. In the 1940s, he claimed, “testing by check-mark was established everywhere in American life,” and it was “manifestly useless to raise even a question about the value and effect of these tests.” By the early 1960s, however, the tide had turned, and it was “the testers who are on the defensive.”154
Dale Tillery, an educational researcher, could corroborate this change. He described “fatiguing and fascinating encounters with students, parents, administrators, teachers, trustees, press, radio, television and political groups” that had resulted from his involvement in a large longitudinal youth study (90,000 subjects) called Project SCOPE, funded by the College Entrance Examination Board. Although the researchers had been prepared for some public resistance, “the extent of the concern for the protection of the individual student and his home was far greater than had been anticipated.” Tillery was surprised that some of the most vigorous pushback to the study came from politically liberal teachers, who raised concerns about what was going to be done with students’ private information “very much like those from groups that were clearly associated with the extreme right.” Whereas liberals seemed to think the study was “connected to the C.I.A.,” conservatives were convinced it was “connected with the Kremlin,” he wryly observed. In his telling, the research team had expended “disproportionate energy” simply to get schools to agree to take part in the study; later, “irrational or destructive” protests and “face-to-face confrontation”—some of it sparked by inaccurate media coverage—had required substantial resources and time to manage. Given these standoffs, Tillery pondered how best to “bring others into the research enterprise with us.”155 Some on the other side of the issue proposed more radical remedies, including a test takers’ Bill of Rights.156
One commentator, trying to sum up the state of American privacy in 1958, reflected that “one hardly needs to emphasize the inquisitorial spirit that has characterized the past decade. A man’s beliefs and convictions, even the degree of his enthusiasm or doubt, have been matters for public inquiry. More than that, the opinions of his parents—or his own distant and perhaps indiscreet youth—have seemed fit subjects for the authorities to investigate.”157 The writer referred to McCarthyism, HUAC, and the FBI. But the “inquisitorial spirit” of psychological testing was raising similar hackles in less obviously political sites: the classroom, as we have seen, but also the corporation.
Brain Watchers at Work
From the vantage point of 1964, the critic Myron Brenton mused, “There was a time—right up to the approaches of World War II—when a person applying for a job in an office or a factory was usually asked to fill out just four items of a personal nature on the employment application form.” One’s name, address, emergency contact, and after the mid-1930s, one’s Social Security number, were all that was required. Brenton’s point with this minimalist list was that times had changed. “These days the job applicant may face ‘in-depth’ employment application forms, private detectives, lie detector tests, psychological tests, maybe even a direct interview with the company psychologist, before the hiring decision is made.” In the postwar era, multi-page applications detailing one’s “life history in miniature” were increasingly the rule.158 As another critic protested, where once hiring had depended on resumes and interviews, it was now “abdicated” to the “new working Sovereign—the tester.”159 But this was not the end of the matter. As men in white-collar occupations especially came to understand, personality tests were a permanent presence in the workplace, used not simply to screen new hires but also to evaluate employees on the job, gauge their readiness for promotion, and probe their management prowess. This battery of evaluations would, in time, generate a sustained debate about psychological secrets and to whom—the expert who pried them loose, the employer who paid for them, or the employee who housed them—they rightly belonged.
From one angle, the advent of personality testing at the office was surprising. Suburban housewives, feminized mass-market consumers, and school-age minors all seemed more likely candidates for this kind of psychological monitoring than were white middle-class men. But from another angle, the white-collar workplace was an especially hospitable environment for the promises dangled by psychological knowledge. As the field of human relations gradually became the domain of psychologists, it increasingly focused on matters of personality and character, motivation and morale. These in turn became the ruling concerns of personnel specialists in American corporations.160 Experts’ ability to detect the unsavory motivations and personality flaws that simmered under an otherwise pleasant façade could save a corporation from any number of bad investments. Indeed, one 1959 study found that “personality and temperament” headed the list of factors employers considered when hiring. Wrote one legal expert, “In many instances, ‘personality’ has become more important as an employment criterion than other qualifications”—astonishingly, this included even the “ability to perform the required task.”161 The emphasis was related to the premium on teamwork and “getting along” in the postwar corporation. William Whyte, who wrote the book on the matter, explained that whereas the old boss wanted your sweat, “the new man wants your soul.”162 And in fact, even the boss’s soul attracted the gaze of psychological experts, as the robust market for executive-specific evaluations attested.
The personality test had been pioneered in World War I, but its future was assured during World War II.163 The war was a boon for psychological testing, with some 9 million servicemen sitting for the Army General Classification Test.164 As exemplified by the Armed Services Vocational Aptitude Battery, which became the most widely used such test in U.S. schools, the translation of psychological instruments from the military to the civilian sphere was relatively seamless. What distinguished these tests from their predecessors, which had most often been self-reports, was their “actuarial interpretation” based on specific statistical correlations and the trained eye of an expert.165 Although exact numbers are difficult to come by, there is no doubt that personality testing on the job—as in the schools—expanded precipitously in the postwar years. The American Management Association estimated that the proportion of companies employing selection tests of any sort increased from 57 percent in 1947 to 75 percent in 1953. Personality tests’ ascent was still more dramatic. A Fortune survey recorded a jump in corporations using them from 33 to 60 percent between 1952 and 1954.166 One critic suggested that psychological testing, born of the elevation of “personality” in American life as well as the managerial revolution, affected some 50 million Americans by the time he wrote in 1962.167 Charles Alex’s How to Beat Personality Tests of 1966, which revealed the tests’ secrets in the hope that individuals might better be able to conquer them, suggested the popular currency of these measurements in American life.168
Personality testing was a big business. As Martin Gross’s contemporary exposé brought home, it was also an unregulated one, with a wide array of instruments and practitioners jostling for preeminence in the postwar period. The 125-question “Personality Inventory,” for example, inquired straightforwardly about emotional stability, sex life, work attitudes, health matters, and religious values. The Activity Vector Analysis was an eighty-one-adjective word game. Other tests resembled those favored by motivational researchers, complete with “inkblots, free-association techniques, uncaptioned cartoons, and nude drawings,” which as Gross wryly observed had originally been designed to detect “paranoids and schizophrenics.” Like marketers, most personality testers aimed squarely at the subconscious. They did so not to divine purchasing motivations, however, but in hopes of finding the “company man”—or screening out deviants or slackers. In service to this goal, testers sometimes plied their trade brazenly. According to Gross, “wife-testing” was on the increase in some corporations. This was a practice whereby a man’s spouse was subjected to scrutiny, whether formally or surreptitiously, in order to shed light on the prospective hire. There were even reports of an “undercover” psychologist who posed as an employee from another city in order to share a hotel suite with a prospect and “keep watch” for a few days.169
The most popular instrument, and the one that would garner the most public attention, was the Minnesota Multiphasic Personality Inventory or MMPI. The MMPI had its origins in 1940 as a diagnostic test for identifying and sorting mental disorders. But its claim to reveal a spectrum of normal personality types—its 566 questions a divining rod for neuroses of all sorts—was evident to many parties, making its leap from the clinic to the corporation surprisingly swift. By one account, demand for the MMPI was so heavy by 1946 that its publisher couldn’t keep up with orders. Its uses steadily expanded to “spheres far beyond the mental ward: business suites, army barracks, courtrooms, high schools, doctors’ offices.” In the early 1960s it was administered “at least as often to normal people as to psychiatric patients, used to screen job applicants, offer vocational advice, settle custody disputes, and determine legal status.”170
The MMPI’s attractiveness to human relations specialists—and its threat to employees—lay in its deep and wide-ranging excavation of the individual psyche. Employers in an earlier day regularly pried into workers’ politics and union membership, a key fear raised by the issuing of Social Security cards. But the MMPI contained questions that were once considered well beyond the bounds of legitimate corporate concern. These were a mix of medical, sexual, political, social, and psychological items designed, in a contemporary opponent’s view, to “cut through the work record and references painstakingly built up over the years” so as to reveal the otherwise invisible inner man.171 The MMPI offered access to matters that employees were determined to hide but that employers were keen to know. Its inventory was used to screen for “neurotics” but also to ferret out hypochondriacs and alcoholics—those who would, through their character or habits, hinder a company’s productivity. It could also bring to light hidden “sexual deviates,” especially those indicating “homosexual tendencies.” An alarming score on the masculinity-femininity scale was more damning than a revealed tendency for absenteeism or, indeed, almost anything else.172
Like other such instruments, the MMPI promised to disclose things of which the worker might not be aware and, indeed, would not learn. As with motivational research, it turned on its head the kind of self-knowledge that emerged through therapy. The psychological profile it churned out was the property of the corporation; its subject was rarely privy either to its insights or to its uses. The test taker, moreover, could never be sure what the tester was after or how his own answers might betray him. As a Washington Post article explained, only a small fraction of the items on a personality inventory dealt with sensitive topics of sex and religion. The great majority, it reported, were more along the lines of “I drink an unusually large amount of water every day,” or “It takes a lot of argument to convince some people of the truth.”173 But this was not particularly reassuring. What became clearer as the contemporary debate unfolded was in some ways more unsettling than brash questions about intimate matters. It was the testers’ admission that they often did not actually care about the test takers’ religious or moral beliefs despite asking about them. These were merely “surface questions,” designed to get at deeper matters.174 Like sex, religion was simply “one of these areas of life in which feeling is strongly expressed and the real ‘I’ comes out.” Their significance deliberately masked, the test items were a device for finding out something more buried, perhaps deliberately, about the subject’s “mode of behaving and performing.”175 And so the test taker could know that his responses had consequences—just not how they were consequential. Powerful parties, experts and employers, could shed new light on the worker’s insides, but left the employee himself in the dark.
If suburbia’s picture windows had raised the question of improper peeping, the one-way mirror of the corporate personality test was even more disturbing. And so, even as the furor over brainwashing ebbed in the late 1950s, reservations about what the muckraker Martin Gross skewered as “brain watching” were on the rise. Critiques issued from a number of quarters. Some, like William H. Whyte, saw personality tests as operating hand in hand with suburbia’s subtly coercive norms, the corporation and the community neatly conspiring against the individual. “Loaded with debatable assumptions and questions of values,” he charged, personality tests produced “a set of yardsticks that reward the conformist, the pedestrian, the unimaginative.”176 In his analysis, personality tests not only crossed the line in terms of employees’ privacy and led to new forms of gatekeeping by self-appointed experts. They also imperceptibly but exquisitely insinuated their norms—and the culture’s—into the test taker. What the tests measured best was how thoroughly the employee was already pervaded by the corporation.177 Like suburbia’s critics, Gross for his part lambasted not just the tester’s invasiveness but also the testee’s willingness to submit. This tendency itself was a product, he thought, of a too-knowing culture—of frequent exposure to motivational research, opinion polling, school guidance counselors, and credit checks. Corporations’ access to employees’ “psychological innards” was facilitated by the fact that the average worker was “almost aggressively voluble about himself without too much prodding from the tester.”178
But the vital question was whether individuals ought to have to submit to probes of their feelings, emotions, and values as a condition of employment.179 As Whyte framed it, “Is the individual’s innermost self any business of the organization’s?”180 Criticism along these lines would come even from within psychology’s ranks.181 As one of the leading authorities on psychological testing, Lee Cronbach, explained, “Any test is an invasion of privacy for the subject who does not wish to reveal himself to the psychologist.” If this was true of intelligence and subject-area testing, personality testing led to even deeper feelings of violation. “Every man has two personalities,” observed Cronbach: “the role he plays in his social interactions and his ‘true self.’ ” The personality test sought to align the two by searching out attitudes and beliefs that the individual typically kept hidden. Whether a test attempted to assess reactions to authority, the love of a mother for a child, or the strength of “sexual needs,” it sought information “on areas which the subject has every reason to regard as private, in normal social intercourse.” The person being tested was normally “willing to admit the psychologist into these private areas only if he sees the relevance of the questions to the attainment of his goals,” noted Cronbach.182 Corporate personality tests, of course, were attuned to the goals not of the test taker but of the test giver. Unlike the voluntary disclosures made by a person seeking a psychologist’s assistance, “institutional testing tries to determine the truth about the individual, whether he wants that truth known or not.”183
Novelists and sociologists captured the unease with these instruments emerging by the mid-1950s. Tom Rath, the hero of 1955’s The Man in the Gray Flannel Suit, revealed his strength of character by refusing to disclose his personal history during an interview for a position with a public relations firm.184 William Whyte offered sly guidance to the tested in an appendix to The Organization Man (1956) titled “How to Cheat on Personality Tests.” His key piece of advice was to fake “normality.” “When in doubt,” he counseled, “repeat to yourself: I loved my father and my mother, but my father a little bit more. I like things pretty much the way they are. I never worry much about anything. I don’t care for books or music much. I love my wife and children. I don’t let them get in the way of company work.”185
Although it is difficult to track down firsthand accounts of the tested, contemporary research supported Cronbach’s notion that individuals were generally reluctant to disclose information about their “personality,” as opposed to their interests and behavior, and indeed sometimes considered this a harm.186 In a rare study devoted to objections to their own procedures—triggered by the controversy over testing—a pair of psychologists in 1965 set out to examine which questions on the MMPI truly bothered respondents. One group of subjects was asked to omit all items they considered objectionable “under any circumstances,” another to strike those questions offensive specifically in a job context. Among other things, the study revealed a weary familiarity with the test. One subject refused to participate when he learned that the “task involved taking the MMPI”; several others reportedly complained, “Oh, no, not again.”187 The study found wide variability in responses, but four areas stood out as objectionable: “Sex,” “Religion and Religious Beliefs,” “Family Relationships,” and “Bladder and Bowel Function.” Another catchall area concerned mental processes that the subject “does not, should not, or cannot reveal to others.” Finally, questions about political attitudes and what were classified as “confession-type” items also surfaced as problematic. In the end, the researchers identified 76 “Objectionable Items” out of the inventory’s 566.188 Perhaps not surprisingly, the psychologists concluded that the offensive questions ought not to be deleted, citing the “selective loss of important behavioral information.” Instead they called for “better administration” so as to preempt the subject’s “predictable reaction”—evidently, a negative one—“after he has completed the inventory.”189
Psychologists’ certainty that the MMPI was “a sensitive matter,” but one that could be finessed by better prepping subjects, underestimated the resentment building against personality testing. In the court of popular opinion, observers did not generally debate the precise content of “objectionable items” or what questions were appropriate for which contexts. Instead, personality tests—known through press reports, critical exposés, or people’s own experience submitting to them—were distrusted and even feared for their intrusiveness, their influence, or both. More problematically for those in the personality business, similar reservations about what would be dubbed “The ‘I Love My Mother’ Tests” would begin to emanate from employment lawyers as well as Congress, where politicians aired scandalous questionnaire items for their colleagues—questions, conservative Congressman John Ashbrook of Ohio maintained, referring to school psychological exams, that “literally undress young people.”190 Stripped of their proprietary secrecy and expert rationale, test items could spark political outrage.
As items from diagnostic tests were leaked by politicians and the press, they became targets of public mockery. The range and incongruity of the topics could indeed be truly bizarre: “I believe in the second coming of Christ,” “I am seldom troubled by constipation,” “I very much like hunting,” even, “Someone has control over my mind.”191 Their familiarity in capsule form is perhaps the best evidence for their prominence in postwar culture. Humor columnist Art Buchwald, for instance, published a satiric questionnaire of true-false statements that, he implied, would not look amiss in the MMPI. Among them: “Spinach makes me feel alone”; “I am never startled by a fish”; “Frantic screams make me nervous”; “As a child I was deprived of licorice”; “When I look down from a high spot, I want to spit”; “My eyes are always cold.”192 Such derision mixed easily with foreboding about such instruments’ power to invade citizens’ inner lives and mete out society’s rewards. As such, personality testing became a key target of animus even in a culture awash in psychological knowledge—and the topic of congressional hearings by the mid-1960s.193
The MMPI and its kin raised specific fears about the diagnostic instruments taking root in postwar institutions: both the unaccountable authority they carried and the impact of their judgments. They also roused more general misgivings about the invasive social world in which mid-century Americans found themselves. Experts and neighbors, corporate norms and mass media, seemed not only to coexist with or surround the postwar citizen: they threatened to infiltrate and saturate her. The troubling prospect of invasions into the individual interior helps explain why the promised privacy of the suburban home could not properly reinforce the division between public and private in modern America. The real barriers that needed fortifying against intruders were those within the person herself.
By the time the 1960s rolled around, a veritable explosion of public discussion centered on the shrinking sphere of personal privacy in American life.194 In 1964, Myron Brenton—author of the best-selling The Privacy Invaders—observed that “during the past 20 years less than half-a-dozen popular magazine articles dealt at all with the meaning of privacy in our lives and tragedy of its loss to modern man.”195 This, however, had changed almost overnight. A rash of exposés in the 1960s, including Brenton’s, warned of privacy’s imminent eclipse: Privacy: The Right to Be Let Alone (1962), The Naked Society (1964), The FBI Nobody Knows (1964), The Intruders (1966), Privacy and Freedom (1967), and, what would be a winning formulation, The Death of Privacy (1969).196
For Brenton, growing intrusions in the marketplace, at work, and in the community—ranging from direct mail advertising to life insurance inspections, “in depth” employment application forms to corporate spying—added up to a “prying, digging, peering and poking” Goldfish Age.197 Vance Packard, author of The Hidden Persuaders, reinforced this vision of U.S. society in another bestseller of 1964. Each chapter of his The Naked Society, including “How to Strip a Job-Seeker Naked,” “The Hidden Eyes of Business,” “The Very Public Lives of Public Servants,” and “Are We Conditioning Students to Police State Tactics?” added to Packard’s dire portrait of “mounting surveillance” on all fronts.198 Popular journalists were among the first to sound the alarm, but politicians, academics, and activists—an emerging corps of privacy specialists—were not far behind. Together, they publicized an ever-growing list of surveillance impulses, psychological invasions, and technological breakthroughs in piercing previously impenetrable privacies.
Whether old like wiretapping or frighteningly new like subliminal advertising, the techniques of invasion appeared to be escalating in citizens’ daily lives. The threat came not from one particular direction but from every corner of American society.199 The government and the military, corporations and workplaces, universities and hospitals, media and marketers were each and every one “intruders.” To Senator Edward Long of Missouri, author of his own tract about incursions into citizens’ private lives in the 1960s, this amounted to an “undeclared war on privacy.”200 The regions of private life susceptible to prying appeared boundless, with some of the most threatening incursions being invisible and imperceptible. This would prompt a lawyer, reflecting on personality testing, to describe “the type of searches made by colonial patriots” as “in many respects, preferable to those of the present day which seem to search the private actions, habits, and innermost thoughts of our citizens.”201
The most important questions raised by peer surveillance in suburbia, by marketers’ tools of persuasion and projection, and by psychological testing at school and work were, at root, the same. Who needed to know psychological secrets, what techniques could be employed to discover them, and what effects might this kind of probing have on the “inner person”? When was a personal habit or tendency a public concern, leaking into one’s social duties and roles, and when was it in fact “private” and no one else’s business? How transparent ought the citizen be to the society—and even to him- or herself? And what leverage did individuals possess against unwanted personal or psychological disclosure? As brainwashing’s persistent presence in postwar culture suggested, the intimate, private sphere—even the domain of the mind—had come to appear worrisomely porous. The American citizen, as much as the Soviet one, was fully interpenetrated by the social world. In contrast to the unproblematic distinction that late nineteenth-century privacy advocates had drawn between one’s public image or reputation and the private self, postwar observers began to wonder if anything remained beyond the reach of society. What personality, indeed, could be inviolate in an age of knowing neighbors, psychological probes, and expert vision?
The considerable gulf between the vulnerable person found in postwar discussions and the autonomous liberal actor idealized in U.S. civil society would demand a resolution. Indeed, if a realm of privacy appeared more difficult to achieve in the Cold War era, it also seemed ever more imperative for citizens to hold onto. The mid-1960s would witness the first concerted effort to stake out its boundaries. A constitutional right to privacy would be defined against the backdrop of a new psychologically inflected understanding of how the postwar person might be known—and being known, might be altered. Individual rights talk in the 1960s has often been lauded as the language of an empowered citizenry. But it was also rooted in Americans’ worries about the weakness of their interiors in the face of a highly socialized, organized existence.