5
Eupsychian Visions

“I’ve had only one idea in my life—a true idée fixe. To put it as bluntly as possible—the idea of having my own way. ‘Control’ expresses it. The control of human behavior. In my early experimental days it was a frenzied, selfish desire to dominate. I remember the rage I used to feel when a prediction went awry. I could have shouted at the subjects of my experiments, ‘Behave, damn you! Behave as you ought!’ ”

T. E. FRAZIER IN B. F. SKINNER’S Walden Two1

Most humanistic psychologists weren’t shy about their idealism. At the extreme they were starry-eyed, brimming with optimism, gushing about the possibility of remaking the world as they knew it. More typically they were optimistic but methodical, interested in defining the upper limits of human potential, and driven by the conviction that a positive focus in psychology would restore balance to a field that had gone too far in the direction of pathology. Moderate and extreme alike, humanistic psychologists entered the 1960s as cultural nonentities, but they did so with the confidence that their visions could compete in a marketplace saturated with revolutionary visions.

Sociologist C. Wright Mills captured the tone of the new decade’s revolutionary energy in his “Letter to the New Left,” at the same time that he rejected the tendency to dismiss idealists as delusional and naïve. “ ‘Utopian’ nowadays I think refers to any criticism or proposal that transcends the up-close milieu of a scatter of individuals: the milieu which men and women can understand directly and which they can reasonably hope directly to change,” he wrote. “In this exact sense, our theoretical work is indeed utopian—in my own case, at least, deliberately so. What needs to be understood, and what needs to be changed, is not merely first this and then that detail of some institution or policy.”2 By aiming high, by taking on problems larger than themselves, activists and intellectuals like Mills hoped to achieve more than they would if they were to traffic in the smaller parcels of specific problems. The plan for the New Left movements was the plan for humanistic psychology: to have grander aspirations, keener vision, and a broader plan of action.

This kind of utopianism animated Maslow. He defended, on numerous occasions, the value of setting an ideal of optimal functioning to which individuals could aspire. He tried, in exhaustive lists and endless publications, to enumerate the qualities of self-actualization for which individuals should always be striving. He tried as well to develop a useful vision of utopia, which he termed “Eupsychia.”

Eupsychia, as Maslow conceived it, was a society that would hypothetically come into existence when a thousand psychologically healthy families migrated to a desert island. Although Maslow didn’t claim to know the specifics of what this would look like, he knew a couple of things. The society would be philosophically anarchistic, Taoistic but loving, governed by tolerance and free choice, and lacking in violence and control. It would succeed mainly because, when given the ability to choose, healthy individuals would draw on their innate ability to make the right choices.3

How would Eupsychia be attained? With his answer, Maslow struck the same note for the society that Rogers struck for the individual: the promotion of growth-fostering conditions, unconditional positive regard, acceptance, understanding. “The implicit theory in Eupsychian ethics,” he wrote, “is that if you trust people, give them freedom, affection, dignity, etc., then their higher nature will unfold & appear.”4

Maslow believed that a Eupsychian society was possible and that American culture could move toward this prospect. It wouldn’t be a perfect place. It would be full of self-actualizers who were themselves flawed. “They too are equipped with silly, wasteful, or thoughtless habits,” he wrote. “They can be boring, stubborn, irritating. They are by no means free from a rather superficial vanity, pride, partiality to their own productions, family, friends, and children. Temper outbursts are not rare. [They] are occasionally capable of an extraordinary and unexpected ruthlessness.” He also described them as, at times, possessing a surgical coldness, a lack of social graces, as well as absentmindedness, guilt, anxiety, and internal strife.5

This wasn’t your ordinary utopia. It was less fantastical, more grounded.6 Maslow didn’t want to be an irresponsible idealist, and described his aversion to “perfectionists, the sick ones in the Freudian sense, destructivists, nihilists, the ones who could never conceivably be satisfied by anything actual because it can never live up to the perfect fantasies in their heads.”7 He was clear that his cultural ideal would be attained not when everyone achieved perfection, but when everyone was striving toward it.

Like other utopians, however, he failed to offer solutions to the obvious difficulties with creating such a society. How would health be attained in the first place, in order that it might promulgate more health? What would a healthy society look like on a day-to-day basis? And how could such abstract ideals take material shape in societal operations?

Maslow’s Eupsychia was most useful as a thought experiment. As unattainable as it seemed, it served a practical purpose. It was an imagined society that could help concretize our collective ideals and serve as a measuring stick against which to assess our current culture. He explained that “the word Eupsychia can also be taken in other ways. It can mean ‘moving toward psychological health’ or ‘healthward.’ It can imply the actions taken to foster and encourage such a movement, whether by a psychotherapist or a teacher. It can refer to the mental or social conditions which make health more likely. Or it can be taken as an ideal limit; i.e., the far goals of therapy, education or work.”8

In this sense, Eupsychia straddled the tension between the lofty and the practical, and it did so in a way that appealed to Americans, who were notoriously pragmatic and idealistic at the same time. One Pacifica Radio interviewer exemplified the typical public interest in Maslow’s theory. In his 1962 interview, he paid a disproportionate amount of attention to Maslow’s ideas about good societies, at the expense of inquiries into his more systematic work.9 This skewed fascination served as an impediment to the concretization of his more abstract insights and far-flung musings.

Liberal radio and its cultural equivalents were not ideal instruments for transcending Maslow’s abstraction. Interestingly, Maslow found the corporate world a preferable environment in which to make his theory practicable. In the early 1960s, his warmest reception came from executives who were invested in making their employees happier and more productive. Business, much more than psychotherapy, seemed to be a realm in which he could really pin down what his ideas would look like implemented.

As humanistic psychologists worked, in the early years of the movement, to bring their lofty ideas down to earth, and to forge a unified vision out of many disparate views of health, human nature, motivation, and behavior, they were helped tremendously by having an adversary. B. F. Skinner, whose theories were enormously popular in the 1950s and 1960s, was the spokesman for a modern form of behaviorism. His “radical behaviorism” was a departure from Watson’s methodological approach. It didn’t deny the existence of consciousness, feelings, and mental states as Watson’s theory had; rather, it relegated them to other forms of inquiry, and other schools of psychology.

Skinner’s experimental analysis of behavior (EAB) tried neither to unravel the causes of behavior (which he considered a concoction of all prior conditioning, too complex to be distilled), nor to account for its products (perceptions and emotions, which others mistook to be the “causes” of behavior). Rather, it aimed to understand, and change, specific behavioral responses.10

It was in the potential for behavioral change that the charm (and the threat) of this form of behaviorism resided. Radical behaviorism’s highly pragmatic and rational nature appealed uniquely to Americans; it reflected their modern interests in the advancement of automation and mass production. “The new need for rote, repetitive responses on assembly lines,” writes one psychologist, “promulgated support for a psychology that promised a technology of manipulation and control of such actions.”11 It also resonated with their residual progressivist commitments, expressed in an inclination to pursue social betterment through objective findings.12

Radical behaviorism, like humanistic psychology, also responded meaningfully to Americans’ feelings of postwar vulnerability. It offered a technology that was so rational it could reorder (and improve) our daily lives on its way to reordering the world.

The technologies of behaviorism were concrete in a way that other products of psychology couldn’t hope to be. Skinner’s concept of keeping a baby, throughout the day and night, in an “air crib,” for example—a practice he described in Ladies’ Home Journal—promised a happy, well-adjusted baby and at the same time offered to save young mothers time, effort, and stress. The crib looked something like a fish tank. It was temperature controlled (so the baby could always remain unclothed and unswaddled), had a rolling bottom sheet that could be cranked when soiled, and featured a large pane of safety glass (through which the child could be viewed and smiled at).13

Like the humanistic psychologists, Skinner was a utopian, and a human liberationist. He wanted the application of behavioral science to rank among the scientific achievements whose products had yielded “the design and construction of a world which has freed [man] from constraints and vastly extended his range.”14 His vision, which paralleled Maslow’s thought experiment with Eupsychia, was transmitted to the public in his novel Walden Two.

The book, which eventually sold well over a million copies, describes a thousand-person technological utopia called Walden Two, in which efficiency, and the satisfaction it brings, have been literally engineered by a psychology professor named T. E. Frazier.15 In this experimental community no one works more than four hours a day, all income is shared, and the burden of cooking and housework is dispersed among the members. People are happy, productive, and creative. They are even better looking.

“Here we are not so much at the mercy of commercial designers,” explains Frazier, “and many of our women manage to appear quite beautiful simply because they are not required to dress within strict limits.”16

Behavioral control, in Walden Two, originates with six Planners who can be neither elected nor impeached, and members are governed by a group of managers who are selected by the Planners. The social management techniques they employ affect the lives of members in every way, extending from the control of minute details (like the shape of teacups) to the management of reproductive choices (the community encourages vigorous population growth, with children in their teens encouraged to procreate).

The kind of engineering Skinner dramatized was, to judge from the book’s sales, appealing to many readers. It was also profoundly threatening for several reasons. First, it didn’t feel real, but rather contrived, inauthentic. Also, for Americans, who valued supremely their sense of uniqueness and capacity for self-determination, Walden Two seemed to strip them of their autonomy, reducing them to the status of rats in a maze. Skinner didn’t necessarily understand this charge. “There is this strange feeling,” he mused, “that if you deny the individual freedom or deny an interpretation of the individual based upon freedom and personal responsibility that somehow or other the individual vanishes.”17 For Skinner, an individual could experience freedom at the same time that he was being controlled. This happened all the time, he argued, to greater or lesser extents.

For many Americans, control existed on a spectrum that ranged from relative autonomy to intensifying levels of intervention. While they were able to accept that their driving was controlled by laws designed to ensure their safety, they preferred to think their reproductive decisions, food choices, and career choices were entirely self-determined. The exercise of overt control in these realms stank of totalitarianism. By disavowing a democratic philosophy of human nature, many feared Skinner was undermining democracy itself.18

Discussing the book in the sixties with Carl Rogers, Skinner recognized the public aversion to the extent of control he described. “People object violently to the scene in Walden Two,” he explained, “in which the children are to wear lollipops around their necks but are not to touch them with their tongues during the day.” He defended, however, this way of educating people in moral and ethical self-control. And he argued that, despite what Americans chose to believe, bravery could be taught: people could be conditioned to “take necessary painful stimuli without flinching” and “not be disturbed by what would otherwise be terribly emotional circumstances.”19 All societies, Skinner argued, were behavioral experiments. What differed was the consciousness with which they were planned. Greater levels of intentionality, he felt, could reduce the potential misuse of behavioral technologies.

Maslow’s Eupsychia, in contrast, allowed for a level of chaos that threatened its failure. If Rogers and others were right that, unhindered, humans would produce solely positive products (Maslow wasn’t sure), Eupsychia might succeed. But if negative tendencies had any innate basis, and if even healthy individuals wrestled with, and on occasion fell victim to, their demons, such a loosely configured society couldn’t possibly succeed. Rogers’s ethic of growth-promoting characteristics might work in the psychotherapists’ office, but could it succeed in the culture at large?

In 1962, radical behaviorism went toe-to-toe with humanistic psychology in a debate between Carl Rogers and B. F. Skinner titled “Education and the Control of Human Behavior.”20 In addition to providing an elucidation of two very different perspectives on psychological practice, the meeting represented a war of worldviews, a clash in visions of the future of society with implications that seemed to extend to the lives of every one of the five-hundred-plus people in attendance. The main points of contention centered on the concept of freedom, the value of subjective experience, and the requirements of treating individuals humanely.

Rogers started out on the offensive, constructing behaviorism as a grave threat to individual freedom and to subjectivity. “There seems to be no doubt,” he said, “that the behavioral sciences will move steadily in the direction of making man an object to himself, a complex sequence of events no different in kind from the complex chain of equations by which various chemical substances interact to form new substances or to release energy.” This self-objectification was to Rogers the death of the individual. It was also antagonistic to the form of psychotherapy he so highly valued, or really to any form of psychotherapy (Skinner had no interest in such insight-based talking cures).

Rogers attempted to convey to Skinner the goals of client-centered therapy, which he described as “a self-initiated process of learning to be free.” “Clients move away,” he explained, “from being driven by inner forces they do not understand, away from fear and distrust of these deep feelings and of themselves, and away from living by values they had taken from others.” At the same time, they move toward introspection, self-acceptance, and self-worth. In the ideal situation, their new values will be based on their own inner experience, rather than prior environmental learning.21

Skinner, of course, refused to accept this picture of the unencumbered, internally motivated, insight-driven individual. He didn’t see individual behavior as separable from learning. We are not, he felt, deprogrammable in that way. He also categorically denied the value of introspection in behavioral understanding and behavioral change, going so far as to discredit his own perceptions of his inner states. “I would put more faith,” he told Rogers, “in someone else’s proof that I had been angry toward you than I would in evidence from my own inner feelings.”22

This perspective was, no doubt, shocking to his audience. How could we conceive of a world in which our own thoughts and perceptions (our reality!) were not to be trusted? One clear advantage that Rogers held over Skinner, in gaining popular acceptance, was that the subjective experience of insight just felt true. Adopting a blanket mistrust of our inner world, and leaving knowledge of our own minds to detached scientific experts, felt like an abdication of our power, our autonomy, our self-possession. In this respect, it was determinism to the nth degree.

Rogers argued, further, that behaviorism not only imperiled our sense of self and our belief in the veracity of our own perceptions, but also jeopardized our social principles. It was opposed, he argued, to the goals of liberal democracy—self-determination, equal access and participation, civil liberties, and human rights—and threatened instead a dehumanizing and dictatorial control. “To the extent that a behaviorist point of view in psychology is leading us toward a disregard of the person, toward treating persons primarily as manipulable objects, toward control of the person by shaping his behavior without his participant choice, or toward minimizing the significance of the subjective—to that extent I question it very deeply,” argued Rogers.23

In his response, Skinner accused Rogers of naïveté and challenged his evasion of their shared assumptions about the current cultural predicament. We have already, he argued, created a world in which we are controlled, governed, employed, and hired, and the issue at stake is not whether or not we are controlled, but whether that control is punitive and surreptitious or health-promoting. Skinner’s goal was social control based on a knowledge of human behavior and a sensitivity to the human condition.24 He didn’t refute Rogers’s allegation that he was discarding a democratic philosophy of human nature. In fact, his argument implicitly opposed the idea that our society could be called democratic in the first place.

Skinner’s undemocratic philosophy was hard for many to stomach. In spite of the clear distinctions he drew, its mechanisms were reminiscent of fascism, and its disavowal of the significance of the self-determining individual ran against the tide Rogers and others had identified as sweeping the culture in the early 1960s. Although behaviorism had dominated American psychology for more than fifty years, there were signs that it was injured, or even dying. Humanistic psychologists had moved in from one direction, employing a rhetoric that harmonized with modern cultural concerns and garnering more and more professional attention, while cognitivists had attacked from another angle.

The cognitive revolution marked the gradual displacement of the behaviorist paradigm by an interdisciplinary movement bridging psychology, anthropology, and linguistics (cognitive science). Cognitivists’ main critique of behaviorism was that it unnecessarily excluded consideration of what was going on inside the mind. This exclusion had had practical consequences. During World War II, for instance, behaviorists had failed to provide practical assistance in training soldiers to use complex equipment, and in dealing with the attentional deficits that resulted from battle-related stress.25 After the war, they had little to say about how to treat the debilitating anxiety that characterized many shell-shocked soldiers.

More significant, for cognitivists, was what they saw as the theoretical and empirical weakness of behaviorism’s position. In his proclamatory statement, Watson had discarded all references to consciousness, perception, sensation, purpose, motivation, thinking, and emotion.26 He’d argued that mental processes were scientifically unknowable and, moreover, that everything worth knowing could be deduced from observation of behavior. Beginning around 1956, cognitivists took on both propositions, arguing that it was not only possible but productive to make testable inferences about mental processes.27

George Miller was the first to gain recognition for espousing the new approach. His Psychological Review article “The Magical Number Seven, Plus or Minus Two,” summarized several studies of the limits of short-term memory, showing how the mental recoding of information into larger chunks could expand its effective capacity.28 In 1958 David Broadbent’s Perception and Communication challenged behaviorist learning theory with an information-processing model.29 And in 1959, in one of the most significant moments in academic history, Noam Chomsky published “A Review of B. F. Skinner’s Verbal Behavior,” which took apart the behaviorist idea of language as a learned habit and, in the process, essentially ended the reign of behaviorism.30

While the interests of cognitive psychologists differed from those of humanistic psychologists, they weren’t antagonistic. Albert Ellis, an applied psychologist who forged a technique of cognitive therapy in the late 1950s, had a foot in both worlds. He was involved in AAHP’s founding and considered himself a humanistic psychologist, though he also recognized that the meaning of that self-description varied widely.31 In 1947 he had received, like Rogers and May, a doctorate from Teachers College, Columbia, where he had begun to develop a critique of behaviorist methods and psychoanalytic techniques. He had more cleanly broken from psychoanalysis in 1953, and began calling himself a “rational therapist.”32

Like the founders of humanistic psychology, Ellis subscribed to the guiding concepts of self-actualization (he believed all humans possess an innate motivation for reaching their potential) and the idea that individuals could determine their emotional fate.33 His applied techniques, however, differed markedly from the client-centered approach that came to dominate humanistic psychology. His work with patients was directive; he helped them to identify their self-defeating and irrational beliefs and behaviors, and to replace them systematically with more rational ideas through a form of self-talk. He began to teach this technique in the 1950s, formally proposed his theory in the latter part of the decade, and altered it somewhat in the 1960s in connection with Aaron Beck’s cognitive-behavioral therapy. While his methods fit under the umbrella of humanistic psychology in the early 1960s, a time when the field of psychotherapy was still firmly dominated by psychoanalysis and all other approaches had to band together for recognition, cognitive therapy would become firmly established in the 1970s, and, by the 1980s, would be king.

Though cognitivism had begun to challenge behaviorism in the late 1950s, Skinner himself, when asked in 1987 what had happened to behaviorism, contended that the ascendance of humanistic psychology (specifically as incarnated in Rogers, Maslow, and others) had been the greatest factor in its demise.34

If Skinner was right that humanistic psychology was the agent of behaviorism’s destruction, you wouldn’t have known it from looking at the average psychology department in the early 1960s. While humanistic ideas began to leach out into the wider culture almost at the moment of the movement’s founding, they met many obstacles to acceptance from American academic psychologists—a group that tended to be resistant to utopian thinking, slow to respond to intellectual paradigm shifts, and slower still to open up their definitions of science to revision.

Although Maslow and Rogers continued to publish at a fever pitch, and remained determined to reach intellectual audiences and generate acceptably rigorous scientific research, they experienced their academic positions as increasingly stultifying. Within their universities, they tended to feel unsupported, intellectually and financially. As early as 1959, Maslow wrote, “Very pleasant to be a big shot but doesn’t do my Brandeis salary much good. Nor can I get my papers published. Nor do grad students do my work for me.”35

Although Maslow would continue to publish in a way that more than fulfilled the expectations of a tenured professor, he grew increasingly impatient with performing the kind of science his peers seemed to expect. The frequency with which he proposed unconventional new theories without testing them was increasing. “It’s just that I haven’t got the time to do careful experiments myself,” wrote Maslow. “They take too long, in view of the years that I have left and the extent of what I want to do. So I myself do only ‘quick-and-dirty’ little pilot explorations, mostly with a few subjects only, inadequate to publish but enough to convince myself that they are probably true and will be confirmed one day. Quick little commando raids, guerrilla attacks.”36

Maslow did publish many of these pilot explorations, albeit with disclaimers about their methodological shortcomings and the necessity of further testing (by others).37 “At present,” wrote Maslow, “the only alternative is simply to refuse to work with the problem.” He apologized, in his study of self-actualizers, to those who “insist on conventional reliability, validity, sampling, etc.”38 For the most part the apology wasn’t accepted.

Maslow felt most appreciated outside of psychology departments. In 1961, he spent a sabbatical semester at a privately funded institute, the Western Behavioral Sciences Institute (WBSI) in La Jolla, California, where engineer-entrepreneur Andrew Kay funded his fellowship.39 The following year, he accepted another invitation from Kay to spend the summer consulting for his corporation, Non-Linear Systems, a plant at which workers assembled digital voltmeters (instruments for measuring the electric potential difference between two points in an electric circuit). Maslow was paid handsomely to visit the plant once a week, collecting his perceptions of management techniques and employee satisfaction and applying his theory of motivation in recommendations for increasing employee satisfaction.40

Rogers had been comparably dissatisfied with academia. Having accepted a research position at the University of Wisconsin in 1957, he spent the next several years in escalating conflict with his colleagues. In January of 1963 Rogers sent a memorandum to the faculty indicating his inclination to leave the university and describing his dissatisfaction with the department’s “fixed policies and philosophy.”41

The same year, Rogers received an offer from WBSI. Though he had turned down similar offers before, Rogers began to rethink his position. He wrote, “What was a university, at this stage in my career, offering me? I realized in my research it offered no particular help; in anything educational, I was forced to fit my beliefs into a totally alien mold; in stimulation, there was little from my colleagues because we were so far apart in thinking and in goals.”42

Compelled by the absence of “bureaucratic entanglements,” the “stimulation of a thoroughly congenial interdisciplinary group,” and what he saw as the superiority of the group’s educational model, Rogers accepted the offer from WBSI, leaving Wisconsin in 1964.43 Among his new senior colleagues were Lawrence Solomon, a humanistic psychologist who insisted on upholding the movement’s intellectual standards, and Sigmund Koch, a former behaviorist who had developed humanistic leanings while putting together a six-volume report, commissioned by the APA, on American psychology’s first fifty years.44

In addition to offering exceptional peer support, WBSI provided a hospitable environment for innovation and creativity in research, and thus for the execution of truly humanistic science. In a letter to his friends in 1963, Rogers wrote of WBSI, “It offers the complete and untrammeled freedom for creative thought of which every scholar dreams. I will have no obligation except to be a creative contributor to a new, congenial, pioneering organization.”45 WBSI director Richard Farson wrote that the institute’s “independence enabled it to avoid the limiting effects of the politics of knowledge that dominate establishment institutions, often closing down the investigation of unconventional thinking.”46 Independent research institutes also freed researchers from rigid expectations about consistent and measured contributions to their fields. Farson wrote that even beyond the impossibility of exercising “groundbreaking creativity” within universities, their sheer size was a major impediment to the production of novel theory, as “scale is the enemy of innovation.”47 Rogers agreed and explained to his friends that “this new emphasis in psychology—a humanistic, person-centered trend—has not had a chance to flower in University departments.”48

In spite of the significant advantages of affiliation with independent research institutes like WBSI, however, the disadvantages were also numerous. The biggest problem was that they further estranged innovative thought from mainstream academia, thus reducing its ability to effect meaningful change in the field. Maintaining a university affiliation had forced humanistic psychologists to try, at least occasionally, to change minds within the system, a dynamic that had been key to the influence of scholars like William James and Gordon Allport.49

Private institutes also gave researchers license to disregard even the most valid constraints of academic psychology. Without institutionally imposed standards for the content and methodology of scientific experimentation, many researchers took more liberties. Some went to extremes in this regard. Stanley Krippner, for example, was compelled by the powerful pull of the experiential, transcendental, and transpersonal. Krippner, who had earned his PhD from Northwestern University in 1961 and taught and directed a child-study center at Kent State for several years, transferred to the Maimonides Medical Center Dream Laboratory in 1964.50 He soon located himself on the outskirts even of humanistic psychology, pursuing investigations of parapsychology and telepathy.51

Most humanistic psychologists, though, were rooted enough in their training, and committed enough to the idea of revising psychological science to be more inclusive of humanistic methods, to continue pursuing “reputable” science even at the new institutes. Abraham Maslow expressed his hope that what was then called “humanistic psychology” would one day just be called “psychology.”52