28
MIND MINUS METAPHYSICS

At the end of 1959 the film director Alfred Hitchcock was producing a movie in absolute secrecy. Around the lot at Revue Studios, part of Universal Pictures in Los Angeles, the film was known on the clapper board and in company designation by its codename, ‘Wimpy.’ When it was ready, Hitchcock wrote to film critics in the press, begging them not to give away the ending and announcing at the same time that no member of the public would be allowed in after the film had started.

Psycho was a screen ‘first’ in many different ways. Hitherto Hitchcock had directed top-quality murder stories, set in exotic locations and usually made in Technicolor. In deliberate contrast, Psycho was cheap in appearance, filmed in black and white, and focused on an area of sleaze.1 There were unprecedented scenes of violence. Most arresting of all, however, was the treatment of madness. The film was actually based on the real-life case of Ed Gein, a ‘cannibalistic Wisconsin killer’ whose terrible deeds also inspired The Texas Chain Saw Massacre and Deranged. In Psycho, Hitchcock – fashionably enough – pinpointed the source of Norman Bates’s homicidal mania in his narrow and inadequate family and sexual history.2

The film starred Anthony Perkins and Janet Leigh, both of whom worked for Hitchcock for well below their usual fee in order to gain experience with a master storyteller (Leigh’s character was actually killed off halfway through the film, another innovation). The film is rich in visual symbolism meant to signify madness, schizophrenia in particular. Apart from the gothic setting in a gingerbread-house motel on a stormy night, each of the characters has something to hide – whether it is an illicit affair, stolen cash, a concealed identity, or an undiscovered murder. Mirrors are widely used to alter images, which are elsewhere sliced in two to suggest the reversal of reality and the cutting, split world of the violently insane.3 Anthony Perkins, who pretends he is in thrall to his mother when in reality he has killed her long ago, spends his time ‘stuffing birds’ (nightbirds, like owls, which also watch him). All this tension builds to what became the most famous scene in the film, the senseless slashing of Janet Leigh in the shower, where ‘the knife functions as a penis, penetrating the body in a symbolic rape’ and the audience watches – horrified and enthralled – as blood gurgles down the drain of the shower.4 Psycho is in fact a brilliant example of a device that would become much debased as time passed – the manipulation of the cinema audience so that, to an extent, it understands, or at least experiences, the conflicting emotions wrapped up in a schizophrenic personality. Hitchcock is at his most cunning when he has the murderer, Perkins/Bates, dispose of Janet Leigh’s body by sinking it in a car in a swamp. As the car is disappearing in the mud, it suddenly stops. Involuntarily, the audience wills the car to disappear – and for a moment is complicit in the crime.5

The film received a critical pasting when it was released, partly because the critics hated being dictated to over what they could and could not reveal. ‘I remember the terrible panning we got when Psycho opened,’ Hitchcock said. ‘It was a critical disaster.’ But the public felt otherwise, and although the movie cost only $800,000 to make, Hitchcock alone eventually recouped more than $20 million. In no time the movie became a cult. ‘My films went from being failures to masterpieces without ever being successes,’ said Hitchcock.6

Attempts to understand the mentally ill as if their sickness were a maladaptation, a pathology of logic or philosophy rather than a physical disease, have a long history and are at the root of the psychoanalytic school of psychiatry. In the same year as Hitchcock’s film, a psychoanalytic book appeared in Britain that also achieved cult status quickly. Its author was a young psychiatrist from Glasgow in Scotland who described himself as an existentialist and went on to become a fashionable poet. This idiosyncratic career path was mirrored in his theories about mental illness. In The Divided Self, Ronald D. Laing applied Sartre’s existentialism to frankly psychotic schizophrenics in an attempt to understand why they went mad. Laing was one of the leaders of a school of thought (David Cooper and Aaron Esterson were others) which argued that schizophrenia was not an organic illness, despite evidence even then that it ran in families and was therefore to some extent inherited, but represented a patient’s private response to the environment in which he or she was raised. Laing and his colleagues believed in an entity they labelled the ‘schizophrenogenic’ – or schizophrenia-producing – family. In The Divided Self and subsequent books, Laing argued that investigation of the backgrounds of schizophrenics showed that they had several things in common, the chief of which was a family, in particular a mother, who behaved in such a way that the person’s sense of self became separated from his or her sense of body, and that life was a series of ‘games’ which threatened to engulf the patient.7

The efficacy of Laing’s theories, and their success or otherwise in generating treatment, will be returned to in just a moment, but Laing was important in more than the merely clinical sense: insofar as his approach represented an attempt to align existential philosophy with Freudian psychology, his theories were part of an important crossover that took place between about 1948 and the mid-1960s. This period saw the death of metaphysics as it had been understood in the nineteenth century. It was philosophers who laid it to rest, and ironically, one of the chief culprits was the Waynflete Professor of Metaphysical Philosophy at Oxford University, Gilbert Ryle. In The Concept of Mind, published in 1949, Ryle delivered a withering attack on the traditional, Cartesian concept of duality, which claimed an essential difference between mental and physical events.8 Using a careful analysis of language, Ryle gave what he himself conceded was a largely behaviourist view of man. There is no inner life, Ryle said, in the sense that a ‘mind’ exists independently of our actions, thoughts, and behaviours. When we ‘itch’ to do something, we don’t really itch in the sense that we itch if a mosquito bites us; when we ‘see’ things ‘in our mind’s eye,’ we don’t see them in the way that we see a green leaf. This is all a sloppy use of language, he says, and most of his book is devoted to going beyond this sloppiness. To be conscious, to have a sense of self, is not a byproduct of the mind; it is the mind in action. The mind does not, as it were, ‘overhear’ us having our thoughts; having the thoughts is the mind in action.9 In short, there is no ghost in the machine – only the machine. Ryle examined the will, imagination, intellect, and emotions in this way, demolishing at every turn the traditional Cartesian duality, ending with a short chapter on psychology and behaviourism. He took psychology to be more like medicine – an agglomeration of loosely connected inquiries and techniques – than a proper science as generally understood.10 In the end, Ryle’s book was more important for the way it killed off the old Cartesian duality than for anything it did for psychology.

While Ryle was developing his ideas in Oxford, Ludwig Wittgenstein was pursuing a more or less parallel course in Cambridge. After he had published Tractatus Logico-Philosophicus in 1921, Wittgenstein abandoned philosophy for a decade, but he returned in 1929 to Cambridge, where at first he proceeded to dismantle the philosophy of the Tractatus, influential though that had been, and replace it with a view that was in some respects diametrically opposite. Throughout the 1930s and the 1940s he published nothing, feeling ‘estranged’ from contemporary Western civilisation, preferring to exert his influence through teaching (the ‘deck-chair’ seminars that Turing had attended).11 Wittgenstein’s second masterpiece, Philosophical Investigations, was published in 1953, after his death from cancer in 1951, aged sixty-two.12 His new view took Ryle’s ideas much further. Essentially, Wittgenstein thought that many philosophical problems are false problems, mainly because we are misled by language. All around us, says P. M. S. Hacker, who wrote a four-volume commentary on Philosophical Investigations, are grammatical similarities that mask profound logical differences: ‘Philosophical questions are frequently not so much questions in search of an answer as questions in search of a sense. “Philosophy is a struggle against the bewitchment of our understanding by means of language.”’ For example, ‘the verb “to exist” looks no different from such verbs as “to eat” or “to drink”, but while it makes sense to ask how many people in College don’t eat meat or drink wine, it makes no sense to ask how many people in College don’t exist.’13

This is not just a language game.14 Wittgenstein’s fundamental idea was that philosophy exists not to solve problems but to make the problems disappear, just as a knot in a piece of string disappears when it is unravelled. Put another way, ‘Problems are solved, not by giving new information, but by [re]arranging what we have always known.’15 The way forward, for Wittgenstein, was to rearrange the entire language.16 No man could do that on his own, and Wittgenstein started by concentrating, as Ryle had done, on the mind-body duality. He went further in linking with it what he called the brain-body duality. Both dualities, he said, were misconceptions. Consciousness was misconceived when it was ‘compared with a self-scanning mechanism in the brain.’17 He took as his example pain. To begin with, he explains that one does not ‘have’ a pain in the sense that one has a penny. ‘A pain cannot go round the world, like a penny can, independent of anyone owning it.’ Equally, we do not look to see whether we are groaning before reporting that we have a pain – in that sense, the groan is part of the pain.18 Wittgenstein next argued that the ‘inner’ life, ‘introspection,’ and the privacy of experience have also been misconceived. The pain that one person has is the same that another person has, just as two books can have covers coloured in the same red. Red does not exist in the abstract, and neither does pain.19 On inspection, Wittgenstein is saying, all the so-called mental things we do, do not need ‘mind’: ‘To make up one’s mind is to decide, and to be in two minds about something is to be undecided…. There is such a thing as introspection but it is not a form of inner perception … it is the calling up of memories; of imagined possible situations, and of the feelings that one would have if…’20 ‘I want to win’ is not a description of a state of mind but a manifestation of it.21 Talk of ‘inner’ and ‘outer’ in regard to ‘mental’ life is, for Wittgenstein, only metaphor. We may say that toothache is physical pain and that grief is mental. But grief is not painful in the sense that toothache is; it does not ‘hurt’ as toothache hurts.22 For Wittgenstein, we do not need the concept of mind, and we need to be very careful about the way we think about ‘brain.’ It is the person who feels pain, hope, disappointment, not his brain.

Philosophical Investigations was more successful in some areas than in others. But by Wittgenstein’s own criteria, it made some problems disappear, the problem of mind being one of them. His was one of the books that helped move attention toward consciousness, which Wittgenstein did not successfully explain, and which came to dominate the attention of philosophers and scientists at the end of the century.

The consequences of Philosophical Investigations for Freudian psychoanalysis have never been worked through, but Wittgenstein’s idea of ‘inner’ and ‘outer’ as merely metaphor to a large extent vitiates Freud’s central ideas. The attack on Freud was growing anyway in the late 1950s and has been chronicled by Martin Gross. Although the interwar years had been the high point of the Freudian age, the first statistical doubts over the efficacy of psychoanalytic treatment occurred as early as the 1920s, when a study of 472 patients from the clinic of the Berlin Psychoanalytic Institute revealed that only 40 percent could be regarded as cured. Subsequent studies in the 1940s at the London Clinic, the Chicago Institute for Psychoanalysis, and the Menninger Clinic in Kansas likewise revealed an average ‘cure rate’ of 44 percent. A series of studies throughout the 1950s showed with some consistency that ‘a patient has approximately a 50–50 chance of getting off the couch in somewhat better mental condition than when he first lay down on it.’23 Most damaging of all, however, was the study carried out in the mid-1950s by the Central Fact-Gathering Committee of the American Psychoanalytic Association (the APsaA), chaired by Dr Harry Weinstock. His committee collected evidence on 1,269 psychoanalytic cases treated by members of the APsaA. The report, on the largest sample to date, was eagerly awaited, but in December 1957 the association decided against publication, noting that the ‘controversial publicity on such material cannot be of benefit in any way.’24 Mimeographed copies of the report then began to circulate confidentially in the therapeutic community, and gossip about the results preoccupied the psychiatric profession until the APsaA finally consented to release the findings – a decade later. Then the reason for the delay became clear. The ‘controversial material’ showed that, of those originally accepted for treatment, barely one in six were cured. This was damning enough, being the profession’s own report; but it wasn’t just the effectiveness of psychoanalysis that came under threat; so did Freud’s basic theories. His idea that we are all a little bisexual was challenged, and so was the very existence of the Oedipus complex and infantile sexuality. For example, penile erection in infants had been regarded by psychoanalysts as firm evidence of infantile sexuality, but H. M. Halverson observed nine infants for ten days each – and found that seven of them had an erection at least once a day.25 ‘Rather than being a sign of pleasure, the erections tended to show that the child was uncomfortable. In 85 percent of cases, the erection was accompanied by crying, restlessness, or the stiff stretching of legs. Only when the erection subsided did the children become relaxed.’ Halverson concluded that the erection was the result of abdominal pressure on the bladder, ‘serving a simple bodily, rather than a Freudian, need.’ Likewise, sleep research shows that the forgetting of dreams – which according to psychoanalysis are repressed – can be explained more simply. We dream at a certain stage of sleep, now known as REM sleep, for the rapid eye movements that occur at this time. If the patient is woken during REM sleep, he or she can easily remember dreams, but grows very irritated if woken too often, indicating that REM sleep is necessary for well-being. After REM sleep, however, later in the sleep cycle, if that person is wakened, remembrance of dreams is much harder, and there is much less irritation. Dreams are naturally evanescent.26

Finally, there was the growth in the 1950s of anti-Freudian anthropological evidence. According to Freudian theory, the breast-feeding of infants is important, helping to establish the basic psychological bond between mother and child, which is of course itself part of the infant’s psychosexual development. In 1956, however, the anthropologist Ralph Linton reported on the women of the Marquesas Islands, ‘who seldom nurse their babies because of the importance of breasts in their culture.’ The Marquesan infant is simply laid on a stone and casually fed a mixture of coconut milk and breadfruit.27 Nonetheless, the Marquesan children grew up without any special problems, their relationships with their mothers unimpaired.

Beginning in the 1950s, Freud and Jung came in for increasingly severe criticism for being unscientific and for using evidence only when it suited them.

Not that other forms of psychology were immune to criticism. In the same year that Wittgenstein’s posthumous Philosophical Investigations appeared, Burrhus F. Skinner, professor of psychology at Harvard University, published the first of his controversial works. Raised in the small Pennsylvania town of Susquehanna, Fred Skinner at first wanted to be a writer and studied English at Hamilton College, where Robert Frost told him that he was capable of ‘real niceties of observation.’ Skinner never developed as a writer, however, because ‘he found he had nothing to say.’ And he gave up the saxophone because it seemed to him to be ‘the wrong instrument for a psychologist.’28 Abandoning his plan to be a writer, he studied psychology at Harvard, so successfully that in 1945 he became a professor.

Skinner’s Science and Human Behavior overlapped more than a little with Ryle and Wittgenstein.29 Like them, Skinner regarded ‘mind’ as a metaphysical anachronism and concentrated on behavior as the object of the scientist’s attention. And like them he regarded language as an at-times-misleading representation of reality, it being the scientist’s job, as well as the philosopher’s, to clarify its usage. In Skinner’s case he took as his starting point a series of experiments, mainly on pigeons and rats, which showed that if their environment was strictly controlled, especially in regard to the administration of rewards and punishments, their behavior could be altered considerably and in predictable ways. This demonstration of rapid learning, Skinner thought, was both philosophically and socially important. He accepted that instinct accounted for a sizeable proportion of human conduct, but his aim, in Science and Human Behavior, was to offer a simple, rational explanation for the rest of the behavioral repertoire, which he believed could be done using the principles of reinforcement. In essence Skinner sought to show that the vast majority of behaviors, including beliefs, certain mental illnesses, and even ‘love’ in some circumstances, could be understood in terms of an individual’s history, the extent to which his or her behavior had been rewarded or punished in the past. For example, ‘You ought to take an umbrella’ may be taken to mean: ‘You will be reinforced for taking an umbrella.’ ‘A more explicit translation would contain at least three statements: (1) Keeping dry is reinforcing to you; (2) carrying an umbrella keeps you dry in the rain; and (3) it is going to rain…. The “ought” is aversive, and the individual addressed may feel guilty if he does not then take an umbrella.’30 On this reading of behavior, Skinner saw alcoholism, for example, as a bad habit acquired because an individual may have found the effects of alcohol rewarding, in that it relaxed him in social situations where otherwise he may have been ill at ease. He objected to Freud because he thought psychoanalysis’s concern with ‘depth’ psychology was wrongheaded; its self-declared aim was to discover ‘inner and otherwise unobservable conflicts, repressions, and springs of action. The behavior of the organism was often regarded as a relatively unimportant by-product of a furious struggle taking place beneath the surface of the mind.’31 Whereas for Freud neurotic behavior was the symptom of the root cause, for Skinner neurotic behavior was the object of the inquiry – stamp out the neurotic behavior, and by definition the neurosis has gone. One case that Skinner considers in detail is that of two brothers who compete for the affection of their parents. As a result one brother behaves aggressively toward his sibling and is punished, either by the brother or the parents. Assume this happens repeatedly, to the point where the anxiety associated with such an event generates guilt in the ‘aggressive’ brother, leading to self-control. In this sense, says Skinner, the brother ‘represses’ his aggression. ‘The repression is successful if the behavior is so effectively displaced that it seldom reaches the incipient state at which it generates anxiety. It is unsuccessful if anxiety is frequently generated.’ He then goes on to consider other possible consequences and their psychoanalytic explanations. As a result of reaction formation the brother may engage in social work, or some expression of ‘brotherly love’; he may sublimate his aggression by, say, joining the army or working in an abattoir; he may displace his aggression by ‘accidentally’ injuring someone else; he may identify with prizefighters. For Skinner, however, we do not need to invent deep-seated neuroses to explain these behaviors. ‘The dynamisms are not the clever machinations of an aggressive impulse struggling to escape from the restraining censorship of the individual or of society, but the resolution of complex sets of variables. Therapy does not consist of releasing a trouble-making impulse but of introducing variables which compensate for or correct a history which has produced objectionable behavior. Pent-up emotion is not the cause of disordered behavior; it is part of it. Not being able to recall an early memory does not produce neurotic symptoms; it is itself an example of ineffective behavior.’32 In this first book, Skinner’s aim was to explain behavior, and he ended by considering the many controlling institutions in modern society – governments and laws, organised religion, schools, psychotherapy, economics and money – his point being that many systems of rewards and punishments are already in place and, more or less, working. Later on, in the 1960s and 1970s, his theories enjoyed a vogue, and in many clinics ‘behavior therapy’ was adopted. In these establishments, symptoms were treated without recourse to any so-called underlying problem. For example, a man who felt he was dirty and suffered from a compulsive desire to collect towels was no longer treated for his inner belief that he was ‘dirty’ and so needed to wash a great deal, but simply rewarded (with food) on those days when he didn’t collect towels. Skinner’s theories were also followed in the development of teaching machines, later incorporated into computer-aided instruction, whereby pupils follow their own course of instruction, at their own pace, depending on rewards given for correct answers.

Skinner’s approach to behavior, his understanding of what man is, was looked upon by many as revolutionary at the time, and he was even compared to Darwin.33 His method linked Ryle and Wittgenstein to psychology. He maintained, for example, that consciousness is a ‘social product’ that emerges from the human interactions within a verbal community. But verbal behavior, or rather Verbal Behavior, published in 1957, was to be his undoing.34 Like Ryle and Wittgenstein, Skinner understood that if his theory about man was to be convincing, it needed to explain language, and this he set about doing in the 1957 book. His main point was that our social communities ‘select’ and fine-tune our verbal utterances, what we ‘choose’ to say, by a process of social reinforcement, and this system, over a lifetime, determines the form of speech we use. In turn this same system of reinforcement of our verbal behavior helps shape our other behaviors – our ‘character’ – and the way that we understand ourselves, our consciousness. Skinner argued that there are categories of speech acts that may be grouped according to their relationship to surrounding contingencies. For example, ‘mands’ are classes of speech behavior that are followed by characteristic consequences, whereas ‘tacts’ are speech acts socially reinforced when emitted in the presence of an object or event.35 Essentially, under this system, man is seen as the ‘host’ of behaviors shaped from outside, rather than as autonomous. This is very different from the Freudian view, or from more traditional metaphysical versions of man, in which something comes from within. Unfortunately, from Skinner’s point of view, his radical ideas suffered a withering attack in a celebrated – notorious – review of his book in the journal Language in 1959, by Noam Chomsky. Chomsky, thirty-one in 1959, was born in Pennsylvania, the son of a Hebrew scholar who interested him in language. Chomsky’s own book, Syntactic Structures, was also published in 1957, the same year as Skinner’s, but it was the review in Language, and in particular its vitriolic tone, that drew attention to the young author and initiated what came to be called the Chomskyan revolution in psychology.36

Chomsky, by then a professor at MIT, just two stops on the subway from Harvard, argued that there are inside the brain universal, innate, grammatical structures; in other words, that the ‘wiring’ of the brain somehow governs the grammar of languages. He based much of his view on studies of children in different countries that showed that whatever their form of upbringing, they tended to develop their language skills in the same order and at the same pace everywhere. His point was that young children learn to speak spontaneously without any real training, and that the language they learn is governed by where they grow up. Moreover, they are very creative with language, using at a young age sentences that are entirely new to them and that cannot have been related to experience. Such sentences cannot therefore have been learned in the way that Skinner and others said.37 Chomsky argued that there is a basic structure to language, that this structure has two levels, surface structure and deep structure, and that different languages are more similar in their deep structure than in their surface structure. For example, when we learn a foreign language, we are learning the surface structure. This learning is in fact only possible because the deep structure is much the same. German or Dutch speakers may put the verb at the end of a sentence, which English or French speakers do not, but German, Dutch, French, and English have verbs, which exist in all languages in equivalent relationship to nouns, adjectives, and so on.38 Chomsky’s arguments were revolutionary not only because they went against the behaviorist orthodoxy but because they appeared to suggest that there is some sort of structure in the brain that is inherited and that, moreover, the brain is prewired in some way that, at least in part, determines how humans experience the world.

The Chomsky-Skinner affair was as personal as Snow-Leavis. Skinner apparently never finished reading the review, believing the other man had completely – and perhaps deliberately – misunderstood him. And he never replied.39 One consequence of this, however, was that Chomsky’s review became more widely known, and agreed with, than Skinner’s original book, and as a result Skinner’s influence has been blunted. In fact, he never denied that a lot of behavior is instinctive; but he was interested in how it was modified and could, if necessary, be modified still further. His views have always found a small but influential following.

Whatever the effects of Chomsky’s attack on Skinner, it offered no support for Freud or psychoanalysis. Although conventional Freudian analysis remained popular in a few isolated areas, like Manhattan, several other well-known scientists, while not abandoning Freudian concepts entirely, began to adapt and extend them in more empirically grounded ways. One of the most influential was John Bowlby.

In 1948 the Social Commission of the United Nations decided to make a study of the needs of homeless children: in the aftermath of war it was realised that in several countries large numbers of children lacked fully formed families because so many men had been killed in the fighting. The World Health Organization (WHO) offered to provide an investigation into the mental health aspects of the problem. Dr Bowlby was a British psychiatrist and psychoanalyst who had helped select army officers during the war. He took up a temporary appointment with the WHO in January 1950, and during the late winter and early spring of that year he visited France, Holland, Sweden, Switzerland, Great Britain, and the United States of America, holding discussions with workers involved in child care and child guidance. These discussions led to the publication, in 1951, of Maternal Care and Mental Health, a famous report that hit a popular nerve and brought about a wholesale change in the way we think about childhood.40

It was this report that first confirmed for many people the crucial nature of the early months of an infant’s life, when in particular the quality of mothering was revealed as all-important to the subsequent psychological development of a child. Bowlby’s book introduced the key phrase maternal deprivation to describe the source of a general pathology of development in children, the effects of which were found to be widespread. The very young infant who went without proper mothering was found to be ‘listless, quiet, unhappy, and unresponsive to a smile or a coo,’ and later to be less intelligent, bordering in some cases on the defective.41 No less important, Bowlby drew attention to a large number of studies which showed that victims of maternal deprivation failed to develop the ability to hold relationships with others, or to feel guilty about their failure. Such children either ‘craved affection’ or were ‘affect-less.’ Bowlby went on to show that studies in Spain during the civil war, in America, and among a sample of Copenhagen prostitutes all confirmed that delinquent groups were composed of individuals who, more than their counterparts, were likely to have come from broken homes where, by definition, there had been widespread maternal deprivation.42 The thrust of this research had two consequences. On the positive side, Bowlby’s research put beyond doubt the idea that even a bad home is better for a child than a good institution. It was then the practice in many countries for illegitimate or unwanted children to be cared for in institutions where standards of nutrition, cleanliness, and medical care could be closely monitored. But it became clear that such an environment was not enough, that something was lacking which affected mental health, rather in the way that vitamins had been discovered to be lacking in the artificial diets created for neglected children in the great cities of the nineteenth century. And so, following publication of the WHO report, countries began to change their approach to neglected children: adoptions were favoured over fostering, children with long-term illnesses were no longer separated from their parents when they went to hospital, and mothers sent to prison were allowed to take their young babies with them. At work, maternity leave was extended to include not just the delivery but the all-important early months of the child’s life. There was in general a much greater sensitivity to the nature of the mother-child bond.43

Less straightforward was the link the WHO report found between a disrupted early family life and later delinquency and/or inadequacy. This was doubly important because children from such ‘broken’ families also proved in many cases to be problem parents themselves, thus establishing what was at first called ‘serial deprivation’ and later the ‘cycle of deprivation.’ Not all deprived children became delinquent; and not all delinquent children came from broken homes (though the great majority did). The exact nature of this link assumed greater intellectual prominence later on, but in the 1950s the discovery of the relationship between broken homes and delinquency, mediated via maternal deprivation, offered hope for the amelioration of social problems that disfigured postwar society in many Western countries.

The great significance of Bowlby’s report was the way it took an essentially Freudian concept – the bond between mother and child – and examined it scientifically, using objective measures of behavior to understand what was going on, rather than concentrating on the inner workings of ‘the mind.’ As a psychoanalyst, Bowlby had been led by Freud’s work to focus on the mother-child bond, and to discover its vital practical significance, but Maternal Care and Mental Health has only one reference to Freud, and none at all to the unconscious, the ego, id, or superego. In fact, Bowlby was as much influenced by his observations of behavior among animals, including a series of studies carried out in the 1930s in Nazi Germany. So Bowlby’s work was yet another instance of ‘mind’ being eschewed in favour of behavior. The fact that he was a psychoanalyst himself only underlined the inadequacy of traditional Freudian concepts.

Interest in the child as a psychological entity had been spasmodically entertained since the 1850s. The Journal of Educational Psychology was founded in the United States in 1910, and the Yale Psycho-Clinic, which opened a year later, was among the first to study babies systematically. But it was in Vienna, in the wake of World War I, that child psychology really began in earnest, due partly to the prevailing Freudian atmosphere, now much more ‘respectable’ than before, and partly to the straitened circumstances of the country, which affected children particularly badly. By 1926 there were forty different agencies in Vienna concerned with child development.

The man who was probably the greatest child psychologist of the century was influenced less by Freud than by Jung. Jean Piaget was born in Neuchâtel, Switzerland, in 1896. He was brilliant even as a boy, publishing his first scientific paper when he was ten, and by fifteen he had a Europe-wide reputation for a series of reports on molluscs. He studied psychiatry under both Eugen Bleuler (who coined the term schizophrenia) and Carl Jung, then worked with Théodore Simon at the Sorbonne.44 Simon had collaborated with Alfred Binet on intelligence tests, and in Paris Piaget was given the task of trying out a new test devised in England by Cyril Burt. This test had questions of the following kind: ‘Jane is fairer than Sue; Sue is fairer than Ellen; who is fairer, Jane or Ellen?’45 Burt was interested in intelligence in general, but Piaget took something rather different from this test, an idea that was to make him far more famous and influential than Burt ever was. Piaget’s central idea had two aspects. First, he claimed that children are, in effect, tabulae rasae, with no inbuilt logical – i.e., intellectual – capabilities; rather, these are learned as they grow up. Second, a child goes through a series of stages in his or her development, as he or she grasps various logical relations and then applies them to the practicalities of life. These theories of Piaget arose from a massive series of experiments carried out at the International Centre of Genetic Epistemology, which Piaget founded in Geneva in 1955. (Genetic epistemology is concerned with the nature and origins of human knowledge.)46 Here there is space for just one experiment. At six months a baby is adept at reaching for things, lifting them up, and dropping them. However, if an object is placed under a cushion, the baby loses interest even if the object is still within reach. Piaget claimed, controversially, that this is because the six-month-old child has no conception that unseen objects continue to exist. By roughly nine months, the child no longer has this difficulty.47

Over the years, Piaget described meticulously the infant’s growing repertoire of abilities in a series of experiments that were close to being games.48 Although their ingenuity is not in doubt, critics found some of his interpretations difficult to accept, chiefly that at birth the child has no logic whatsoever and must literally ‘battle with the world’ to learn the various concepts needed to live a successful life.49 Many critics thought he had done no more than observe a maturational process, as the child’s brain developed according to the ‘wiring’ set down at birth and based, as Chomsky had said, on the infant’s heredity. For these critics, logic ‘was the engine of development, not the product,’ as Piaget claimed.50 In later years the battle between nature and nurture, and their effects on behaviour, would grow more heated, but the significance of Piaget was that he aligned himself with Skinner and Bowlby in regarding behavior as central to the psychologist’s concern, and in showing how the first few years of life are all-important to later development. Once again, with Piaget the concept of mind took a back seat.

One other development in the 1950s helped discredit the traditional concept of mind: medical drugs that influenced the workings of the brain. As the century wore on, one ‘mental’ condition after another had turned out to have a physical basis: cretinism, general paralysis of the insane, pellagra (a nervous disorder caused by niacin deficiency) – all had been explained in biochemical or physiological terms and, more important, shown themselves as amenable to medication.51

Until about 1950 the ‘hard core’ of insanity – schizophrenia and the manic-depressive psychoses – lacked any known physical basis. Beginning in the 1950s, however, even these illnesses began to come within the scope of science, three avenues of inquiry joining together to form one coherent view.52 First, from the study of nerve cells and the substances that governed the transmission of the nerve impulse from one cell to another, specific chemicals were isolated. This implied that modification of these chemicals could perhaps help in treatment by either speeding up or inhibiting transmission. Second, the antihistamines developed in the 1940s as remedies for motion sickness were found to have the side effect of making people drowsy – i.e., they exerted an effect on the brain. Third, it was discovered that the Indian plant Rauwolfia serpentina, extracts of which were used in the West for treatment of high blood pressure, was also used in India to control ‘overexcitement and mania.’53 The Indian drug acted like the antihistamines, the most active substance being promethazine, commercially known as Phenergan. Experimenting with variants of promethazine, the Frenchman Henri Laborit hit on a substance that became known as chlorpromazine, which produced a remarkable state of ‘inactivity or indifference’ in excited or agitated patients.54 Chlorpromazine was thus the first tranquiliser.

Tranquilisers appeared to work by inhibiting neurotransmitter substances, like acetylcholine or noradrenaline. It was natural to ask what effect might be achieved by substances that worked in the opposite way – might they, for instance, help relieve depression? At the time the only effective treatment for chronic depression was electroconvulsive therapy. ECT, which many viewed as brutal despite the fact that it often worked, was based on a supposed antagonism between epilepsy and schizophrenia: induction of artificial fits was believed to help. In fact, the first breakthrough arose accidentally. Administering the new antituberculosis drug isoniazid, doctors found there was a marked improvement in the well-being of the patients. Their appetites returned; they put on weight and they cheered up. Psychiatrists quickly discovered that isoniazid and related compounds were fairly similar to neurotransmitters, in particular the amines found in the brain.55 These amines, it was already known, were decomposed by a substance called monoamine oxidase; so did isoniazid achieve its effect by inhibiting monoamine oxidase, preventing it from decomposing the neurotransmitters? The monoamine oxidase inhibitors, though they worked well enough in relieving depression, had too many toxic side effects to last as a family of drugs. Shortly afterward, however, another relative of chlorpromazine, imipramine, was found to be effective as an antidepressant, as well as increasing people’s desire for social contact.56 It entered widespread use as Tofranil.

All these substances reinforced the view that the ‘mind’ was amenable to chemical treatment. During the 1950s and early 1960s, many tranquilisers and antidepressants came into use. Not all were effective with all patients; each had side effects. But whatever their shortcomings, and despite the difficulties and complexities that remain, even to this day, these two categories of drugs, besides relieving an enormous amount of suffering, pose profound questions about human nature. They confirm that psychological moods are the result of chemical states within the brain, and therefore throw into serious doubt the traditional metaphysical concept of mind.

In trying to be an amalgam of Freud and Sartre, of psychoanalysis and existentialism, R. D. Laing’s ideas were going against the grain that was then becoming established in psychiatry. Why, then, when it is debatable whether Laing’s approach ever cured anyone, did he become a cult figure?

In the context of the times, Laing and colleagues such as David Cooper in Britain and Herbert Marcuse in America focused their attention on the personal liberation of individuals in a mass society, as opposed to the earlier Marxist idea of liberation of an entire class through revolution. Gregory Bateson, Marcuse, and Laing all argued that man lived in conflict with mass society, that society and the unconscious were constantly at war, the schizophrenic simply the most visible victim in this war.57 The intolerable pressures put on modern families led to the famous ‘double bind,’ in which all-powerful parents tell a child one thing but do another, with the result that children grow up in perpetual conflict. Essentially, Laing and the others were saying that society is mad and the schizophrenic response is no more or less than a rational reaction to that complex, confusing world, if only the private logic of the schizophrenic can be unravelled. For Laing, families were ‘power units’ on top of whatever else they might be, and it is liberation from this power structure that is part of the function of psychiatry. This led to experiments in specially created clinics where even the power structure between psychiatrist and patient was abolished.

Laing became a cult figure in the early 1960s, not only because of his radical approach to schizophrenia (‘anti-psychiatry’ and ‘radical psychiatry’ became popular terms), but also because of his approach to experience.58 From about 1960, Laing was a fairly frequent user of the so-called mind-altering drugs, including LSD. Like others, he believed that the ‘alternative consciousness’ they provided could be clinically useful in liberating patients from the false consciousness created by schizophrenogenic families, and for a time he persuaded the British Home Office to give him a licence to experiment (in his offices in Wimpole Street, London) with LSD, which was then manufactured commercially in Czechoslovakia.59 As the 1960s progressed, Laing and Cooper were taken up by the New Left. The linking of psychiatry and politics seemed new, radical, in Britain but went back to the teachings of the Frankfurt School and its original attempts to marry Marx and Freud. This is one reason why the Laing cult was overshadowed by the Marcuse cult in America.

Herbert Marcuse, sixty-two in 1960, had been part of the Frankfurt School and, like Hannah Arendt, studied under Martin Heidegger and Edmund Husserl. With Max Horkheimer and Theodor Adorno he had emigrated to the United States following Hitler’s rise to power, but unlike them, he did not return once the war was over. He put his linguistic skills at the disposal of wartime intelligence and remained in government service for some time after 1945.60 An erstwhile Marxist, Marcuse had his thinking radically changed by Hitler, Stalin, and World War II. Afterward he was motivated, he said, by three things: that Marxism had not predicted the rise of Nazism, the emergence out of capitalist society of an irrational, barbaric movement; the effects of technology on society, especially Fordism and Taylorism; and the fact that prosperous America still contained many hidden and uncomfortable assumptions and contradictions.61 Marcuse’s attempt at a rapprochement of Freud and Marx was more sophisticated than either Erich Fromm’s or Laing’s. He felt that Marxism, as an account of the human condition, failed because it took no measure of individual psychology. In Eros and Civilisation (1955) and One-Dimensional Man (1964), Marcuse examined the conformist mass society around him, where high-technology material goods were both the epitome of scientific rationalism and the means by which conformity in thought and behavior was maintained, and he offered a new emphasis on aesthetics and sensuality in human life.62 For him, the most worthwhile response to mass society on the part of the individual was negation (an echo of Sartre’s l’homme révolté). The United States was one-dimensional because there were no longer any permissible alternative ways to think or behave. His was, he said, a ‘diagnosis of domination.’ Life moved ‘forward’ by means of ‘progress,’ thanks to reason and ‘the rigidity’ of science.63 This was, he said, a stifling totality that had to be countered with imagination, art, nature, ‘negative thought,’ all put together in ‘a great refusal.’64 The already disastrous results in recent decades of very conformist societies, the new psychologies of mass society and affluence, what were perceived as the dehumanising effects of positivist science and philosophy – all combined, for Marcuse, into ‘a criminally limited’ one-dimensional world.65 For many, Laing and Marcuse went together because the former’s schizophrenics were the natural endpoint of the one-dimensional society, the reject-victims of a dehumanising world where the price of nonconformity was the risk of madness. This had uncomfortable echoes of Thomas Mann and Franz Kafka, looking back even to the speeches of Hitler, who had threatened with imprisonment the artists who painted in ways he thought ‘degenerate.’ In the early 1960s the baby-boom generation was reaching university age. The universities were expanding fast, and on campus the notions of Laing, Marcuse, and others, though quite at variance with the clinical evidence, nonetheless proved irresistible. Riesman had found that it was a characteristic of the ‘other-directed’ personality that it hated its own conformist image. The popularity of Laing and Marcuse underlines that. And so the stage was set for personal, rather than political, change. The 1960s were ready to begin.