Across its fairly short history, psychology has had several major theoretical movements. Perhaps the most important are Behaviorism, Gestalt Theory, Psychoanalytic and Psychodynamic Theory, Humanistic Theories, Attachment Theory, Sociobiology, Neurobiological Theories, and Cognitive Science. Some of these movements were natural outgrowths of earlier ones. Others were reactions against earlier approaches. Most contemporary psychologists do not limit themselves to any one theoretical view. Nonetheless, these movements have shaped psychology’s history and continue to influence contemporary psychologists’ orientation to research and practice. Thus, it is important to understand the major movements in the history of psychology to truly appreciate modern findings.
Behaviorism is the school of psychology that considers observable behavior to be the only worthwhile object of study. Behaviorists believe mental phenomena are impossible to measure objectively and thus impossible to prove. They therefore focus on the processes underlying behavioral change, specifically classical (or associative or Pavlovian) conditioning and operant conditioning. These basic learning principles operate in humans and animals alike, or at least in mammals and birds.
Major figures of behaviorism included John B. Watson (1878–1958), Edward Thorndike (1874–1947), and B.F. Skinner (1904–1990). Although Freudian psychoanalysis, educational psychology, and other more mentalistic schools of psychology continued in tandem, behaviorism was the dominant force in American psychology until well into the middle of the twentieth century.
Edward Thorndike (1874–1947) was originally a student of William James, although his research veered far from James’s fascination with consciousness. As a side note, he was also the author of the Thorndike dictionary. Thorndike turned to the study of chickens while still a graduate student and then expanded his research to observations of cats and dogs. By placing an animal in a puzzle box, or an enclosure with only one means of escape, he could study how the animal learned to escape the box. He observed that animals initially stumble on the escape route (e.g., stepping on a pedal or biting a string) through trial and error. With repeated trials, however, animals take less time to find their way out.
Based on this research, Thorndike formulated two laws of learning. The Law of Effect states that the effect of an action will determine the likelihood that it will be repeated. In other words, if the response generates a satisfying effect (the cat pulls the string and the door opens), the cat is more likely to pull the string again. If the action generates a negative impact, the animal is less likely to repeat the action.
This concept forms the bedrock of B.F. Skinner’s later theory of operant conditioning. Thorndike’s Law of Exercise likewise contributed to theories of associative conditioning. Here he stated that the strength of an association between a response and a stimulus will depend on the number of times they have been paired and the strength of their pairing. Thorndike thus took the associationism of philosophers such as Thomas Hobbes and placed it into a scientific paradigm.
In this view, the mind is no more than an opaque black box inserted between stimulus and response. As no one can see inside of it, it is not worthy of study. This extreme antimentalism of the behaviorists has been frequently criticized and was finally put to rest by the cognitive revolution in the 1960s. While the behaviorists made invaluable contributions to psychology regarding the fundamental principles of behavioral change, their devaluing and dismissal of subjective experience was extremely limiting.
Behaviorism is best described as a theory of learning and, in fact, is often referred to as learning theory. However, the mental process of learning had to be translated into behavioral terms. Thus learning occurs when a new behavior is repeatedly and consistently performed in response to a given stimulus.
Ivan Pavlov (1849–1936) was a Russian scientist who was originally interested in the digestive processes of animals. When trying to study how dogs digest food, he noticed the animals’ tendency to salivate at the sound or sight of their keeper shortly before feeding time. In other words, they salivated in the absence of actual food. Initially, this phenomenon was a nuisance, interfering with his study of digestion, but later it became the focus of his research. Pavlov’s studies provided the basis of the theory of classical conditioning, also known as associative or “Pavlovian” conditioning.
Pavlov developed his famous theory of classical conditioning after watching dogs salivate right before mealtime at the mere sight or sound of their keeper (iStock).
Although strict behaviorists avoided all emotional terms, learning theory fully depends on emotion. In Thorndike’s Law of Effect and the theory of operant conditioning that followed, the likelihood that a behavior will be increased or decreased depends on its emotional impact. Behavior is increased when it elicits positive emotion (reward) and reduced when it elicits negative emotion (punishment). While it is more difficult to speak of emotions in animals, modern scientists assume that the simple emotional processes involved in learning theory—that is, forms of pleasure and pain—apply to both animals and humans.
Associative conditioning, also called classical or Pavlovian conditioning, refers to a form of learning in which a person or animal is conditioned to respond in a particular way to a specific stimulus. If a neutral stimulus is paired with an emotionally meaningful one, then the neutral stimulus will become associated with the second stimulus and elicit the same response. For example, if a child learns to associate a particular perfume with a beloved grandmother, the child will develop a positive response to the perfume. In contrast, if the child learns to associate going to the doctor with getting a painful shot, then the child will learn to fear the doctor. This basic concept is used in child rearing, advertising, political campaigns, the treatment of addictions, and much of animal training.
The unconditioned stimulus is the stimulus that elicits a natural and unlearned response. For example, the child does not have to learn to feel pain from the shot. A dog does not have to learn to feel pleasure when fed. The conditioned stimulus is a formerly neutral stimulus that now elicits a response through its pairing with the unconditioned stimulus. The perfume that the child associates with his grandmother is a conditioned stimulus. The doctor that the child associates with the shot is also a conditioned stimulus.
The unconditioned response is the innate, unlearned response, for example loving the grandmother or feeling pain at the shot. The conditioned response is the learned response, for example loving the grandmother’s perfume or fearing the doctor.
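For readers who like to see this logic spelled out formally, the toy sketch below (an illustration added here, not drawn from Pavlov’s own work) uses the Rescorla-Wagner learning rule, a later mathematical model of classical conditioning, to show how the associative strength of a conditioned stimulus, such as the grandmother’s perfume, grows over repeated pairings with the unconditioned stimulus. All parameter values are arbitrary and chosen only for illustration.

```python
# Toy illustration of classical conditioning using the Rescorla-Wagner rule.
# Associative strength V of the conditioned stimulus (CS) is updated after each
# trial in which the CS is paired with the unconditioned stimulus (US):
#     V <- V + alpha * beta * (lambda_max - V)
# Parameter values are arbitrary illustrations, not empirical estimates.

def rescorla_wagner(trials, alpha=0.3, beta=1.0, lambda_max=1.0):
    """Return the associative strength after each CS-US pairing."""
    strengths = []
    v = 0.0                                   # no association before conditioning
    for _ in range(trials):
        v += alpha * beta * (lambda_max - v)  # learning slows as V approaches lambda_max
        strengths.append(v)
    return strengths

if __name__ == "__main__":
    # After a handful of pairings, the CS (e.g., the perfume) comes to elicit
    # most of the response originally produced only by the US.
    for trial, v in enumerate(rescorla_wagner(8), start=1):
        print(f"pairing {trial}: associative strength = {v:.2f}")
```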
Classical conditioning pervades everyday life. When we develop food aversions (e.g., a hatred of fish), phobias (a fear of dogs), or positive associations (an association of Paris with a romantic vacation), our behavior reflects classical conditioning. It is therefore no accident that so many advertising campaigns hire young, beautiful, and skimpily clad models. The advertisers want consumers to associate their product—be it a washing machine, paper clip, or automobile—with youth, beauty, and sex.
As (non-human) animals lack higher cognitive abilities, such as complex reasoning, symbolic thought, or language, associative conditioning is a primary way that animals learn. Does your cat love to sit on the couch and purr? Does she associate the couch with affection and attention? Does your dog start to bark and wag his tail when you put on your shoes? Does he associate your shoes with his walk?
In operant conditioning, pioneered by B.F. Skinner (1904–1990), behavior is influenced less by the stimulus with which it is associated than by the effect of that behavior. Operant conditioning builds on Thorndike’s Law of Effect. If the effect of the behavior is positive, then it is reinforced, and the behavior is more likely to recur. If the effect of the behavior is negative, then it is punished and therefore less likely to be repeated.
Starting in 1920, John Watson conducted a series of experiments on a baby named Albert B. to investigate classical conditioning in human beings. While these experiments supported the principles of conditioned learning, Watson was chillingly insensitive to the emotional impact of his research methods on the baby.
When Albert was about nine months old he was exposed to a series of white fuzzy items, including a white rat, rabbit, dog, monkey, and masks with and without white cotton hair. The presence of the rat was then paired with a loud noise created by banging a hammer against a steel pipe. This was repeated several times until little Albert grew terrified at the mere sight of the rat. Later experiments showed that Albert’s fearful reactions had generalized to other fuzzy white items, including a rabbit, dog, and Santa Claus mask. This generalized fear was still present several months after the original experiment.
Today human subjects review committees are required in all research institutions in order to protect the rights of research subjects.
Reinforcers are consequences of a behavior that increase the likelihood that the behavior will be repeated. For example, if a child is given an ice cream cone as consolation after throwing a temper tantrum, the temper tantrum has been reinforced. Reinforcers can be either positive or negative.
Positive reinforcement is also called reward and refers to a positive consequence of a behavior, which increases its likelihood of recurring. For example, employees are paid to do their jobs and performers who do well are applauded. Negative reinforcement, to be distinguished from punishment, involves the removal of a negative condition as a consequence of the targeted behavior. If dieting removes unwanted weight, the dieting has been negatively reinforced.
In the 1980s, a man with a saxophone and a small kitten on his shoulder frequented the New York City subways. He would play a harsh note on the saxophone as loudly as possible and offer to stop only if the passengers gave him money. This man was utilizing the principles of negative reinforcement (though it might also be called blackmail).
One example of positive reinforcement is a mouse receiving cheese for successfully navigating a maze (iStock).
Punishment involves the introduction of a negative consequence to a behavior with the intent of diminishing the frequency of the behavior. When a child is grounded for getting into a fight, this is punishment. The parents are trying to diminish the targeted behavior. Likewise, the criminal justice system relies on punishment to maintain an orderly and lawful society. Punishment can be extremely effective but it also has drawbacks. Although the early behaviorists avoided mental considerations, it is now clear that punishment, if used too frequently, creates anger, fear, and resentment and can breed an oppositional mindset, in which people try to cheat the system instead of willingly following the rules. B.F. Skinner distrusted punishment as well, stating that it had only short-term effects and did not teach alternative behavior.
Operant conditioning is in evidence in almost every aspect of daily life. When we are paid for our work, evaluated for a merit raise by our managers, thanked by a friend for being considerate, penalized for paying taxes late, or even given a parking ticket, operant conditioning is in play.
Most of animal training involves operant conditioning. When we spray our cat with a squirt gun after he jumps on the kitchen counter or give our dog a treat after he rolls over, we are using operant conditioning. Even pigeons can be shaped to do a particular behavior, such as peck at a lever, by successively rewarding behavior that more closely approximates the desired behavior.
When the association starts to erode between the stimulus and the response (in classical conditioning) or between the behavior and the reinforcement (in operant conditioning), a behavior becomes extinguished. A behavior is extinguished when it is no longer performed. This can be a positive thing if the behavior was undesirable to begin with. It can also be negative if the behavior was valued. In general, the behavior should eventually extinguish if it is no longer accompanied by either the prior reinforcement or the unconditioned stimulus. If you stop paying people to go to work, they will probably stop going. If you stop taking your dog for a walk after you put on your shoes, the dog will eventually stop barking and wagging his tail each time you put them on. The association between the shoes and the walk will be extinguished.
Classical conditioning is central to the process of drug addiction. Addiction treatment often focuses on the management of craving. Craving, or the urge to use the problem drug, can be very strong and frequently leads to relapses in people striving for sobriety. Craving is triggered by cues, both external and internal, via the process of classical conditioning. In other words, the person encounters a reminder of drug use (such as drug paraphernalia or the bar where the person used to drink) and the association stimulates craving. This is basically the same conditioning process Pavlov noticed with his dogs. External cues include environmental factors (people, places, and things). Internal cues include emotions, thoughts, or physical sensations that previously led to drug use.
Although the principles of conditioning are very simple, applying them is less simple in practice. A number of factors affect the effectiveness of conditioning. Timing is important, specifically the time separating the conditioned and unconditioned stimuli. If the sneakers go on too long before the dog is walked, it will be hard for the dog to associate the shoes with the walk.
Relatedly, the reinforcement should closely follow the behavior for the act to be connected with the consequence. This is why news of global warming has had so little effect until recently, although we’ve known about it for decades. The consequences were not immediate. This is also why it is so difficult to instill healthy habits in the young, when the consequences of their self-care will not be evident for decades. The schedule of reinforcement also affects learning. Should the behavior be reinforced every time it occurs? What kind of reinforcement makes a behavior most resistant to extinction?
Intermittent reinforcement, in which the behavior is only reinforced intermittently, best protects a behavior from extinction. If people do not expect the behavior to be reinforced every time it occurs, they will be less likely to stop the behavior when it is not reinforced. It will take longer for them to give up on the behavior. Further, when intermittent reinforcement is unpredictable, it is even more resistant to extinction.
In gambling, the behavior of betting is rewarded intermittently and unpredictably. When the gambler’s bet is not rewarded, the gambler continues to bet, expecting that another win will follow sooner or later. If the gambler had been rewarded for every bet, it would take fewer losses for the gambler to stop associating betting with winning and for the act of gambling to be extinguished. Thus casinos take advantage of an intermittent and unpredictable reinforcement schedule in order to keep gamblers gambling as long as possible.
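The gambling example can be made concrete with a small simulation (a toy sketch under simple assumptions, not a model taken from the text). It follows the intuition that a learner gives up only when the current run of unrewarded attempts clearly exceeds anything experienced during training; under unpredictable, intermittent reward, long dry spells were already normal, so the behavior persists much longer once reward stops. All parameter values below are arbitrary.

```python
import random

# Toy illustration of resistance to extinction under different reinforcement
# schedules (illustrative assumptions only). During training, the behavior is
# rewarded on every trial ("continuous") or on an unpredictable fraction of
# trials ("intermittent"). In extinction, reward stops entirely. The learner
# quits once the current unrewarded streak exceeds anything seen in training
# by a fixed tolerance.

def trials_to_quit(reward_prob, training_trials=200, tolerance=3, seed=1):
    rng = random.Random(seed)
    longest_dry_run = 0   # longest streak of unrewarded trials during training
    current = 0
    for _ in range(training_trials):
        if rng.random() < reward_prob:
            current = 0
        else:
            current += 1
            longest_dry_run = max(longest_dry_run, current)
    # Extinction: reward never comes again. The learner keeps responding until
    # the dry streak is 'tolerance' trials longer than anything seen before.
    return longest_dry_run + tolerance

if __name__ == "__main__":
    print("continuous reinforcement :", trials_to_quit(1.0), "unrewarded trials before quitting")
    print("intermittent (30% reward):", trials_to_quit(0.3), "unrewarded trials before quitting")
```

Run as-is, the intermittently rewarded learner tolerates a far longer unrewarded streak than the continuously rewarded one, mirroring why slot-machine betting is so hard to extinguish.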
As the reign of behaviorism continued, the limits of the paradigm became more evident. Animals kept behaving in ways that could not be explained by behaviorist theory alone. For example, Skinner had thought that any animal could be taught any behavior with the appropriate reinforcement schedule. But this did not turn out to be the case. The same behavior was learned easily by some animals, with difficulty by others, and not at all by still others. Rats could easily learn to press a bar for food, while cats would do so only with difficulty. These findings suggest that the genetics of each animal species set the parameters of what could and could not be learned. There were limits to what could be taught.
Edward Chace Tolman (1886–1959) was a devoted behaviorist who studied maze-running behavior in rats (a favorite topic of behaviorist researchers). Despite his expectations, he repeatedly observed behavior in rats that he could not explain solely by stimulus-response connections. He noticed that rats in a maze would often stop, look around, and check out one path, then another before choosing a particular route. He could only explain this behavior (and many other similar behaviors he observed) by inferring some kind of mental process. The rat seemed to have a mental picture of the layout of the maze and that directed its behavior. In this way, Tolman introduced the mind into the behaviorist stronghold. Even rats running mazes evidenced mental processes, some form of thinking about the problem.
Tolman introduced the notions of expectancy, of mental maps, into behaviorism. Rats and other animals did not simply respond to the number of rewards for each behavior, automatically repeating the most frequently rewarded behavior. Some kind of thought process mediated between stimulus and response. More specifically, the rats appeared to develop a set of expectations about how events would play out based on their prior experiences. They then made decisions by matching their expectations against information from the new situation. This kind of mental map is essentially identical to Piaget’s concept of mental schemas and has become a critical concept in many areas of psychology, including cognitive, developmental, and clinical psychology.
In the 1950s and 1960s several lines of development converged to create the explosive shift in academic psychology known as the cognitive revolution. Research in various other fields of study, such as anthropology, linguistics, and computer science, had been moving toward the scientific study of mental processes. Within psychology, studies of memory, perception, personality traits, and other mental phenomena continued to gain ground.
Even orthodox behaviorists were stumbling onto mental processes. As these lines of development came together, the mind once again became a worthy object of study. The black box model of psychology was rejected and cognition, or thought processes per se, became the object of intense interest. Major contributors included Ulric Neisser, Howard Kendler, and George and Jean Mandler. With the renewed interest in cognitive processes, there was also a resurgence of an earlier movement that had started in Europe but migrated to the United States after World War II, namely Gestalt psychology.
Gestalt psychology, which started in the early twentieth century, provided an important counterpoint to the academic psychology of its time, specifically Watson’s behaviorism and Wundt’s structuralism. Its full impact, however, would not be felt until many decades after its birth. Gestalt psychology originated in 1910 with Max Wertheimer’s study of the perception of motion.
The core idea behind Gestalt psychology is that the mind actively organizes information into a coherent whole or a gestalt. In other words, the mind is not a passive recipient of sensory stimuli but an active organizer of information. Furthermore, knowledge does not come from a collection of isolated bits of information. Rather the mind creates a whole out of the relationships between separate parts. Gestalt psychology is a holistic theory.
A gestalt refers to a perceptual whole. The gestalt is created out of the relationships between the parts. Our perceptual knowledge of the world is based on our recognition of these relationships. For example, let us consider what we recognize as a table. Although a table can be large or small, metal or wood, dark or light, we recognize an object as a table if it has a flat, horizontal plane with one or more supports underneath it. Its gestalt is determined by the relationship among its parts.
Gestalt psychology countered the assumption that perception is based solely on the stimulation hitting our sensory organs. As the sensory stimulation coming in differs depending upon the circumstance, we would not be able to recognize an object or person as the same across different situations if our mind did not actively organize our perceptions to recognize the gestalt. For example, we recognize our neighbor as the same person even if he loses weight, changes his clothes, or cuts his hair. Clearly the sensory information differs in each circumstance yet somehow we still recognize our neighbor as one person.
Max Wertheimer (1880–1943) is recognized as the father of Gestalt theory. His interest was first piqued when he noticed the illusion of motion while sitting on a train. Although the landscape outside the train was stationary, it seemed to be moving backwards as the train sped by. Most of us have had the same experience. To Wertheimer, however, this phenomenon offered a unique window into the workings of the mind. When he began his investigations at the University of Frankfurt in 1910, two slightly younger psychologists, Wolfgang Köhler (1887–1967) and Kurt Koffka (1886–1941), came to work with him. Together they studied the illusion of movement through various experiments. Their research into the phi phenomenon, as Wertheimer named the illusion, was the beginning of a life-long, shared commitment to Gestalt research and theory. By the mid-1930s, all three men had relocated to the United States, Koffka before Hitler’s rise to power, and Wertheimer and Köhler in direct response to it.
Arguably, Gestalt theory is important more for its profound philosophical implications than for the specifics of its research findings. For one thing, by demonstrating its principles with solid empirical research, Gestalt theory put the mind back into academic psychology. Secondly, Gestalt theory introduced a holistic paradigm, which was in sharp contrast to the associationist approach found in both behaviorism and Wilhelm Wundt’s structuralism. In associationism, complex knowledge is seen to derive entirely from associations between simple memories. Gestalt theorists rejected this view as overly simplistic as they believed that complex knowledge also develops holistically, through recognition of patterns and identification of the whole.
The Gestalt theorists were fond of optical illusions as they illustrated how the mind actively organizes perceptual information. The fact that we can see something that is not really there shows that our perceptions are more than an exact copy of reality. In the photo to the right, a field of rounded dots can be seen as either convex (rows of buttons) or concave (rows of holes). Notice that you can perceive either buttons or holes but you cannot perceive both at the same time. In order for your perception to switch, you have to look at something other than the dots, such as the flat area between the dots.
A field of dots can appear to be either convex or concave, depending on one’s perception of them (iStock).
In the late nineteenth century and early twentieth century, when psychology was coming into its own as a science, there was tremendous admiration for the accomplishments of physical science. This was a time of extraordinary technological changes. The telephone, the motor car, the moving picture—all of these were relatively recent inventions and all of them radically changed society. Science was exploding across the industrialized world and there was a widely shared assumption that the only worthwhile way to understand reality was through the methods used in the physical sciences. And these methods largely reflected an analytic approach to reasoning.
In other words, the way to understand complex phenomena (such as human psychology) was to break them down into their smallest parts (such as stimulus-response associations). Complexity in and of itself had no interest; it simply reflected a grouping of smaller parts. The whole could be reduced to the sum of its parts. Gestalt theorists challenged this reductionist assumption. They were interested in synthetic reasoning. How do you put the parts back together again? How do you make a whole out of the relationships between parts? Their core position was that “the whole is greater than the sum of its parts.”
This phrase is one of Gestalt psychology’s most famous contributions. The Gestalt psychologists believed that there are properties of the whole that exist independently of its component parts. Consider that human beings are composed of cells and tissues. At a smaller scale we are composed of atoms. But can we ever explain our loves, our personalities, our prejudices, and even our taste in music solely by studying the behavior of our atoms? Or by studying our cells? The Gestaltists would say no. There are qualities of the whole that cannot be reduced to the qualities of its parts. Although Gestalt theory is best known for its work with perception, this core concept has been applied to almost every aspect of psychology. It has influenced Piagetian developmental psychologists, cognitive psychologists, and even psychotherapists.
Gestalt theory had much in common with James’s interest in the holistic flow of consciousness. Like Wertheimer and his colleagues, James did not believe we can understand reality simply by breaking it down into its elemental parts. In order to understand the whole of reality, we must look at it as a whole. Gestalt theorists felt that James did not go far enough, however, in his rejection of reductionistic assumptions. But this may not be fair to James, who after all died in 1910, the same year that Wertheimer first became fascinated with the perception of movement.
Gestalt psychology proposes a series of rules by which the mind organizes perceptual information. These include the rules of proximity, similarity, simplicity, and closure. The first two rules suggest that objects that are placed closely together (proximity) or are similar to each other (similarity) will be grouped together into a gestalt. The mind will combine them into a whole. Closure reflects the tendency to fill in the gaps of a gestalt. If we see a circle with sections missing, we will still see it as a circle. Further, the mind will group parts into a whole according to the simplest solution.
This graphic explaining Gestalt principles also uses Gestalt principles. Notice how you associate each picture with the text near it. This is an example of proximity.
Wolfgang Köhler (1887–1967) was one of Wertheimer’s closest associates. From 1913 to 1920, Köhler was director of the Anthropoid Research Station on the island of Tenerife, which is in the Canary Islands off the coast of Northwest Africa. He had intended to stay in Tenerife for only a short while. With the outbreak of World War I, however, he was unable to leave for several years. While in Tenerife, Köhler conducted an important series of studies on chimpanzees’ problem-solving behavior. He set up rooms where bunches of bananas were placed just out of the chimpanzees’ reach and then watched how they solved the problem of reaching the bananas.
Although not all chimpanzees were able to successfully solve the problem—evidently chimpanzees, like human beings, vary in their intelligence—those that did so exhibited similar behavior. For one, they would often try to reach the bananas simply by jumping or reaching for them. Upon failing to grasp the bananas, they would often show frustration by screaming or kicking the walls of the room. Eventually, after surveying the entire room, they would suddenly derive a solution involving the use of nearby objects as tools. One chimp might drag a box under the bananas and then climb on top of it to reach them. Some chimps stacked multiple boxes to attain their goal. Another might put two sticks together to create a stick long enough to reach the food.
There has been controversy regarding Köhler’s stay in the Canary Islands during World War I. A number of people, specifically British intelligence agents, believed he was a German spy. Evidently they were not convinced that his fascination with chimpanzees and bananas was sufficient explanation of his presence. Some contemporary writers still believe the issue is unsettled although no evidence has been produced that proves he was anything more than a Gestalt psychologist.
These studies showed two things. For one, the animals arrived at their solutions only after surveying the entire environment. They did not just focus on a single object but took the entire field into account. Secondly, the problem was not solved through trial and error via rewards and punishments as the behaviorists would have predicted. Instead the animal arrived at a complete solution all at once. In other words, the chimps did not solve problems in a piecemeal fashion, but rather in a holistic way. Köhler referred to this holistic form of problem solving as insight learning.
Wolfgang Köhler (1887–1967) conducted a famous series of studies on chimpanzees’ methods of problem solving. He placed a bunch of bananas just out of the animals’ reach and then watched how they figured out how to get to the bananas. At first frustrated, the chimps eventually reached an insight about how to use available objects as tools. This insight often came suddenly in a sort of “a-ha!” moment. In one case, a chimpanzee put two sticks together to create a tool long enough to reach the bananas. Another chimp stacked three boxes on top of each other to reach the fruit hanging from the ceiling. Besides showing us the remarkable ingenuity of these animals, this work supported the Gestalt notion that the mind actively creates complete solutions to problems. This is in contrast to the behaviorist assumption that problem solving can only proceed piecemeal by trial and error.
Gestalt therapy, a school of psychotherapy founded by Fritz Perls in the 1940s, should not be confused with Gestalt psychology, the body of research and theory derived from Max Wertheimer’s experiments with perception. Gestalt therapy is commonly considered part of humanistic psychology and incorporates principles from the philosophical schools of phenomenology and existentialism as well as psychoanalysis and Gestalt psychology.
While behaviorism dominated American academic psychology throughout the first half of the twentieth century, psychoanalysis dominated clinical psychology—the study of abnormal psychology—during the same period both in Europe and the United States. Psychoanalysis was so prominent because it provided a comprehensive theory of psychopathology and a psychological method of treating mental distress. It is fair to say that most, if not all, subsequent theories of psychopathology and psychotherapy owe an enormous debt to psychoanalysis.
Although many schools of psychotherapy were formed in reaction against psychoanalysis, they were still defined in response to it and therefore must be seen as its descendants. Psychoanalytic theory actually includes a broad range of theoretical writing, starting with Sigmund Freud’s original contributions in the late nineteenth century. Since Freud, psychoanalysis has broken into numerous schools including ego psychology, interpersonal psychoanalysis and the object relations school, all of which developed in the mid-twentieth century. More recent schools include self psychology and relational theory.
Freud changed his theories several times over the course of his long career. He initially proposed the seduction theory to explain hysteria, a common disorder of the late nineteenth century involving physical complaints without an actual physical basis. The seduction theory posited that hysteria resulted from early sexual experience, what we would now call childhood sexual abuse. This explanation was abandoned in the late 1890s, however, and Freud focused instead on unconscious sexual fantasy. In other words, the symptoms were caused by the patient’s disguised wishes rather than memories of real events. Freud also moved from a topographic model, which focused on the relationship between conscious and unconscious processes, to a structural model focused on the id, ego, and superego. Finally, in the 1920s Freud added the instinctual force of Thanatos, the death instinct, to his theory of instincts.
In Freud’s topographic model, the mind is divided into three sections: the unconscious, the pre-conscious, and the conscious. The individual is not aware of the contents of the unconscious. Here forbidden and dangerous wishes reside, safely out of awareness. In the pre-conscious, mental content is capable of entering into consciousness but is not currently there. There is no block between the conscious and the pre-conscious as there is between the conscious and the unconscious. The conscious part of the mind contains all the mental content that is in our awareness. It is very small compared to the unconscious.
The structural model overshadowed the topographic model’s focus on the conscious/unconscious division of the mind. While Freud still believed in unconscious processes, he became more and more interested in the compartmentalization of the mind into the id, ego, and superego. The id, translated literally as “the it,” contains the animalistic passions that must be subdued in order for civilization to function. The id works on the pleasure principle, where wish equals reality and desire is not subject to restraint. The ego, Latin for “the I,” mediates between the id and reality. The ego operates on the reality principle and recognizes that the world does not always obey our desires. The superego is the source of our morality. It is formed through our internalization of our parents’ rules and discipline. A strict superego results in inhibited, moralistic behavior. A weak superego results in self-indulgent, poorly disciplined, or immoral behavior.
Throughout his career Freud maintained a theory of libido as the primary motivating force behind all human behavior. In fact, he parted ways with some of his favorite protégés after they proposed competing theories about human motivation. Libido can be loosely translated as the sexual instinct, but really refers to all aspects of sensual pleasure. In Freud’s view, instincts press for release as part of the pleasure principle. Pleasure is only attained when tension is reduced through release of instinctual energy. If the instinct is blocked from release, it will seek another outlet, much like a river running downstream. This mechanistic view of human motivation was called the hydraulic model and reflects the scientific models of the day.
Sigmund Freud’s office in London, where he moved in 1938. Patients lay down on his famous couch while Freud sat behind them in his chair. (photo by Lisa J. Cohen).
Following the carnage of World War I, Freud added Thanatos to his theory of instincts. Thanatos, the death instinct, explains human destructiveness. Because pleasure can only be found through the reduction of tension, there must be a drive to reach a state of total quiescence, a state of no tension at all. This is the equivalent of death, hence the death instinct. We now realize that pleasure comes from the build-up of tension as well as the release of tension.
While the focus on sex may seem odd to modern eyes, it is important to consider Freud in the context of his own time. He was an extremely ambitious man who aimed to build an all-encompassing scientific theory to explain human behavior. In keeping with nineteenth-century mechanics, he looked for one single force that could explain all of human behavior. He was also a product of the Victorian period—a prudish, sexually inhibited time when sexual repression in the European upper middle-class was probably rampant. It is possible that many of the psychological symptoms his female clients exhibited truly were related to repressed sexuality. Over time, however, many of Freud’s theories, including the theory of libido and the psychosexual stages, were translated into emotional and interpersonal terms.
We can question whether the particular configuration of Freud’s own family made him especially sensitive to Oedipal issues. Twenty years separated Freud’s parents, Amalia and Jacob, the same age difference between Freud and his mother. Freud was his mother’s (but not his father’s) first born and by many accounts had a particularly close and intense relationship with her throughout his whole life. She died at age 95, only nine years before her son died.
Freud believed that the libidinal instincts moved through a series of developmental stages, corresponding with different erogenous zones at different ages. In the phallic stage (approximately ages four to seven), the little boy goes through the Oedipal crisis, which results in the formation of his super-ego. Around this age, the little boy falls in love with his mother. Recognizing his father as his rival, he feels murderous rage toward his father, controlled only by his fear of his father’s greater strength. His fear that his father will cut off his penis in retaliation is termed castration anxiety.
As a solution to this dilemma, the little boy identifies with his father, realizing that he will grow up to be a man just like him and then have a wife all his own. This internalization of the father and the father’s authority is seen to be the foundation of the super-ego and of a boy’s moral development. Freud was not as sure how to account for female moral development and assumed women to have weaker super-egos due to their obvious immunity to castration anxiety. While the specifics of this theory have been roundly criticized by feminists and developmental psychologists alike, Oedipal behavior is often observed in children this age, who can show strikingly romantic behavior to older relatives of the opposite sex.
Since the inception of psychoanalysis, Freud has always had passionate loyalists and detractors. Psychoanalysis has been trashed as all hocus-pocus; Freud’s writings have also been treasured as the bible and seen as infallible. To some extent this is still the case today. However, many advances in our understanding of behavior and of the brain have shown that Freud was often onto something, although he was wrong in many of the specifics. Modern neuroscience, for example, has revealed that the frontal lobes and the limbic system function in ways strikingly similar to the ego and the id.
There have been many developments in psychoanalysis. In contemporary psychoanalysis, the schools of object relations, self psychology, and relational theory have translated Freud’s original ideas into interpersonal terms. The emphasis has shifted from sexual instincts to consideration of how early childhood relationships affect adults’ capacity to relate to others and manage emotions. Principles from attachment theory and ideas about self-reflective functioning (as found in the work of Peter Fonagy and Mary Target) have also informed contemporary psychoanalysis. Arguably, the integration of psychoanalytic concepts with advances in neuroscience currently forms the cutting edge of psychoanalytic theory.
The Swiss psychiatrist Carl Gustav Jung (1875–1961) was one of Freud’s closest collaborators until he broke off to form his own school of analytical psychology. While clearly grounded in Freudian psychoanalysis, Jungian analytical psychology moves away from the dominance of libido and toward a mystical understanding of the human unconscious. Interestingly, Jung came from a long line of clergymen. His father was a minister in the Swiss Reformed Church.
Carl Gustav Jung (1875–1961) was a protégé of Freud who broke away in 1913 to found his own school of Analytical Psychology (Library of Congress).
Fairly early in his career, Jung worked in Zurich at the renowned Burghölzli clinic under Eugen Bleuler, a prominent psychiatrist and the originator of the term “schizophrenia.” Here Jung became involved in research with word association, detecting unconscious meaning through the way people grouped words together. This work led him to Freud’s psychoanalytic research and the two men met in 1907. An intense and dynamic collaboration followed but ended acrimoniously in 1913 following a 1912 publication in which Jung was critical of Freud’s work. From 1913 on, Jung referred to his own work as analytical psychology to differentiate it from Freudian psychoanalysis.
Mandalas are religious artworks created by Buddhist and Hindu monks. Jung was strongly attracted to Eastern religions and viewed the mandala as a symbol of the personality. For Jung and his followers, the structure of the mandala, with its four corners bound to a central circle, represents the path of personal development. In our personal growth, we strive to unite the opposing forces of our personality (the four corners) into a comprehensive, all-inclusive self-awareness (the central circle). To gain this awareness, we must turn inward, just as the outer corners of the mandala point inward toward the circle.
Jung was a favorite protégé of Freud until they broke off their relationship over doctrine. Jung rose quickly within the psychoanalytic world, becoming editor of a psychoanalytic journal and president of the International Psychoanalytic Association. Freud favored him in part because, as a non-Jew, Jung offered a bridge to the wider non-Jewish scientific community in Europe. Jung’s relationship with Eugen Bleuler also offered the promise of greater scientific respect for psychoanalysis, which was something Freud craved. Jung grew increasingly uncomfortable, however, with Freud’s insistence on sexuality as the sole motivating force. He agreed with Freud’s energy-based conception of psychological motivation—that normal and abnormal psychological processes were a product of energy flow—but he believed sexuality to form only a small part of human motivation.
Temperamentally, the two men differed as well. Jung had a mystical bent, nurtured perhaps through his family’s religious heritage, and a life-long interest in the occult. Freud was a fervent rationalist, believing religion to be little more than an infantile form of neurosis. It is unlikely Freud would have had much respect for the occult either, except perhaps as clinical material.
Like Freud, Jung believed that the mind was divided into conscious and unconscious parts and that the conscious part comprised a small fraction of the total psyche. Jung also believed, like Freud, that repressed and forbidden ideas were banished to the unconscious, intentionally kept out of consciousness. Unlike Freud, however, Jung divided the unconscious into the personal unconscious and the collective unconscious. The personal unconscious contained personal experiences that had slipped out of consciousness, due either to simple forgetting or to repression. The contents of the personal unconscious came from the individual’s life experience. The collective unconscious, however, held the entire evolutionary heritage of humanity. It contained the entire library of our typical reactions to universal human situations. It was not limited to the individual’s life but encompassed the great, impersonal truths of existence.
Mandalas are religious designs used in the Hindu and Buddhist traditions (Fortean Picture Library).
Jung developed a typology of personality traits that has had wide influence on personality psychology. He divided the conscious mind into both functional modes and attitudes toward the world. The functional types refer to ways that people process information. Believing the mind to be composed of opposites in continual tension with each other, he proposed two polarities, thinking vs. feeling and intuition vs. sensation. Within each polarity, the two modes were mutually exclusive. You could not process the world through feeling and through thinking at the same time. One side of the polarity was always dominant, the other relegated back to the unconscious. Extroversion vs. introversion described the attitude toward the outside world. The extrovert attends primarily to external reality, to other people and objects. The introvert is turned inward, preoccupied with internal, subjective experience.
The Myers-Briggs test is a well-known personality test that is often used in the workplace to identify employees’ different personality styles. This test uses all three polarities mentioned above (extroversion vs. introversion, thinking vs. feeling, and intuition vs. sensation) and adds one more, judging vs. perceiving. Extroversion is also measured on scales associated with the Five Factor Model of personality, such as the NEO Personality Inventory. This test, formulated to identify dimensions of personality in non-pathological adults, uses 240 items to quantify five areas: neuroticism, extroversion, openness to experience, agreeableness, and conscientiousness.
Archetypes are patterns of experience and behavior that reflect ancient and fundamental ways of dealing with universal life situations. Archetypes reside in the collective unconscious. There is a mother archetype, a child archetype, an archetype of the feminine, of the masculine, and many more. Archetypes can never be directly known in consciousness but can only be glimpsed through the images that float up from our unconscious in dreams, creative works of art, mythology, and even religious symbolism. Through interpretation of this visual symbolism, we gain greater knowledge of our deepest selves.
Jung was always drawn to mysticism, and late in life he traveled extensively to learn about the spiritual practices of other cultures. He visited the Pueblo Indians in New Mexico, traveled to Kenya and India, and collaborated on studies of various Eastern religions. He viewed the symbolism in all the religious traditions as expressions of universal archetypes. Jung’s view of mental health was also religiously tinged. Our happiness is dependent upon our communion with a universal reality that is part of us yet larger than us. In his concept of the collective unconscious, he combined psychology, evolutionary biology, and the spiritual traditions of many diverse cultures.
Humanistic psychology refers to a group of psychological theories and practices that originated in the 1950s and became very popular in subsequent decades. Similar to the Gestalt psychologists, the humanistic psychologists reacted against the constraints of the dominant psychological schools of their time but the humanistic psychologists had better timing. They arrived on the scene just as the dominance of behaviorism and psychoanalysis was beginning to fade. In fairly short order, they became powerful counterpoints to the orthodoxies of both schools.
In general, humanistic psychologists wanted to inject humanity back into the study of human beings. More specifically, they objected to a mechanical view of psychology, to the portrayal of human beings as passive objects at the mercy of either stimulus-response chains or unconscious drives. They insisted that people are active participants in their own lives. Humanists emphasized free will and the importance of choice. They also valued the richness of subjective experience and concerned themselves with the qualities of lived experience, of human consciousness.
Finally, they challenged the emphasis on pathology in psychoanalysis. In contrast to Freud, they believed that people are inherently motivated toward psychological growth and will naturally move toward health with proper encouragement and support.
In Europe, the ravages of World War II and the Holocaust brought the question of meaning to the fore. How can life have meaning and purpose in the face of such senseless slaughter? The philosophical movement of Existentialism came out of these circumstances and provided a backdrop for the humanistic psychologists. Phenomenology, an earlier branch of European philosophy, also influenced the humanistic psychologists with its focus on the rich complexity of subjective experience. With regard to psychological schools, the functionalism of William James also played a role, as did the holistic theories of the Gestalt psychologists.
In 1950s America, where humanistic psychology originated, the field of psychology was dominated by the twin giants of behaviorism and psychoanalysis. Behaviorism dominated academic psychology and psychoanalysis dominated clinical psychology. Humanistic psychologists wanted to create an alternative to these two great forces: a third force in psychology.
The American psychologist Abraham Maslow (1908–1970) was one of the founding fathers of humanistic psychology. Maslow wrote a number of books and also made several important theoretical contributions. He is perhaps best known for his concept of the hierarchy of needs. Maslow believed that human psychological needs are multidimensional and that there is no single motivating force to explain all of human behavior. He believed these needs could be organized hierarchically with the most fundamental needs related to biological survival. Once our fundamental biological needs—such as thirst, hunger, and warmth—are met, our needs for safety come into play. Following satisfaction of safety needs, psychological needs for emotional bonds with other people become important. Once those are met, we become concerned with self-esteem and the need to feel recognized and valued in a community.
Finally, after all these more basic needs are met, we encounter the need for self-actualization, a kind of creative fulfillment of our human potential.
This triangle illustrates Maslow’s hierarchy of needs.
Although Maslow was not the first to use the term self-actualization, his name is most frequently associated with it. Self-actualization refers to a state of full self-expression, where one’s creative, emotional, and intellectual potential is fully realized. We recognize what we need to feel fully alive and we commit ourselves to its pursuit. Although Maslow was criticized for promoting what was seen as a selfish pursuit of pleasure, he stressed that it is only through development of our truest selves that we attain full compassion for others. In his view, self-actualized people make the strongest leaders and the greatest contributions to society. This concept illustrates humanistic psychology’s concern with personal growth and psychological health in contrast to psychoanalysis’ emphasis on psychopathology and mental illness.
A peak experience occurs in a state of total awareness and concentration, in which the world is understood as a unified, integrated whole where all is connected and no one part is more important than another. This is an awe-filled and ecstatic experience, which is frequently described in religious or mystical terms. It is not simply a rose-colored distortion of life, however, where all evil and tragedy is denied. Rather it is a moment of full comprehension, where good and evil are fully accepted as a part of a complete whole. Like William James and Carl Jung before him, Maslow believed the mystical and ecstatic aspects of religion were proper subjects of psychological study.
Maslow also distinguished between two different kinds of love. D-love or deficiency-love refers to a kind of grasping, possessive love. In this state, we cling to the loved one out of desperate dependency and see the loved one as a means to fill some kind of deficiency in ourselves. B-love, or being-love, reflects a love based on full acceptance of the other person. In B-love, we love other people simply for who they are and not for what they can do for us. Naturally, B-love is seen as the healthier and more sustainable kind of love. Maslow was very focused on the importance of rising above selfish desires in order to embrace other people for their own sake rather than as means to a goal. Interestingly, Maslow described his own mother as an extremely disturbed woman who was incapable of valuing anyone for any reason outside her own personal agenda.
A number of schools of psychotherapy came out of the humanistic movement and many more were influenced by it. Carl Rogers’s person-centered psychotherapy, Fritz Perls’s Gestalt therapy (named after Gestalt psychology but more closely tied to humanistic psychology), Viktor Frankl’s logotherapy, and Rollo May’s existential psychoanalysis are all children of humanistic psychology.
Carl Rogers (1902–1987), another key figure in humanistic psychology, has had enormous influence on the practice of psychotherapy. His school of person-centered psychotherapy, originally known as client-centered psychotherapy (and often simply referred to as Rogerian therapy), placed the client’s subjective experience at the forefront of the therapy. He believed the therapist’s role was less to untangle psychopathology than to promote the client’s personal growth through empathic listening and unconditional positive regard. While Rogers has been criticized for a relative disregard of negative emotions and interpersonal conflict, therapeutic empathy is now universally recognized as an essential ingredient of psychotherapy.
Rogers made a distinction between loving a child for his or her intrinsic worth and loving the child dependent upon some condition: “I will love you if you are a good student, beautiful, obedient,” etc. Children who feel loved unconditionally grow up to have faith in their own intrinsic worth. In contrast, children who experience their parents’ love as conditional, as contingent on some kind of performance, often suffer long-lasting damage to their sense of self. These notions are similar to Maslow’s concepts of B-love and D-love.
Rogers was a pioneer in the scientific investigation of psychotherapy. He believed the methods of empirical research could and should be applied to the practice of psychotherapy. He was the first to record psychotherapy sessions despite vehement opposition from the psychoanalysts who believed the privacy of the therapy hour should never be violated. Rogers also systematically measured improvement by administering psychological tests pre- and post-treatment and then compared the results of subjects in therapy with those in a control group. These methods became fundamental tools in psychotherapy research, which has since blossomed into a discipline all its own.
Attachment theory was one of the first movements to provide empirical support for the key concepts of psychoanalytic theory, specifically that early childhood relationships with caregivers have profound impact on later personality development. Similar to Carl Rogers, attachment theorists believed that scientific methods could be usefully applied to the study of emotional and interpersonal phenomena. Thus attachment theory was the first movement to bring scientific methods to bear on psychoanalytic ideas. Not surprisingly, this occasioned resistance at first but over time attachment theory has been accepted by most psychoanalytic schools.
Attachment refers to a biologically based drive in the child to form an enduring emotional bond with the caregiver, generally the mother. Attachment theory originated with John Bowlby who wrote a trilogy of books entitled Attachment and Loss (1969, 1973, 1980). Bowlby’s theory was greatly expanded by Mary Ainsworth (1913–1999), who developed an experimental procedure to study attachment. It was Mary Ainsworth who put attachment theory into the lab.
John Bowlby (1907–1990) was a British psychoanalyst who became concerned with the devastating impact of early mother-child separations, which he frequently witnessed while working in post-World War II England. Disturbed by the psychoanalytic worldview’s dismissal of real-life events, Bowlby insisted on the importance of the mother’s actual presence, a position that often put him at odds with his colleagues. Bowlby was also interested in ethology, the study of animal behavior, and eventually synthesized psychoanalytic theory and ethology into his theory of infant-mother attachment.
This little boy’s crying and reaching for his mother are what Bowlby referred to as attachment-seeking behaviors. When a child is separated from his or her mother, the attachment system is activated and the child displays attachment-seeking behavior to re-establish contact (iStock).
Generally, attachment is seen as a biologically based, evolutionarily adaptive drive for the infant to seek protection from the mother. When the child is frightened or is separated from the mother, the attachment system is activated and the child will seek proximity, or physical closeness, to the mother. The child will reach toward the mother, cry to be picked up, or crawl close to the mother. In Bowlby’s view, the child is motivated to attain a sense of felt security, a subjective experience of safety and well-being, perhaps a kind of cozy contentment. When the child feels secure, the attachment system is deactivated and the exploratory system is turned on. At these points, the child will venture away from the mother to explore the world, to play. If the relationship with the mother is disrupted through separation or loss, the child will experience great sadness and distress, which can have long-lasting and even lifelong impact, depending on the severity of the loss.
Although his description of the infant attachment system was largely behavioral, Bowlby addressed the psychological aspects of attachment through the notion of the child’s internal working model of attachment. This is a kind of mental map or script of the caregiver and the self. Through repeated attachment experiences, the child develops expectations about the availability and responsiveness of the mother (or caregiver). The child develops a working model of how the mother-child interactions will play out and then modifies attachment behavior according to these expectations.
Although John Bowlby was always interested in translating his concepts into empirical research, his colleague Mary Ainsworth (1913–1999) is credited with taking attachment theory into the lab. While Bowlby had initially been interested in the universal effect of mother-child separation, Ainsworth was interested in individual differences in the quality of attachment based on the nature of the mother-child relationship. Her initial research was in Uganda, where she had traveled with her husband in 1954. By observing twenty-eight Ugandan babies, she noted individual differences in the quality of mother-infant attachment.
This research would be further developed in Baltimore at Johns Hopkins University, where she and her husband moved after leaving Uganda. Here she studied mother-child interactions both in their homes and in the laboratory during an experimental procedure she termed the strange situation. Based on the child’s responses to separations and reunions with the mother, the child could be classified into secure and insecure attachment categories. Ainsworth also found that attachment status in the lab correlated with the mother’s behavior toward the child in the home. Ainsworth’s publication of this data in her 1978 book Patterns of Attachment was a milestone in attachment research. This fairly simple experimental paradigm would dramatically change psychological research into child development.
The strange situation is a twenty-minute procedure in which infants of twelve to eighteen months and their mothers are introduced to a room full of toys that is connected to an observation room by a one-way mirror. A sequence of separations and reunions follows. There are eight episodes in the strange situation, the first lasting only thirty seconds and the rest up to three minutes. The baby’s reactions during the separation and reunion episodes are carefully observed through the one-way mirror. Based mainly on these behaviors, the baby is classified as securely attached or into one of three insecurely attached categories.
A securely attached child (B baby in Ainsworth’s system) showed interest in the toys when the mother was in the room. Some but not all babies showed mild to moderate distress in the separation episodes. Most importantly, in the reunion episodes, the child directly sought out contact with the mother. If the child was distressed after the separation, contact with the mother was effective in soothing the child. This pattern of behavior is seen to reflect the child’s felt security in the mother’s availability and responsivity to the child’s attachment needs.
A child who is insecurely attached is viewed as feeling insecure about the mother’s emotional availability or responsivity to the child’s attachment cues. The child then modifies his or her attachment behavior to adapt to the mother’s behavior. There are several categories of insecure attachment. Ainsworth originally proposed two categories, avoidant and resistant attachment, but another category, disorganized attachment, was added later.
Avoidant children, whom Ainsworth originally classified as A babies, show overly independent behavior. They tend to show more interest in the toys than in their mother and little distress during the separation. Most importantly, they turn away from their mother upon reunion; hence they are avoidant. Resistant babies (or C babies) may be seen as overly dependent. They are less likely to engage with the toys when their mother is present and may show great distress upon separation. Upon reunion, they show proximity-seeking behavior toward the mother (crying, reaching, etc.) but also resist their mothers’ attempts to soothe them. They may push their mothers away, arch their backs when picked up, or angrily kick their mothers. While the avoidant and resistant classifications are considered variants of normal attachment, disorganized attachment is more likely to be found in children who are victims of abusive or otherwise pathological parenting. These children do not show a consistent strategy for dealing with attachment and may even show fear toward their parent.
No. Biology ensures that all children are powerfully attached to their caregivers. There is no choice in this matter. Children vary in the security of their attachment, which basically means how safe they feel in their relationship with their attachment figure, how secure they are that their caregiver will respond to their needs. But this does not mean they vary with regard to the power of their attachment to their caregivers.
The way mothers respond to their babies’ emotional cues affects the quality of the infants’ attachment. Parents of securely attached infants are reliably and sensitively responsive to the child’s emotional communications. Parents of insecurely attached children fail to respond sensitively to their children’s emotional cues or at least do so inconsistently (iStock).
In Ainsworth’s sample, securely attached children were more likely to have mothers who were reliably sensitive and responsive to the child’s cues as measured in the home environment. Mothers who were more sensitive to their infants during feeding, play, physical contact, and episodes of emotional distress in the first three months of life were more likely to have securely attached infants at twelve months.
Mothers of avoidant babies were shown to be reliably unresponsive to the babies’ cues in Ainsworth’s home studies. Avoidant attachment behavior is considered to reflect a down-regulation of attachment in response to a reliably unresponsive mother. Mothers of resistant babies were found to be unreliably responsive to the child’s attachment cues in the home setting. Thus resistant attachment may be seen as the child’s strategy to maximize the mother’s attention by ratcheting up the attachment system. Disorganized attachment is more often found in children who have been abused or whose mothers have significant emotional pathology. These babies cannot find a consistent strategy to deal with their parents’ erratic or even frightening behavior. Thus their attachment behavior is disorganized.
The three major attachment classifications, secure, insecure-avoidant, and insecure-resistant, can be seen to reflect predictable responses to different reinforcement schedules. They can be explained by the laws of operant conditioning. Avoidant attachment reflects the extinction of attachment-seeking behavior after these behaviors have consistently failed to elicit a response from the mother. The children in effect give up on the mother’s response. Resistant attachment reflects the opposite pattern, in which there is an increase of behavior in response to an intermittent reinforcement schedule. The child learns to crank up the attachment behaviors in order to maximize the likelihood of the desired response from the mother. Secure attachment reflects a consistent reinforcement schedule. The child has learned that attachment-seeking behaviors will be consistently and predictably rewarded, so the child simply performs them when needed and stops when they are no longer needed.
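To make this reasoning concrete, here is a toy simulation, written in Python, of a child making attachment bids under three caregiver response patterns. It is not drawn from attachment research; the response probabilities, escalation rule, and give-up threshold are invented purely to illustrate how different reinforcement schedules could shape behavior.

```python
import random

def simulate(p_respond, trials=200, seed=0):
    """Toy model: each trial the child makes an attachment bid.
    If the caregiver responds (probability p_respond), bid intensity resets
    to a low baseline; if not, the bid escalates. After many consecutive
    failures the behavior extinguishes. All numbers are invented."""
    rng = random.Random(seed)
    intensity, failures, history = 1.0, 0, []
    for _ in range(trials):
        if intensity == 0.0:              # behavior has extinguished
            history.append(0.0)
            continue
        if rng.random() < p_respond:      # caregiver responds: need met, reset
            failures, intensity = 0, 1.0
        else:                             # no response: escalate the bid
            failures += 1
            intensity = min(intensity * 1.5, 10.0)
            if failures > 20:             # persistent failure: give up
                intensity = 0.0
        history.append(intensity)
    return sum(history) / trials          # average bid intensity

for label, p in [("consistent (secure-like)", 1.0),
                 ("intermittent (resistant-like)", 0.3),
                 ("never (avoidant-like)", 0.0)]:
    print(f"{label:32s} mean intensity = {simulate(p):.2f}")
```

Under these made-up parameters, the consistently reinforced "child" makes low-key bids, the intermittently reinforced one escalates and persists, and the never-reinforced one eventually stops bidding altogether.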
Alan Sroufe and his colleagues conducted several studies looking at the impact of attachment status on later childhood development. Children who were classified as securely attached were more likely to have better relationships with peers and teachers in later childhood than those classified as insecure. Insecure-resistant children showed overly dependent behavior with teachers, while insecure-avoidant children showed overly independent behavior. The avoidant children were less likely to seek help from teachers when problem solving, even when they could not solve the problems by themselves.
Attachment research is sometimes interpreted as implying that personality is entirely formed by the time a child is one year old, but this overstates the case. Attachment strategies are conservative, that is, they are resistant to change, but they are not fixed. If the family environment remains stable and the parent-child interaction patterns do not change dramatically, it is likely that the child’s general approach to attachment will remain stable. On the other hand, if the family environment changes dramatically or if the parent chooses to change his or her way of relating to the child, the child’s attachment status can change.
Changes in parental circumstances can impact attachment status, either positively or negatively. A single mother getting married or a father losing his job can make the parents more or less available to the child and thus impact the child’s attachment status. Studies of low income families show greater changes in children’s attachment classification over time than is found in middle-class families. This may be because lower income families have less buffer against changes in the environment than do families with greater financial means.
Mary Main was interested in the way attachment status might manifest in adults. She recognized that quality of attachment could not be easily captured in behavior, as it could be with infants, but would have to be investigated as an enduring part of personality. She built on Bowlby’s concept of internal working models to consider the way adults represent attachment relations. A representation is like a mental image or map of relationships. Main addressed questions about attachment: How do adults think about and talk about attachment? What is their narrative about attachment?
In order to study attachment in adults, Mary Main and colleagues developed a semi-structured interview called the Adult Attachment Interview (AAI). A semi-structured interview provides specific questions as well as open-ended, follow-up questions. There is a script to follow, but the interviewer can deviate from it to clarify information. The interview takes about an hour and a half and asks questions about the subject’s childhood relationship with his or her parents. How subjects talk about childhood attachment is more important than what they say about their parents. Of most importance is the coherence of the narrative, specifically the fit between the subject’s abstract generalizations about childhood attachment relationships (e.g., “My mother was loving, involved.”) and the specific memories generated to illustrate these generalizations (e.g., “I remember making chocolate chip cookies with her in our kitchen.”). The narrative is coherent if the story makes sense; if it is riddled with contradictions, it is not coherent.
Main developed three adult attachment classifications to correspond with the three infant classifications. She labeled them D, E, and F to parallel Ainsworth’s A, B, and C. Dismissing adults (D) were hypothesized to correspond to avoidant babies (A); Enmeshed adults (E) to correspond to resistant babies (C); and Secure adults (F for Free) to correspond to secure babies (B). The enmeshed classification was later changed to preoccupied.
The simulated excerpts below illustrate typical responses for each of the adult attachment classifications on Mary Main’s Adult Attachment Interview. Note how the dismissing adult presents an idealized view of her relationship without any specific memories to back it up. The securely attached adult is much more coherent. She acknowledges contradictions and mixed emotions but can reflect objectively on the relationship. The preoccupied adult, in contrast, is flooded by her attachment-related memories and is unable to integrate emotion and thought into a coherent narrative.
Dismissing
Interviewer: Could you tell me five adjectives that describe your childhood relationship with your mother?
Mother: Oh, I don’t know. I guess she was normal, she was fine. I guess she was loving. She was practical and a good teacher.
Interviewer: Could you give me an example for each of those words?
Mother: Well, you know, she was always there. I don’t remember any problems or like anything that was really wrong. She was a good teacher; she always wanted to make sure we got good grades.
Secure
Interviewer: Could you tell me five adjectives that describe your childhood relationship with your mother?
Mother: Hmm, that’s a little complicated. My mother was very warm and very loving but she could also be controlling. So we had a very close relationship but it was also conflictual at times, especially when I was a teenager.
Interviewer: Could you give me an example for each of those words?
Mother: I remember a lot of affection. I remember curling up with her on the couch in the evenings, watching TV. But I also remember getting in fights with her, more when I was older, when I wanted to go out with my friends. She would insist that I be home earlier than any of my friends had to. Hmm, maybe she was just being responsible, but at the time I thought she was unreasonable.
Preoccupied
Securely attached adults tend to be more sensitive to their infants’ emotional cues (iStock).
Adults who are securely attached value attachment and can speak about attachment relationships with feeling but will also be thoughtful and reflective. They can take some distance from their feelings and be reasonably objective about their experiences. On the AAI, secure adults give a coherent account of their childhood relationships with their parents and their generalized descriptions of the relationship are supported by specific memories. In the same way that a securely attached child balances dependency and exploration, a securely attached adult balances emotion and thought.
A dismissing adult corresponds to an avoidant infant. Attachment is devalued and dismissed by these adults with a concomitant emphasis on thought separated from emotion. An idealized picture of childhood attachment relationships is presented though it is not backed up by supporting memories. The adult may describe his or her mother as “fine, normal, and a good mother” but only provide memories such as “Well, you know, she was always there. She was just a normal mother.” The impression is of a cool, distant relationship with minimal recognition of the child’s emotional need for the parent.
Preoccupied adults correspond to resistant infants. In contrast to dismissing adults, who attempt to minimize the effect of attachment, preoccupied adults cannot turn their attention away from attachment; they are preoccupied with it. These adults are flooded with memories of attachment relations but cannot take the distance necessary to create a coherent, objective narrative. They provide contradictory, rapidly alternating views of their attachment relationships (“She was loving, no she was really selfish.”) accompanied by a gush of vivid memories (“I remember on my senior prom. It’s always about her. It was my night but she kept inserting herself. I wanted to wear my blue heels but she said they made my legs look fat.”). In this case, emotion predominates over rational thought.
There is a strong relationship between security of attachment in parents and security of attachment in their children. Secure adults are more likely to raise secure children and insecure adults are more likely to raise insecure children. However, the type of insecure attachment in adults is less strongly correlated with the type of insecure attachment in their children. Some dismissing mothers may have resistant children and some preoccupied mothers may have dismissing children.
Peter Fonagy and Mary Target have added to the attachment literature with their dual concepts of self-reflective functioning and mentalization. They propose that security of attachment in adulthood involves the capacity for self-reflective functioning, the ability to reflect upon one’s emotional experiences in a thoughtful and coherent way. The ability to mentalize emotional experiences involves the capacity to represent one’s own and others’ mental experiences, that is, to understand and grasp the nature of emotional experience. In their view, the child’s security of attachment depends not only on the mother’s sensitive behavior but also on her psychological sensitivity. When the mother can keep her child’s subjective experience in mind, she teaches the child that emotions can be both understood and communicated. The child’s development of self-reflective functioning is therefore dependent upon the mother’s mentalization of the child’s experiences. Fonagy and Target have applied these concepts to their work with adults with severe personality disorders, many of whom sorely lack both self-reflective and mentalization abilities.
By the final third of the twentieth century, evolutionary concepts were increasingly penetrating psychological theories. For example, both attachment theory and Jungian psychology borrow from evolutionary biology. The field of sociobiology explicitly applies the principles of evolutionary theory to the understanding of social behavior. This approach assumes that at least some part of social behavior is genetically based and therefore has been acted upon by evolution. In other words, when a behavior has survived across thousands of generations, it most likely serves an evolutionary purpose. This approach was first applied to the study of non-human animals; it wasn’t until the 1970s that evolutionary theory was rigorously applied to the study of human social behavior.
Evolutionary psychology is an outgrowth of sociobiology that focuses specifically on the evolutionary roots of human behavior.
Edward O. Wilson (1929–) is considered the father of sociobiology. A professor of entomology (the study of insects) in the Harvard biology department since 1956, he has maintained a lifelong interest in the social behavior of animals. His original specialty was the social life of ants. Wilson’s great contribution was to state that the evolutionary explanation of animal behavior could be applied to the study of human behavior. He did not mean that culture and environment had no influence, only that our behavioral repertoire has its origins in our genetics and has been shaped by the processes of natural selection.
When he first published his classic text Sociobiology: The New Synthesis in 1975, it was met with much resistance. To many people it was politically offensive because it seemed to dismiss the importance of environment. As with eugenics and other earlier movements that proclaimed the heritability of human behavior, it seemed to endorse social inequality as the natural order of things. Over the last few decades, however, sociobiology and evolutionary psychology have become more widely accepted. With advances in brain imaging technology and other methods of studying biology, our understanding of the biological underpinnings of human behavior has grown dramatically. Likewise, our appreciation of the complex interplay between genes and environment has also advanced, so that it is now accepted that a focus on the genetic basis of behavior does not have to mean that environment is irrelevant.
Darwinian evolution is the central explanatory framework for all of biology. All of biological science is understood within the context of evolution. Likewise, human beings are biological animals and our behavior is inextricably tied to our biology. Thus a clear understanding of evolutionary principles is critical to the understanding of human psychology.
Charles Darwin’s theory of evolution has proved key to an understanding of biology, and this has translated, as well, to how scientists understand human psychology (iStock).
Both sociobiology and evolutionary psychology assume our behavior is grounded in our genetics. Genetics determine the range of possible behaviors, the parameters of our behavior. Much of our behavior, however, simply cannot develop without extensive training. For example, we cannot learn to read unless we are taught the necessary skills and unless we are exposed to reading materials. With the proper circumstances, our genetic make-up allows us to learn to read. In contrast, no amount of training will ever lead a cat, a dog, or a pigeon to read. Likewise, no amount of training will ever allow a human being to fly (without artificial support). Thus genetics determine the potentiality of our behavior but genetics alone cannot determine the specific outcomes for any given individual.
Natural selection refers to the effect of the natural environment on the likelihood that genetically based traits will be passed on from one generation to the next. The process goes like this: First, there must be variation in a particular trait within a population. Second, the trait must have some genetic basis. Third, one version of the trait must be better adapted to the environment than another version. Consequently, the animals with the more adaptive trait will bear more young, thus passing more of their genes on to the next generation.
Let’s consider the classic example of the peppered moths of England, which came in two varieties: light-colored moths and dark-colored moths. Originally, there were more light-colored moths than dark ones, as the dark ones stood out against the light-colored tree bark and were easy prey for the local birds. At this point, light color was more adaptive than dark color.
During the Industrial Revolution in England, however, the trees became covered with soot. This meant the dark-colored moths were better adapted to their environment than the light-colored moths, as they no longer stood out against the soot-covered tree bark. Now it was the light-colored moths that were easy prey for the birds. Hence, the population of dark-colored moths grew relative to the population of light-colored moths as more of the former survived to reproduce and pass their genes on to the next generation. Thus natural selection acted on the moth population as a result of their coloring. Of note, Darwin’s concept of natural selection does not explain how variation in the population comes to be, only how one trait comes to be more frequent in the population than another trait.
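The logic of this example can be sketched in a few lines of Python. The survival rates below are invented for illustration; the point is simply that a trait conferring even a modest survival advantage becomes more common over generations.

```python
import random

def generation(population, survival):
    """One generation: each moth survives predation with a color-dependent
    probability, and each survivor leaves two offspring of the same color.
    The survival rates are invented purely to illustrate the principle."""
    survivors = [m for m in population if random.random() < survival[m]]
    return survivors * 2  # offspring inherit the parent's color

random.seed(1)
# Start with mostly light moths, as before industrial soot darkened the trees.
population = ["light"] * 900 + ["dark"] * 100
sooty_bark = {"light": 0.3, "dark": 0.6}   # dark moths now better camouflaged

for gen in range(1, 6):
    population = generation(population, sooty_bark)
    dark_share = population.count("dark") / len(population)
    print(f"generation {gen}: {dark_share:.0%} dark moths")
```

Running the sketch shows the dark moths' share of the population climbing generation after generation, even though no individual moth ever changes color.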
Charles Darwin (1809–1882) is easily one of the most influential figures in modern science. His theory of evolution has influenced every scientific discipline involved with living organisms. Prior to the theory of evolution, the variety of life on earth was seen as a product of God’s creation. All creation occurred according to the book of Genesis with no changes since. To suggest that animals had changed over time implied that God’s creation was less than perfect. Thus the theory of evolution challenged Christian theology about the very origins of life.
Because of this, Darwin’s theory was highly controversial in its day. In some circles it remains so today. Scientifically, however, Darwin’s basic premises have never been seriously challenged. Darwin was not the first proponent of a theory of evolution. In fact, his grandfather Erasmus Darwin (1731–1802) contributed to early work on the subject. What was missing in Darwin’s day was an exact explanation of the mechanism of evolution and appropriate supporting evidence. Darwin gathered evidence for his theory on his famous voyage aboard the H.M.S. Beagle, which left England in 1831 and spent five years surveying the coasts of South America before circling the globe. It took him more than twenty years, though, to synthesize his observations into a coherent theory.
By the time Darwin published his famous book On the Origin of Species by Means of Natural Selection in 1859, the scientific community was ready to receive it. It was an immediate sensation. Darwin’s understanding of heredity, however, was not well developed. The monk Gregor Mendel did not publish his study of pea plants until 1866, and his work was not appreciated until the beginning of the twentieth century. The current view of evolution reflects a synthesis of Darwin’s theory of natural selection and Mendelian genetics.
Evolution occurs through the process of reproductive success. Those organisms that pass their genes on to the next generation have succeeded; their genes and the traits associated with them have survived into the next generation. In evolution, success really means survival. If a trait is common in a population, this means that the genes of previous generations with that trait have survived to the present.
Evolutionary fitness is the ability to pass on one’s genes to the next generation. If there is a larger proportion of gene A in the present generation than in the previous one, then the organism with gene A has demonstrated evolutionary fitness. Conversely, if the proportion of gene B has decreased across generations, then the organism with gene B has poor fitness.
We generally assume that animal behavior is adaptive, that it has evolved because it confers fitness on the organism whose genetic make-up produces the behavior. For example, we assume that the mating dance of pigeons—in which they strut back and forth, jut their necks in and out and emit loud cooing noises—is adaptive. It increases male pigeons’ access to females and thus to reproductive success. This display behavior may make the male look bigger and stronger than he actually is. Females are more likely to select such males as mates because selection of large and strong males may confer an evolutionary advantage for the females’ offspring. Male displays of strength and size are very frequent strategies for access to females, evident in an extremely wide array of species, including our own. If we consider human males’ predilection for muscle cars and bodybuilding, we can see how the principles of sociobiology might indeed be relevant to the behavior of humans.
Survival of the fittest means that those individuals of a species with genetic traits that are best adapted to the particular environment are most likely to mate and pass those traits on to the next generation. Importantly, survival of the fittest does not mean that the most aggressive and dominant will pass their genes on to the next generation. Dominance is one evolutionary strategy, but it is not the only one. For example, in some fish species, male fish can disguise themselves as females and then sneak into the dominant male’s territory to mate with his females. In this case, fish that are not the most dominant nevertheless reproduce successfully. Moreover, in many circumstances, cooperation and altruism can be useful evolutionary strategies—as effective, if not more effective, than competition and aggression.
Jean-Baptiste Lamarck (1744–1829) was a French biologist who contributed to pre-Darwinian theories of evolution. In keeping with the ideas of Charles Darwin’s grandfather, Erasmus Darwin, Lamarck believed in the inheritance of acquired characteristics. In other words, an animal adapts to the environment and these changes are then passed on to the animal’s offspring via some form of heritability. Genetic change takes place because of the animal’s behavior. The classic example involves the long neck of the giraffe. It was thought that giraffes stretched their own necks by reaching up to eat the leaves off the tops of tall trees. This trait was then passed on to later generations. Similarly, mountain goats grew a thicker coat in a cold climate and then passed this trait on to their offspring.
Although Lamarckian evolution has a kind of intuitive appeal, there has never been any evidence to support its central premise, that acquired behavior is directly coded into the genes. In Darwinian evolution, genetic changes occur through random mutation. Some of these genetic mutations will improve the animal’s adaptation to the environment, though most will not. Those genes that do improve adaptation are more likely to be passed on to the next generation. Hence the environment influences reproductive success but does not act upon the genes directly.
Social Darwinism refers to a loose group of theories that arose in the late nineteenth and early twentieth centuries, following publication of Charles Darwin’s theory of evolution. This was a time of European imperialism, intense immigration into the United States, and growing masses of urbanized poor due to the industrial revolution. Thus, social prejudices spread among the European and American elite who convinced themselves that the conquered and the impoverished were somehow deserving of their status. Likewise, the idea of survival of the fittest was used to justify this viewpoint. Darwin did not intend evolution to be racist or a justification of social inequity. His theory was an explanation of how animals adapted to their environments. It was not a moral prescription for society. But his work was misinterpreted to mean that only the strongest and most worthy will survive and that social disadvantage was a reflection of genetic inferiority. Galton’s theory of eugenics is a good example of Social Darwinism.
Altruism, which involves helping others at some cost to the self, has long been a puzzle to evolutionary theorists. How is altruistic behavior evolutionarily adaptive? It is certainly common enough in the animal world. Worker bees live their entire lives in service to the queen bee; they do not even reproduce. Alarm calls are also altruistic. When an animal sounds the alarm, warning others of the presence of a predator, the animal increases its visibility to the very predator it is warning against. Likewise, altruistic behavior is widespread in human beings. We give money to charity, take care of other people’s children, and may even donate a kidney to a relative in need.
Although altruistic behavior may cost the individual animal, it may still confer reproductive success if it helps other animals that share the same genes. Thus, we would expect altruistic behavior to be most common among close relatives, which is universally the case. What is also found is that the cost and risk of altruistic behavior decreases as the biological relationship grows more distant. Think about it. Most of us are willing to donate used clothes to children in another country. This is a low cost and low risk investment. But would you be willing to sell your house and donate the proceeds to a complete stranger? Would you be willing to donate a kidney to a stranger? Or would you be more likely to donate your kidney to your sister, especially if she was likely to die without it?
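Evolutionary theorists often formalize this trade-off with Hamilton's rule, which is not named in the passage above but captures the same idea: an altruistic act is favored when the benefit to the recipient, discounted by genetic relatedness, outweighs the cost to the helper. The sketch below uses invented benefit and cost values purely to illustrate the inequality.

```python
def altruism_favored(relatedness, benefit, cost):
    """Hamilton's rule: altruism is favored by selection when r * B > C,
    where r is the coefficient of relatedness, B the benefit to the
    recipient, and C the cost to the altruist (units are arbitrary here)."""
    return relatedness * benefit > cost

# The same costly act (made-up benefit and cost), directed at different recipients:
for label, r in [("sister", 0.5), ("cousin", 0.125), ("stranger", 0.0)]:
    print(f"help a {label:8s} (r = {r}):", altruism_favored(r, benefit=10, cost=4))
```

With these made-up numbers, helping a sister passes the test while helping a cousin or a stranger does not, which mirrors the intuition that costly help flows most readily to close kin.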
Helping others when it in no way benefits oneself—such as donating blood—is called altruism. Scientists, believing that evolution has been based on self-preservation, have long wondered what the advantages could be of altruism (iStock).
Because sexual behavior has such direct impact on reproductive success, sociobiologists have given a good deal of thought to the evolutionary significance of various forms of sexual behavior. In many species, males and females may have different strategies for reproductive success. Females devote an enormous amount of time and energy to bearing and raising their young. The more complex the species, the more this is the case.
For example, humans, chimpanzees, and dogs provide much more maternal care than turtles do. Therefore it is in the evolutionary interest of many females to be highly selective in their choice of mates and to seek males that can contribute to care of the young. Males, on the other hand, do not bear young and are not physically bound to the care of the young. They can develop a wide array of successful reproductive strategies. They can inseminate a large number of females but devote few resources to the care of their offspring (e.g., buffalo, wildebeests), or they can inseminate fewer females, have more offspring with each, and give much more time and energy to the care of their young (e.g., trumpeter swans, gibbons). Some males (e.g., gorillas, fur seals) compete for exclusive access to a group of females, devoting considerable energy to protecting their harems from encroachment by rival males.
Across history, human males have exhibited all of the above reproductive strategies. They can be monogamous, promiscuous, or polygynous. Some even have harems. Which strategy is adopted depends on numerous environmental contingencies, such as population density, scarcity of resources, culture, religion, social status, and so on. While human females are not immune to the temptations of multiple partners, societies with polygynous marital patterns (men with multiple wives) are far more common than those with polyandrous marital patterns (women with multiple husbands).
Because females invest so much more energy into reproduction than males do, females are high-energy resources and hence very valuable to males. Consequently males are likely to evolve strategies to compete for them. Sexual selection means that any physical trait or behavioral pattern that increases access to mates will be evolutionarily advantageous. Sexual selection is most pronounced in polygamous species, where a sort of winner-take-all system results in clear winners and losers.
One of the most common competitive strategies for males involves physical size and strength. Across many, many species larger males have more offspring. Likewise dominant males can jealously guard access to multiple females, creating harems that they defend aggressively. However, in these circumstances non-dominant males will be excluded from access to females. Therefore, the non-dominant males have developed alternative strategies.
In the competition for sexual partners, many species have adapted in amazing biological and behavioral ways. Peacocks, for instance, have evolved so that they have stunning feathers to attract peahens. Humans, too, have developed their own competitive strategies (iStock).
In several species, including stickleback fish, prairie chickens, and elephant seals, smaller and non-dominant males disguise themselves as females to gain access to the dominant males’ territory and the females within. These strategies work in direct male-to-male competition. But females are also often highly selective, so males have to compete for female favor as well. It is likely that the elaborate display rituals evident in many birds reflect behaviors that evolved to appeal to female preference. Such rituals often do two things. They can advertise the males’ size and strength, often in exaggerated form. They can also attempt to persuade the female of the resources the male can make available for child rearing. For example, male scorpion flies give a high-calorie gift to their prospective mates. Female scorpion flies, in turn, prefer males who make larger gifts. Perhaps the tendency for human males to buy women expensive jewelry and take them out to high-priced restaurants is a related phenomenon.
Natural selection works on the comparative advantage of genetic traits. Perhaps because of this, evolutionary theorists have tended to emphasize the competitive nature of social relations. But this paints a very incomplete picture. Social behavior in all highly social animals involves much more cooperation than competition or antagonism. If social life was entirely a Hobbesian free-for-all, there would be little reason for humans and other animals to seek each other out. Just as evolution results in competitive and aggressive behavior, it also results in the capacity for strong social bonds, parental devotion to children, affection, cooperation, empathy (in humans at least), and many other traits that support cohesive social groups.
Although male-to-male competition can be very dramatic in the animal world and consequently has received more attention from sociobiologists, female-to-female competition certainly exists. Some female birds roll other females’ eggs out of the nest or otherwise interfere with their reproduction. In complex social groups, females can compete for status. In large monkey troops, for example, high-status females and their offspring have many advantages over lower-status females. The presence of monogamy can also affect female-to-female competition.
Monogamy is more common when a prolonged or intensified period of dependence in the young favors paternal investment in child rearing. When males are monogamous, they are likely to be more selective in their choice of a mate as they invest more in each partner. Hence females may need to compete for males. In these cases, females who show signs of greater reproductive fitness are often more successful in attracting males. We can certainly consider how this might apply to human females, who characteristically spend considerable energy and time maintaining and enhancing their physical attractiveness to males. Across human cultures, the standards for female beauty almost universally relate to youth and physical health, which corresponds to a long period of fertility.
If we consider the size of the beauty industry, which produces women’s makeup, jewelry, clothing, skin creams, hair products, and many other forms of female adornment, we can see how evolutionary pressures may be in play within our own culture.
Sociobiology was extremely controversial in its early days in the 1970s and 1980s. To say that a behavior had a genetic basis seemed to imply that it was morally desirable or inevitable. Further, the emphasis on genetics was seen to invalidate the importance of environment. We now know that environment and genetics interact in almost all of human behavior. While genetics may set the outer limits of behavior, environment has huge influence on the expression of behavior and even on the expression of genes themselves. Genes can be turned on and off according to environmental influences. The biggest problem with evolutionary explanations of human behavior, however, involves the profound difficulty distinguishing between proximate and ultimate levels of causation.
The ultimate level of causation refers to the behavior’s evolutionary significance; how the behavior enhances reproductive fitness. The proximate cause refers to the immediate cause of a behavior, whether that be hormonal, neurological, cognitive, interpersonal, or cultural. For example, the proximate cause of humans eating more cookies, cake, and ice cream involves the psychological tendency to desire and enjoy foods with high sugar and fat content. The ultimate cause involves the high caloric content of both sweet and high fat foods, which promotes physical survival in resource-scarce environments.
Such environments were typical until only just recently. However, distinguishing between proximate and ultimate causes in human beings is extremely difficult, far more difficult than it is in simple animals, like insects, whose behavior is much more closely tied to their genetics. This is because one of the most important evolutionary strategies of human beings involves our remarkably developed intelligence. No other animal on earth can learn information of such complexity and modify its behavior in such diverse ways. Therefore, due to our remarkable behavioral flexibility, it is very difficult to distinguish what behavior is learned and what is genetically based.
In the absence of rigorous scientific research, sociobiologists can fall back on speculation, which can easily be biased by prevailing prejudices: for example, that women are naturally subordinate to men, or that men are naturally aggressive. Therefore it is critical that rigorous scientific research support any claims about the evolutionary significance of human behavior.
Sociobiology relies on careful animal studies in which the frequency of any given social behavior can be correlated with some marker of evolutionary significance. For example, the frequency of altruistic behavior in baboons can be correlated with the degree of biological relatedness between animals, which will then translate into the proportion of shared genes (50 percent for parents and children, 50 percent for full siblings, 25 percent for half siblings, 12.5 percent for cousins).
In humans, twin studies have been used to differentiate the effects of genetics from the effects of environment. Additionally, anthropological studies that compare social behavior across different cultures are also used in an attempt to separate the effects of genetics and environment. These studies become harder to do with time, however, as globalization leaves fewer cultures truly independent of each other.
Studies comparing identical twins, fraternal twins, and non-twin siblings have been used in an attempt to tease apart the role of genes and environment in various psychological traits (iStock).
Twin studies have largely focused on IQ tests, comparing monozygotic (identical) and dizygotic (fraternal) twins. Monozygotic twins grew from the same fertilized egg and share 100 percent of their genes. Dizygotic twins grew from two separate eggs and therefore share, on average, only 50 percent of their genes. Twins reared together and twins reared apart have also been compared. These studies show that intelligence does have a significant genetic component, as monozygotic twins score much more similarly on IQ tests than do other types of siblings. However, these studies have been criticized because, while the percentage of genes that differ from one subject to another has been carefully measured, the degree to which environments differ between subjects is not clear at all. Moreover, there is considerable evidence showing that many environmental factors, such as socio-economic status, years of education, and mother’s level of education, also have very strong influences on IQ.
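One common way researchers turn such twin comparisons into a single number is Falconer's formula, which is not described above but follows the same logic: if identical twins resemble each other much more than fraternal twins do, the extra resemblance is attributed to their extra shared genes. The correlations below are invented, though they are roughly in the range reported in the IQ literature.

```python
def falconer_heritability(r_mz, r_dz):
    """Falconer's formula: h^2 is approximately 2 * (r_MZ - r_DZ), where r_MZ
    and r_DZ are the within-pair score correlations for identical and
    fraternal twins. Doubling the difference estimates the genetic share,
    because identical twins share twice the segregating genes."""
    return 2 * (r_mz - r_dz)

# Invented correlations used only to show the arithmetic.
print(f"estimated heritability = {falconer_heritability(r_mz=0.85, r_dz=0.60):.2f}")
```

Note that estimates like this inherit all the criticisms mentioned above, since the formula assumes that the environments of identical and fraternal twin pairs are equally similar.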
Neurobiological theories of psychology investigate the links between the brain and the mind. The assumption is that all psychological processes can be linked to specific patterns of brain activity and that understanding the neurobiological substrates of behavior can only enhance our understanding of human psychology. With the remarkable technological advances of recent years, our ability to study the workings of the brain and its relationship to psychological processes has grown at an extremely rapid pace.
Neuropsychology involves the study of specific psychological functions that can be directly linked to brain processes. Alexander Luria (1902–1977), one of the fathers of neuropsychology, studied brain-injured soldiers in World War II to determine how different kinds of brain damage impacted intellectual functioning. Modern neuropsychological research helps identify the specific psychological functions that are associated with specific patterns of brain activity. For example, the encoding of information into long-term memory is mediated by a brain area called the hippocampus.
Animal models are a critical aspect of neurobiological research because, for obvious ethical reasons, scientists can perform much more invasive procedures on animal brains than on human brains. In fact the ethics of animal research is a controversial and difficult area. Studies of animal brains shed important light on the workings of the human brain but they also highlight the ways that brains differ across species. When the brains of various animal species are compared, we can generate hypotheses about how our own brains developed across evolution. For example, the frontal lobe, which is associated with planning and other complex cognitive functions, is proportionately larger and more convoluted (providing more surface area) in animals with higher intelligence. This suggests that the frontal lobe grew in size across human evolution as intelligence became an increasingly important evolutionary strategy.
Only a few decades ago, it was not possible to observe the human brain in action. Autopsies after death and neuropsychological studies of brain-injured patients were the main methods of neurobiological research. With the advent of brain imaging technology, however, it became possible to obtain snapshots of the living human brain. Computerized tomography (CT) scans and magnetic resonance imaging (MRI) allowed pictures of brain anatomy. Positron emission tomography (PET) and single photon emission computed tomography (SPECT) scans allowed investigation of the actual workings of the brain via recorded patterns of glucose uptake or blood flow.
More recently, functional MRI (fMRI) allows rapidly repeated images of brain activity, permitting study of brain activity over time. In effect, brain imaging technology has moved from still photos to moving pictures. Moreover, subjects can be scanned while performing various actions, opening up an enormous array of research possibilities that will take many years to fully exploit.
This photograph shows a Magnetic Resonance Imaging (MRI) brain scan for a 22-year-old man. The scan includes twenty vertical slices starting from the man’s right ear and running through to his left ear. Brain imaging technology allows us to view the workings of a living brain, an extraordinary accomplishment possible only in the last few decades (iStock).
Cognitive science can be seen as an outgrowth of the cognitive revolution. Cognitive scientists use tools of evolutionary psychology, linguistics, computer science, philosophy, and neurobiology to investigate mental phenomena from a scientific vantage point. One of the aims of cognitive science is to create complex computer programs to model psychological and brain processes. Cognitive scientists address a broad range of psychological problems including memory, language, learning, and decision making. Theories of neural networks address how the vast network of brain cells, or neurons, work together to create complex behaviors. Out of these investigations, many remarkable technological innovations have developed, including voice recognition software and advances in robotics.
Artificial intelligence (AI) is a computer-based model of intellectual processes. AI scientists build computer programs to simulate human intelligence. Their implicit assumption is that psychology can be reduced to mathematical algorithms, the sets of mathematical rules from which computer programs are built. As of yet, AI has been restricted to relatively simple aspects of human psychology, such as visual perception and object recognition. Nonetheless, AI models have become increasingly sophisticated and have taken on the complex problem of learning. How can a computer program modify itself in the face of new information? Pattern recognition software depends on a kind of teaching. The programs are designed to respond to incoming feedback from the outside world. Responses that are reinforced are strengthened and those that are not reinforced are weakened. In this way, AI is similar to both the behaviorist and evolutionary models of psychology.
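As a rough illustration of this kind of feedback-driven learning, here is a minimal, generic perceptron-style sketch in Python. It is not a model of any particular AI system; the training examples are invented, and the "reinforcement" is simply an error signal that strengthens or weakens connection weights.

```python
# A toy pattern-recognition "program" that strengthens responses that turn out
# to be correct and adjusts those that do not, in the spirit of the feedback-
# driven learning described above.
def train(examples, passes=20, rate=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(passes):
        for features, target in examples:
            output = 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0
            error = target - output          # feedback from the "outside world"
            # Correct (reinforced) responses leave the weights unchanged;
            # incorrect ones shift the weights toward the desired answer.
            weights = [w + rate * error * x for w, x in zip(weights, features)]
            bias += rate * error
    return weights, bias

# Invented examples: respond "1" only when both features are present.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(data)
for features, target in data:
    prediction = 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0
    print(features, "->", prediction, "(expected", target, ")")
```

After a few passes through the examples, the program's weights settle so that it reproduces the pattern correctly, even though no one ever spelled the rule out for it.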
Computer models of human psychology are based on mathematical rules. It is a philosophical question whether the mind can ever be wholly explained by a finite set of mathematical equations. A new branch of philosophy called neurophilosophy endorses this view while the holistic tradition of William James, the Gestalt theorists, and the humanistic psychologists would argue that the whole is more than the sum of its parts. As of now, there is no definitive answer to this question. Another controversy regarding cognitive science and artificial intelligence involves the concept of qualia. This refers to the subjective quality of a mental process, the yellow of yellow, the sadness of sad. AI may well be able to model the neurological processes underlying the perception of the color yellow, but can it explain how these neuronal firing patterns produce the experience of yellow? At present, we simply do not know the answer to these fundamental philosophical questions.
Psychological research provides the absolute foundation of modern psychology. It is the bread and butter, the bricks and mortar, of the science of psychology. Research allows us to study the questions of psychology in a rigorous and systematic way so psychology can be more than a collection of subjective opinions and anecdotal observations.
Arguably, no. Human behavior is too complex and influenced by too many factors for us ever to presume 100 percent certainty in our conclusions. Even the best studies depend to some extent on subjective judgments. Therefore we aim for the most rigorous methods possible, accounting for possible confounds, biases, and limitations in our research. That is also why we employ the peer review method for quality control before publishing our studies in journals, so that other experts in the field can independently and anonymously review each paper. Empirical research is the most rigorous method we have, but it is not a crystal ball. Luckily, the best way to refute erroneous research is more research. Research can be used to correct its own mistakes.
A variable is the building block of psychological research. It is the fundamental unit of a study. Any trait or behavior that we wish to study is translated into a variable so that we can measure it with numbers. We use the term variable because we are studying traits that vary across individuals or across time. If we want to study the relationship between red hair and school achievement, we must first operationalize our traits of interest, that is, turn them into variables. We will code hair color as 1 = red hair, 0 = not red hair. We will operationalize school achievement by using grades, translating A through F into a 13-point scale (A+ = 13, A = 12, A- = 11, B+ = 10, etc.). Having translated our traits of interest into numerical variables, we can now use mathematics to calculate the relationships between the variables. This, in effect, is the nuts and bolts of psychological science.
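A small Python sketch makes the coding step concrete. The students and their grades are invented; the grade-to-number mapping simply extends the 13-point scale described above.

```python
# Turning traits into variables. The students are made up; the grade scale
# follows the 13-point coding described in the text.
GRADE_POINTS = {"A+": 13, "A": 12, "A-": 11, "B+": 10, "B": 9, "B-": 8,
                "C+": 7, "C": 6, "C-": 5, "D+": 4, "D": 3, "D-": 2, "F": 1}

students = [
    {"name": "Ana",  "hair": "red",   "grade": "B+"},
    {"name": "Ben",  "hair": "brown", "grade": "A-"},
    {"name": "Caro", "hair": "red",   "grade": "C"},
    {"name": "Dev",  "hair": "black", "grade": "B"},
]

# Operationalize: red hair -> 1, anything else -> 0; letter grade -> number.
hair_variable = [1 if s["hair"] == "red" else 0 for s in students]
achievement_variable = [GRADE_POINTS[s["grade"]] for s in students]
print(hair_variable)          # [1, 0, 1, 0]
print(achievement_variable)   # [10, 11, 6, 9]
```

Once the traits are numbers, any of the statistical tools discussed later in this section can be applied to them.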
A number of different methods allow flexibility in the way we conduct psychological research. In experimental studies the variables are controlled and manipulated to give the maximum precision to our observations. The drawback of such control is that we cannot know how well the behavior observed in the artificially controlled environment will generalize to everyday life. In an observational study, we systematically observe behavior in its natural environment. We sacrifice a degree of control and precision for naturalism. Cross-sectional studies assess behavior at one point in time. Longitudinal studies observe behavior over a period of time, sometimes over decades. In quantitative studies, behavior is quantified into numbers. Even though quantitative research is the most common form of psychological research, qualitative research has gained more attention recently. This involves careful observation without the use of numbers.
The history of scientific research with human subjects has been fraught with abuses. Examples abound, including the Nazis’ murderous experiments on concentration camp victims and the infamous Tuskegee syphilis study, begun in the 1930s, in which poor and uneducated African American men with syphilis were deliberately deprived of available treatments.
Psychological research is not excluded from this disturbing history. Examples include Stanley Milgram’s work in the 1960s, in which subjects were falsely led to believe that they were causing pain, injury, and even death by administering electric shocks to another person. John Watson’s treatment of Little Albert provides another example.
Starting in the 1940s, a series of national and international laws was instituted to protect the rights of human subjects in research studies. In 1947 the Nuremberg Code laid down an international code of ethics regarding human experiments. In the 1960s, a series of laws was passed in the United States further developing these protections. The establishment of independent review boards to oversee the safety and ethics of human research in all American research institutions dates from this period.
Currently, Institutional Review Boards (IRBs) or Human Subjects Review Committees must approve all studies conducted with human subjects. Most academic journals require IRB approval of any study submitted for publication.
Most psychological studies are quantitative and rely on the translation of psychological traits and behaviors into variables that can be analyzed statistically. Qualitative research, however, also has a place in psychological research. In qualitative research, a smaller number of subjects is observed or interviewed intensively. The observations are recorded not in numbers but in a long, detailed narrative. From these narratives, the researcher identifies themes that can be explored with greater precision in later quantitative research. Thus, qualitative research is hypothesis generating rather than hypothesis testing. It is more broad-based and open-ended than quantitative research but less precise and reproducible. It is best understood as a preliminary type of research.
In general, the role of math is different in the social sciences than in the hard sciences. In the hard sciences, especially physics, mathematics is used to identify fixed laws of nature. Once a mathematical equation is identified to explain the behavior of an object, the equation can be used to predict the behavior of the object with extraordinary precision. Consider the equations that send rockets into space. Thus, mathematics in the hard sciences is predictive and deterministic.
All of the object’s behavior can be predicted by the equation. Aspects of modern physics, such as quantum mechanics and Heisenberg’s Uncertainty Principle, do contradict this certainty, however, although only in the realm of the extremely small (e.g., subatomic particles) or the extremely large. In psychology, the topics of study are so complex that it is not possible to predict all human behavior with mathematical equations. Whether that will ever be possible is debatable, but it certainly has not been done yet. What does that mean for the role of mathematics in psychological science? In psychology, mathematics is probabilistic. We estimate the likelihood that certain statements are true or not. Further, these estimates are based on the aggregate, on the behavior of groups. Therefore, while we may be able to say a lot about the likely behavior of groups, we are unable to predict the behavior of any given individual with certainty. For example, based on samples showing increased beer drinking among male college students compared to female college students, we can predict that, in general, male college students will drink more beer than female students, but we cannot predict the behavior of any given college student.
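A brief sketch, with invented numbers, shows what such a probabilistic, group-level statement looks like and why it cannot be turned into a prediction about any single person.

```python
# Group-level prediction vs. individual prediction, with invented numbers.
from statistics import mean

# Weekly beers reported by two made-up samples of college students.
men   = [0, 2, 9, 4, 12, 3, 6, 0, 8, 5]
women = [1, 0, 4, 2, 7, 0, 3, 1, 5, 2]

print(f"group means: men {mean(men):.1f}, women {mean(women):.1f}")
# The group means differ, but the ranges overlap: some individual women
# report more drinking than some individual men, so group membership alone
# cannot predict any single person's behavior.
print("overlap:", max(women), ">", min(men))
```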
In psychological research, we try to draw conclusions about a larger population from observations of a small sample. We cannot study all male college students or all people with schizophrenia, so we study a sample of the population of interest and then try to apply our findings to the larger population. For this reason it is critical to make sure the sample is similar to the larger population. There are many ways the sample can vary from the larger population. The way we recruit our study subjects may bias the sample right from the start. For example, if you want to study illegal behavior, you are likely to find your sample in the judicial system. Right off the bat your sample is biased toward people who have been arrested, leaving out the people who never got caught. If you want to study people with depression, you are likely to study people in the mental health system, and your sample will be biased toward people who seek treatment. Because it is virtually impossible to remove all problems from sample selection, researchers must carefully describe their samples so that the applicability of the findings to a larger population, or the study’s generalizability, can be assessed.
Millions of psychology majors grit their teeth and roll their eyes at the very thought of statistics. Nonetheless, statistics are a fundamental part of psychological research. Statistics provide a mathematical technique to measure the relationships between two or more variables (traits of interest such as intelligence, aggression, or severity of depression). Statistics can show how these variables relate, the strength of their relationship, and the probability that the relationships found in a given study are true findings rather than a statistical fluke, that is, due to chance. The most common statistics are measures of central tendency, specifically the mean, median, and mode; measures of group differences, such as the t-test, ANOVA, and MANOVA; and measures of covariation, such as correlation, factor analysis, and regression analysis. Measures of covariation assess the degree to which two or more variables change in relation to each other. For example, height and weight covary (or are correlated), while age and ethnicity do not: in general, tall people weigh more than short people, whereas ethnicity does not change with age.
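As an illustration of a group-difference test, the sketch below runs a t-test on two sets of invented exam scores, assuming the widely used scipy library is installed; scipy.stats also provides the other tests named above.

```python
# A group-difference test (independent-samples t-test) on invented exam scores.
from scipy import stats

group_a = [72, 85, 78, 90, 66, 81, 75, 88]
group_b = [64, 70, 59, 73, 68, 62, 71, 66]

result = stats.ttest_ind(group_a, group_b)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# A small p-value suggests the difference in group means is unlikely
# to be a statistical fluke due to chance alone.
```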
This is a graph showing how 69 people scored on a test measuring traits of a personality disorder. The vertical axis represents the count, or the number of people who attained each score. The horizontal axis represents the scale score. In this graph, the vast majority of the subjects scored on the low end of the scale while a few people had much higher scores. When the distribution of scores is concentrated toward one end of the scale, we say there is a skewed distribution. In a normal distribution the majority of scores are in the middle with a few scores moving out toward the far ends of either side. When the distribution is skewed like this, the mean, median, and mode separate from each other.
These are ways to characterize a population or a sample. The mean is the average score. It is calculated by dividing the sum of the scores by the number of scores. For example, the mean of the series {4, 7, 8, 9, 9} equals 7.4, that is, (4 + 7 + 8 + 9 + 9) divided by 5. The median refers to the number that falls in the middle of the sample; half of the scores lie above it and half lie below. In this case, the median is 8. The mode refers to the most common score. In this case, the mode is 9. Each measure of central tendency has different advantages and disadvantages.
The mean is very sensitive to extreme values, also known as outliers, and so can give a distorted view of a population when some values are much higher than the rest. The median is not affected by outliers and thus can be a more stable measure of central tendency. For example, the mean of 8, 8, 9, 12, 13, and 102 is approximately 25.3, but the median is 10.5. This distinction is very important when describing characteristics such as national income. Because a small percentage of people have very large incomes, the mean income in the United States is higher than the median income. For this reason, the U.S. Census Bureau reports median income rather than mean income. The mean, on the other hand, is more useful in many statistical analyses.
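As a quick illustration, here is a short sketch using Python's standard library to compute the three measures of central tendency and to reproduce the outlier example above; the numbers are the ones used in the text.

```python
# Mean, median, and mode with Python's standard library.
from statistics import mean, median, mode

scores = [4, 7, 8, 9, 9]
print(mean(scores))    # 7.4
print(median(scores))  # 8
print(mode(scores))    # 9

# The mean is pulled upward by a single outlier; the median barely moves.
with_outlier = [8, 8, 9, 12, 13, 102]
print(round(mean(with_outlier), 1))  # approximately 25.3
print(median(with_outlier))          # 10.5
```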
Correlation is one of the most common ways of evaluating the relationship between two variables. If one variable increases at the same time another one increases, the two variables are positively correlated. For example, gregariousness and number of friends are well correlated. The more gregarious a person is, the more friends they are likely to have; less gregarious people are likely to have fewer friends. If one variable increases while another decreases, the two variables are negatively correlated. Age and impulsivity are negatively correlated: the older someone gets, the less likely they are to engage in impulsive behavior, while younger people are more likely to do so. If there is no relationship between the variables, they are uncorrelated. Month of birth and mathematical skill have no relationship; we would not expect the month of birth to have any impact on a person's mathematical ability.
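The sketch below, using hypothetical data, shows what these three patterns look like numerically. The Pearson correlation coefficient ranges from -1 (a perfect negative relationship) through 0 (no relationship) to +1 (a perfect positive relationship).

```python
# Positive, negative, and near-zero correlations with hypothetical data.
from statistics import correlation  # Pearson's r; requires Python 3.10+

gregariousness = [2, 4, 5, 7, 8, 9]
num_friends    = [3, 5, 6, 9, 9, 12]
age            = [18, 25, 34, 47, 58, 70]
impulsivity    = [9, 8, 6, 5, 3, 2]
birth_month    = [1, 4, 6, 7, 10, 12]
math_score     = [88, 62, 75, 91, 70, 84]

print(round(correlation(gregariousness, num_friends), 2))  # strongly positive (near +1)
print(round(correlation(age, impulsivity), 2))             # strongly negative (near -1)
print(round(correlation(birth_month, math_score), 2))      # near zero
```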
We trust the scientific method to give us reliable knowledge. Nevertheless, research should never be taken at face value. There are many ways a study can be biased, and it is extremely important to be able to interpret the results of a study critically. The issue of validity is of particular importance. Are the results valid, or is the study flawed to the extent that the conclusions are not supported by the data? Internal validity refers to the integrity of the study methods. Is there a fatal flaw intrinsic to the design of the study? For example, imagine a study comparing the effectiveness of two drugs in which one drug had passed its expiration date. In this case, drug B may appear less effective than drug A simply because it had expired. External validity refers to the extent to which the results can be applied to a larger population. A study of attitudes toward religion that only includes atheists will have limited external validity. It may be an accurate measure of the subjects' religious beliefs, but the study would not tell us much about non-atheists. In general, internal validity is more important than external validity.
A confound is something that biases the results of a study. It is a third, extraneous variable that accounts for the relationship between the two variables of interest. For example, much of the early literature on intelligence tests found that Americans of northern European descent scored higher than immigrants from southern or eastern Europe. These results were confounded by language fluency, as the immigrants were not fluent in English. We cannot conclude that the difference in test scores across ethnic groups is due to intelligence if it is confounded by language ability. There are statistical techniques to control for confounds, but they are not appropriate in all cases, and it is always better, if possible, to avoid confounds in the first place.
If the results of a study can be applied to a larger population, we say the study is generalizable. Another term for generalizability is external validity.
Psychological tests are the bread and butter—the currency—of psychological science. Research in psychology depends upon the measurement of psychological traits, which can only be accomplished with psychological tests. Nonetheless, psychological traits are inherently difficult to assess. They are not concrete objects that are obviously measured, like the number of green peas or the height of a giraffe. They are abstract and intangible traits like love or happiness or self-esteem that can neither be seen, touched, nor counted and may be interpreted differently by different people. Therefore a critical part of psychological research involves the construction of tests that can measure psychological traits in a systematic and reliable way.
There are many forms of psychological tests, all of which offer both advantages and disadvantages. Perhaps the most common form of test is a self-report questionnaire, in which a subject answers a series of questions that give information about one or more psychological traits. These tests are quick and easy to develop, administer, and score, but they are limited by the likelihood of inaccuracies in the subject's self-report.
Clinician-administered questionnaires allow the clinician to make the final scoring decision based on the subject’s responses to each question.
Interviews, like questionnaires, involve a series of questions administered to the subject, but the interviewer has room to follow up each question with probes to obtain more information or clarify responses.
Projective tests, like the TAT or the Rorschach, ask the subject to complete a task (e.g., to tell a story based on a picture), which is intended to reveal characteristic ways of thinking, feeling, and behaving. The subject, however, is unaware of the information being revealed.
In cognitive tests, the subject completes various tasks that involve intellectual skills, like memorizing a list of words or arranging blocks to match a pattern.
Sensory or motor tasks likewise measure sensory skills, such as sensitivity to touch, or motor skills such as visual-motor coordination.
Tests in these last three categories are often called objective tests because they involve the assessment of objective behavior.
The two excerpts listed below give sample items from a psychological test measuring various emotional and behavioral traits. The first group of questions measures anger regulation and the second group of questions measures sustained initiative. The questions can either be read aloud by the examiner in an interview format or given to the subject to fill out as a self-report questionnaire. Note how the answers are translated into numbers, which can then be added together to form a total score.
How frequently have any of the statements listed below been true for you in the past five years?
Some people have a difficult time getting themselves to do things they either should do or would like to do. Over the past five years, how frequently have any of the following statements applied to you?
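As a minimal illustration of how such answers become a total score, the sketch below maps verbal frequency ratings onto numbers and sums them. The response options, the 0-4 scale, and the sample answers are hypothetical and are not the actual items or scoring of the test excerpted above.

```python
# A hypothetical scoring scheme: verbal answers become numbers, then a total score.
FREQUENCY_SCALE = {"never": 0, "rarely": 1, "sometimes": 2, "often": 3, "very often": 4}

def total_score(responses):
    """Convert each verbal answer to a number and add them up."""
    return sum(FREQUENCY_SCALE[answer] for answer in responses)

anger_regulation_answers = ["rarely", "sometimes", "never", "often"]
print(total_score(anger_regulation_answers))  # 1 + 2 + 0 + 3 = 6
```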
A good deal of work goes into test construction. First, the construct must be defined. What exactly are you trying to measure? Then, taking the most typical case of a self-report questionnaire, the items must be written and selected. Next, the test must be administered to several samples of people to demonstrate that it measures the intended construct consistently and accurately. Two critical concepts in test construction are reliability and validity.
The reliability of a test refers to its ability to measure a given trait consistently. If the outcome of a measure varies each time it is applied, the measure is not reliable. There are several forms of reliability, depending on the format and purpose of the test. Interitem consistency means that the individual items of a test are intercorrelated, that is, well related to each other. This form of reliability is used with questionnaires in which multiple items are used to rate one trait. Test-retest reliability measures how well an initial administration of a test correlates with a repeated administration. This is only useful if the trait measured is unlikely to change much over time. Interrater reliability is used with semi-structured questionnaires and other instruments in which the rater must use complex subjective judgments in the scoring. An instrument has interrater reliability when two or more raters rate the same material the same way.
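Two of these forms of reliability can be illustrated with a short sketch. Cronbach's alpha is a widely used index of interitem consistency, and test-retest reliability is commonly reported as the correlation between two administrations; the item scores and subjects below are hypothetical.

```python
# Interitem consistency (Cronbach's alpha) and test-retest reliability,
# computed on hypothetical data with Python's standard library (3.10+).
from statistics import variance, correlation

def cronbach_alpha(item_scores):
    """item_scores: one list per item, each holding that item's scores for the same subjects."""
    k = len(item_scores)
    totals = [sum(subject) for subject in zip(*item_scores)]  # each subject's total score
    return (k / (k - 1)) * (1 - sum(variance(item) for item in item_scores) / variance(totals))

# Three items rated by six subjects.
items = [
    [2, 3, 4, 4, 5, 1],
    [1, 3, 4, 5, 5, 2],
    [2, 2, 5, 4, 4, 1],
]
print(round(cronbach_alpha(items), 2))  # high alpha -> the items hang together

# Test-retest reliability: correlate the first administration with a repeat.
time1 = [10, 14, 9, 20, 13, 17]
time2 = [11, 13, 10, 19, 14, 16]
print(round(correlation(time1, time2), 2))  # close to 1.0 -> scores are stable over time
```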
The validity of a test reflects the degree to which it is measuring what it says it is measuring. Validity is often measured by correlation with a similar measure of the same construct. For example, a depression rating scale could be correlated with another questionnaire that measures depression. Differences across groups can also be used to establish validity. Does a group of depressed psychiatric inpatients score higher on the depression scale than a group of healthy subjects? For that matter, do the depressed patients score higher on the depression scale than a group of inpatients with schizophrenia? With convergent validity, measures of similar constructs will rate the same material similarly. Two measures of depression should be positively correlated. With divergent validity, measures of different constructs will rate the same material differently. A measure of depression should not be well correlated with a measure of happiness.
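In code, these two checks reduce to comparing correlations. The sketch below uses hypothetical scores: two depression scales that should converge, and an obviously unrelated characteristic that should diverge.

```python
# Convergent and divergent validity with hypothetical data.
from statistics import correlation  # Python 3.10+

depression_scale_a = [4, 12, 7, 18, 10, 15, 6, 20]
depression_scale_b = [5, 11, 8, 17, 9, 16, 7, 19]   # a second measure of the same construct
shoe_size          = [10, 10, 8, 10, 11, 9, 7, 7]   # an unrelated characteristic

print(round(correlation(depression_scale_a, depression_scale_b), 2))  # convergent: near +1
print(round(correlation(depression_scale_a, shoe_size), 2))           # divergent: near 0
```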
A test can be reliable without being valid. For example, a ruler is a reliable measure; it will always measure a given distance the same way. However, it is not a valid measure of depression, as its measurements, no matter how consistent, have no relationship to depression. Although a test can be reliable without being valid, it cannot be valid without being reliable. If a test is inconsistent in its measurements, we cannot say it is measuring what it is intended to measure and, therefore, it is considered invalid.
The Rorschach inkblot test is a well-known projective test. In fact, it was once so widely used that it was frequently portrayed in the popular media, often as a mysterious and somewhat menacing test that could magically see into people's souls. The Rorschach consists of ten cards with images of inkblots, some in black-and-white and some with color. These blots were created by Hermann Rorschach (1884–1922), who first published the test in 1921. Just as people see images in clouds, subjects see images in the inkblots, and they are asked to identify and describe these images. The responses are then coded for their content and form, which are seen as reflective of the subject's own mental processes. There are no set answers to this test; the subject must project his or her own thought processes onto the blot in order to make sense of it. The Rorschach is therefore called a projective test. Perhaps because Hermann Rorschach originally developed his test with psychiatric inpatients with schizophrenia, the test is particularly sensitive to psychotic thought processes.
Although numerous scoring systems for the Rorschach have been developed since its original publication, in its heyday in the mid-twentieth century the Rorschach was often interpreted arbitrarily, according to the whim of the clinician administering the test. Claims for the power of the Rorschach were also overblown and poorly supported by empirical research. Because of this, the Rorschach has been harshly criticized as unscientific. It was further disparaged because of its strong ties to psychoanalysis, a discipline also criticized as unscientific. Like the Rorschach, psychoanalysis involves the identification of emotional meaning in ostensibly neutral material.
This ink blot design is very similar to those used in Rorschach tests. As an experiment, try to see what images you can find in the ink blot. What part of the blot do you use? Are your images based on the form of the blot, or the white space? (iStock)
In 1974, John Exner published the Comprehensive System for the Rorschach, in which he reworked earlier scoring systems into a single, systematic approach. He also provided considerable empirical research for his results, showing respectable reliability and validity. His system has been revised and updated multiple times. While there are still criticisms leveled at Exner's approach, many of them legitimate, he undeniably provided a scientifically supported system with which to score the Rorschach.
The TAT is another projective test, almost as well known as the Rorschach. The TAT was developed by Henry Murray in 1938. It consists of 20 cards with evocative and ambiguous drawings involving one or more people. Usually, only ten cards are administered at a time. Subjects are asked to tell a story about what is happening in the picture, what led up to it, and what will happen afterward. Subjects are also asked to say what the characters are thinking and feeling. Because the images are ambiguous, the subjects’ stories will reveal their personal ways of processing interpersonal relationships. Unfortunately, the TAT has not had the benefit of a John Exner to develop a modern scoring system. Therefore, without a reliable and valid scoring system, the TAT can only be used qualitatively and only in conjunction with other scientifically supported tests.
The MMPI is one of the oldest and best-known self-report questionnaires. It measures various aspects of personality and psychopathology. The original version of the MMPI was developed in the 1940s. The second edition, known as the MMPI-2, is currently in use and was last revised in 1989. Eight basic syndrome scales are derived from 567 self-report items in a true/false format: Hypochondriasis, Depression, Hysteria, Psychopathic Deviancy, Paranoia, Psychasthenia, Schizophrenia, and Mania. Additional scales include Masculinity-Femininity, Social Introversion, and validity scales to assess response biases, such as under-reporting or over-reporting. There is also a shorter MMPI-A for adolescents.
An IQ test is a test of cognitive skills that produces an IQ score. This refers to an intelligence quotient, which is an estimate of general intelligence. IQ tests have multiple subtests to tap different kinds of intellectual skills, such as memory, vocabulary, reasoning, attention, and copying skills. Tests therefore can include lists of vocabulary words to define, arithmetic problems, or drawings to be copied. All subtests have both easy and hard items, and the items become more difficult as the test goes on. The score is based on the number of items answered correctly.
Test norms allow comparison of any individual's score with those of the general population. In other words, when a test is normed, it is possible to know the percentile rank of any given score, that is, the percentage of people who scored below it. In order to establish test norms, the test is administered to a large sample of people. The average (or mean) score and the standard deviation are then calculated. The standard deviation measures how much individual scores vary from the average score. Are all the scores clustered tightly around the mean, or are they spread out? If you know both the mean and the standard deviation of a test, you can determine the percentile rank of any score. Thus, IQ scores reflect a person's percentile rank according to the test's norms.
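If scores in the norming sample are roughly normally distributed, the percentile rank of any score follows directly from the mean and standard deviation. The sketch below uses a small, hypothetical norming sample to show the calculation.

```python
# A minimal sketch of converting a raw test score into a percentile rank,
# assuming the norming sample is approximately normally distributed.
from statistics import NormalDist, mean, stdev

norming_sample = [92, 105, 110, 98, 115, 101, 88, 120, 107, 95]  # hypothetical scores
norms = NormalDist(mu=mean(norming_sample), sigma=stdev(norming_sample))

def percentile_rank(score):
    """Percentage of the norming population expected to score below this score."""
    return 100 * norms.cdf(score)

print(round(percentile_rank(mean(norming_sample))))  # the mean falls at the 50th percentile
print(round(percentile_rank(120), 1))                # a high score falls near the top
```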
The Wechsler Adult Intelligence Scale (WAIS) is the most widely used intelligence test. The first WAIS was published in 1955. The WAIS-IV, published in 2008, produces a Full Scale IQ based on scores from ten core subtests: Vocabulary, Similarities, Information, Arithmetic, Digit Span, Block Design, Matrix Reasoning, Visual Puzzles, Digit Symbol, and Symbol Search. The five supplemental subtests are Comprehension, Letter-Number Sequencing, Picture Completion, Figure Weights, and Cancellation.
The WAIS subtests are grouped into four index scores, each measuring a specific cognitive skill. The Verbal Comprehension Index reflects the ability to express abstract ideas in words. The Perceptual Reasoning Index reflects the ability to process visual and spatial information; the Working Memory Index suggests the ability to hold and manipulate information in memory; and the Processing Speed Index indicates the ability to process information rapidly. These indices show that intelligence, as measured by the WAIS, has several very different components.
Whether any one test can measure a concept as complex as intelligence has been a topic of considerable controversy. What we do know is that the WAIS does a good job of measuring a range of cognitive skills that predict real-world indicators of intelligence, such as academic and occupational success. WAIS subtests are also well correlated with many other cognitive tests and with measures of brain activity. In other words, compared with those who have lower scores, people with high scores on the WAIS are more likely to perform well in school and in their work life and to score well on other tests of thinking ability. They are also more likely to show greater activity in the areas of the brain associated with complex thought.
The IQ, as measured by the WAIS, does a good job of measuring the kinds of cognitive skills that are useful for functioning in a complex, industrialized, modern society. These include abstract and verbal problem-solving skills and complex attention. The IQ gives a good general sense of the person’s overall intellectual performance. But when the data is interpreted, close attention must be paid to the subtests because an individual’s performance may vary widely, with very high scores on some tests and low scores on others. The IQ is also vulnerable to many cultural biases. The subtests and the functional indices are very useful, however, for providing a profile of an individual’s thought processes. This profile can be helpful in diagnosing various neurological or psychiatric conditions, such as dementia, depression, attention deficit disorder, or mental retardation. Thus, regardless of the person’s IQ score, the profile of subtests can be enormously helpful for clinical purposes.
There is some agreement that general intelligence does exist and that people vary in how much of it they have. Nonetheless, there is considerable disagreement as to exactly how to define intelligence. Loosely, we can define intelligence as the ability to process information in a way that allows individuals to adapt to their environment. This definition suggests, however, that intelligence may vary according to the environment. If you live in a hunting and gathering society, your intelligence will have nothing to do with your ability to read abstract philosophy texts and much more to do with your ability to interpret your natural surroundings. In fact, someone who scores very highly on the WAIS would probably perform very poorly if he or she were dropped into the Australian bush in the middle of the nineteenth century. Likewise, an Aboriginal Australian from the nineteenth century with no formal education would perform extremely poorly on the WAIS but would have an enormous store of knowledge and skills related to surviving in the bush. Because the nature of intelligence is inherently dependent on an individual's environment, there are chronic problems with cultural bias in intelligence tests. It is arguably impossible to design an intelligence test completely free from cultural bias.
While it is probably impossible to remove all cultural bias from IQ tests, there are ways to ensure that the test is relevant to as broad a sample of people as possible. This is especially important in highly diverse societies such as the United States. The WAIS-IV includes non-verbal tests such as Block Design and Matrix Reasoning that are not dependent on language and not too dependent on education. Further, the use of abstract, geometric shapes avoids culturally meaningful images. It is also important to exclude items that depend on knowledge that is relevant to only a small percentage of the population. For example, early intelligence tests included items on the make and model of specific cars, which would only be relevant to people who drive and who care about cars. Another important way to reduce cultural bias is to provide norms for different segments of the population. The WAIS-IV includes norms for different age groups and many other cognitive tests provide separate norms for people with different levels of education. Finally, translation of tests into several languages is also very important.
Some people question the usefulness of IQ tests because it is difficult to create a test that is not skewed somewhat by cultural biases (iStock).
If we define intelligence as information processing that allows us to adapt to our environment, then the WAIS only taps a narrow range of such skills. Howard Gardner has argued against the idea of a single, unitary intelligence, proposing instead the existence of multiple intelligences, including forms based in the body and social and emotional forms of intelligence. Similarly, Daniel Goleman has written extensively about emotional intelligence, the ability to process emotional and interpersonal information effectively. Folk psychology speaks about street smarts, political acumen, business smarts, mechanical aptitude, and even common sense. None of these are directly measured on the WAIS, although we would assume the visual-spatial tests have some relationship to mechanical aptitude. We do know, however, that people with very low intellectual skills, as measured by tests such as the WAIS, have significantly reduced interpersonal and self-care skills. On the other hand, we probably all know people with very high IQs who are sorely lacking in emotional and interpersonal skills and even common sense. Therefore we can conclude that the WAIS measures some aspects of intelligence that are related, but not identical, to other aspects of intelligence.
In 1917, immediately after the United States entered World War I, the American Psychological Association (APA) convened a committee to consider how best to contribute to the war effort. The committee concluded that the development of an intelligence test that could be administered to large groups would be most useful. Potential soldiers falling below a cut-off point would be excluded from the military, while high scorers could be selected for elite positions.
Under the guidance of Robert Yerkes, a Harvard psychologist and army major, the Army Alpha, a written test, and the Army Beta, a pictorial version for the 40 percent of soldiers unable to read the written test, were developed. These tests had broad impact on the discharge and promotion of soldiers. The use of such tests in WWI spawned an explosion of intelligence and aptitude tests after the war to be used in schools, the military, and other institutions.
Criticism of cultural bias soon followed, with complaints that the content of the Army tests favored affluent native-born Americans over less privileged immigrants, who could not be expected to know, for example, the engines of different luxury cars or the layout of a tennis court. Further, many questions were moralistic, as if disagreement with Anglo-American values reflected lower intelligence. Despite these very legitimate complaints, it must be kept in mind that intelligence tests aimed for a merit-based approach to job placement. In this way, the army at least tried to be more democratic than the explicitly prejudiced, family- and class-based approaches to employment that were typically used before. Today’s intelligence and aptitude tests aim for much greater cultural sensitivity. Nonetheless, it is arguably impossible to develop a test that is completely culture-neutral.
Francis Galton (1822–1911), the father of eugenics, was one of the first scientists to study individual differences in intelligence. He presumed such differences were inherited, or what we would now call genetic, and he aimed to separate the most intelligent individuals from the least in the interest of selective breeding. In keeping with Wilhelm Wundt's studies of sensation and perception, his initial intelligence tests comprised various measures of hand grip, reaction time to sensory stimuli, and other sensory-motor skills. James McKeen Cattell (1860–1944) carried this work forward and developed an intelligence test based on Galton's work. In his position as professor of psychology at Columbia University, he administered his test to hundreds of college freshmen. (Perhaps this was the beginning of a long tradition of using college freshmen in psychological research.)
By 1901, he had sufficient data to correlate students' grades with their intelligence test results. To his great disappointment, there was no relationship at all between the two variables. We might attribute these negative findings to two factors: a lack of construct validity, in that psychophysical measures such as grip strength and reaction time have little to do with academic performance, and restriction of range, in that freshmen at an elite university do not vary much in intelligence, so any true correlation is attenuated.
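Restriction of range can be demonstrated with a small simulation. In this sketch, the underlying relationship between an aptitude score and grades is held constant, but the correlation computed on a narrow, high-scoring slice of the sample comes out noticeably weaker; all numbers are hypothetical.

```python
# Restriction of range: the same relationship looks weaker in a narrow subsample.
import random
from statistics import correlation  # Python 3.10+

random.seed(1)

aptitude = [random.gauss(100, 15) for _ in range(5000)]
grades = [0.05 * a + random.gauss(0, 0.6) for a in aptitude]  # grades depend partly on aptitude

print(round(correlation(aptitude, grades), 2))  # full range: a clear positive correlation

# Keep only the high-aptitude "admitted" students and recompute.
admitted = [(a, g) for a, g in zip(aptitude, grades) if a > 115]
apt_sub, grades_sub = (list(t) for t in zip(*admitted))
print(round(correlation(apt_sub, grades_sub), 2))  # restricted range: noticeably weaker
```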
Alfred Binet (1857–1911), a French psychologist, furthered the work of Galton and Cattell with his concept of mental age. While observing his own children develop new cognitive skills as they grew, Binet recognized that intelligence could be measured developmentally. By comparing the test performance of a child with the age at which such performance was expected, he could calculate a mental age for each child.
Influenced by a mandate from the French government addressing the needs of mentally retarded children, Binet and his colleague Théodore Simon decided to develop a test capable of distinguishing mentally retarded children from those of normal intelligence. They did this through multiple administrations and refinements of their measure, giving the test both to children of normal intelligence and those identified as mentally retarded. The first version of their test was published in 1905, with several revisions following in quick succession. By providing the expected scores for each age in the 1908 edition, Binet and Simon created the first empirically validated, standardized test. Within a few years, the Binet-Simon test had spread to countries on five continents.
Lewis M. Terman (1877–1956) at Stanford University revised and refined the Binet-Simon test to increase its sensitivity at the higher end of the scale. The Stanford-Binet test, published in 1916, was the first test to use IQ scores. An IQ score (or intelligence quotient) locates an individual's performance relative to a large normative sample. On modern tests, the mean IQ score is set at 100 and the standard deviation at 15. By translating raw scores into IQ scores, the percentile rank of each score can be calculated. For example, an IQ score of 100 falls at the 50th percentile, a score of 70 at roughly the 2nd percentile, and a score of 130 at roughly the 98th percentile. The Stanford-Binet was the primary IQ test used for many decades and is currently in its fifth edition. In 1955, the year before Terman died, David Wechsler published the Wechsler Adult Intelligence Scale (WAIS), which is now the more widely used IQ test.
The problem with Galton's approach was an utter lack of relationship between outside indications of intelligence, such as school performance, and the measures he used. His tests had more to do with physical coordination and strength than with intelligence and, as such, had no construct validity. Later tests also had problems with validity, but these were more subtle. For the most part, they were extremely culturally biased, serving the anti-immigrant bias of the first several decades of the twentieth century. Here the biggest problem was generalizability, meaning the applicability of the test to a larger population. There was no consideration of English-speaking ability or of culturally relevant knowledge. Some items only measured knowledge available to wealthy, English-speaking, native-born Americans. Other items measured moral values more than strict intellectual skills. Later IQ tests addressed these problems by including non-verbal tests, considering cultural relevance when selecting items, and basing test norms on samples carefully constructed to match the demographics of the United States.