David F. Bjorklund, Carlos Hernández Blasi, and Bruce J. Ellis
Developmental psychology is concerned with changes in behavior, emotions, and cognition over the lifespan of an individual. Our ancient ancestors also developed, and features of infants, children, and adolescents, as well as the course of human ontogeny itself, have been shaped by the forces of natural selection as surely as have features of adulthood (Bjorklund & Ellis, 2014). This evolutionary perspective has been increasingly recognized by developmental psychologists, despite strong criticism of evolutionary-psychological viewpoints by some in the field (e.g., Spencer et al., 2009). For example, special issues devoted to topics in evolutionary-developmental psychology have been published in many journals, including Developmental Psychology, Development and Psychopathology, Journal of Experimental Child Psychology, Developmental Review, Anuario de Psicología, Psicothema, Infancia y Aprendizaje, and Evolutionary Psychology. Edited volumes and book-length monographs have appeared on the topic, both in the professional literature (e.g., Burgess & MacDonald, 2005; Ellis & Bjorklund, 2005; Tomasello, 2009) and the popular press (e.g., Bjorklund, 2007; Principe, 2011). And most child-development textbooks now include at least a discussion of evolutionary theory, with several including an explicit evolutionary perspective (Bjorklund & Hernández Blasi, 2012; Smith, Cowie, & Blades, 2011).
Although many developmental psychologists seem to have discovered evolution, many mainstream evolutionary psychologists have yet to discover development. Darwin's main thesis was that individuals who were better adapted to local environments were more likely than less fit individuals to survive and reproduce and to pass those features associated with “success” to their offspring. From this perspective, it is understandable that evolutionarily-minded psychologists would focus on the adaptations of adulthood, where “the real show of humanity” emerges. However, to assume that natural selection plays its trump cards only during adulthood is to ignore the perilous trials and tribulations involved in reaching maturity and the role that natural selection must have played in solving the problems of surviving the early stages of life.
In this chapter, we explore the field of evolutionary developmental psychology, the application of the basic principles of evolution to explain contemporary human development. It involves the study of the genetic and environmental mechanisms that underlie the universal development of social and cognitive competencies and the evolved epigenetic (gene-environment interactions) processes that adapt these competencies to local conditions (Bjorklund & Ellis, 2014; Bjorklund & Pellegrini, 2002; Geary & Bjorklund, 2000). We present some of the basic assumptions of evolutionary developmental psychology along with a sample of some of the research findings in this rapidly growing discipline. We argue that “development matters” and that mainstream evolutionary psychology can benefit from adopting a developmental perspective. Table 38.1 presents some of the basic assumptions of evolutionary developmental psychology, many of which are discussed in the remainder of this chapter.
Table 38.1 Some Assumptions of Evolutionary Developmental Psychology
Source: Adapted from Bjorklund and Pellegrini, 2002; Hernández Blasi and Bjorklund, 2003.
From an evolutionary perspective, child development might be described as a bridge joining two shores: conception and reproductive age. In terms of survival, crossing it is a must, not an option. In humans, such a bridge is particularly long, taking about 15 years to traverse (Poirer & Smith, 1974); it requires considerable resources in terms of parental and in-group investment (Hrdy, 2009; Trivers, 1972); and it is associated (or was for our ancestors) with increased risk of dying before reproducing relative to species with a less-prolonged developmental pathway.
Fetal development, infancy, and childhood are not for the faint of heart. In fact, the challenge to stay alive begins shortly after conception. A fertilized egg has only about a 40% chance of surviving the first 6 weeks of life (Wang et al., 2003), after which its chances of making it to birth improve greatly. Although the probability of dying prior to adolescence in developed countries is less than 1%, that rate is closer to 50% in traditional cultures today and for all human cultures in the not-too-distant past, and was at least as high for our hunter-gatherer ancestors (Volk & Atkinson, 2013). Thus, selection is strong for traits that promote survival in young animals. The mortality bottleneck of infancy and childhood has served to maintain those characteristics that promote survival and reproductive success, and adaptations whose sole purpose is to ensure successful passage through childhood will be retained.
Animals inherit not only a species-typical genome but also a species-typical (adaptively relevant) environment. As discussed by Tooby and Cosmides (1990), this environment “is not a place or a habitat, or even a time period. Rather, it is a statistical composite of the adaptation-relevant properties of the ancestral environments encountered by members of ancestral populations, weighted by their frequency and fitness-consequences” (pp. 386–387). These adaptively relevant environments start at conception, with the cellular machinery in the zygote (inherited directly from the mother). They continue prenatally, with mammals, for example, developing in a womb and being nourished through the placenta, and they persist postnatally, including a lactating mother in mammals, intestinal bacteria, and a social structure for many species that may include parental care. In addition, a species-typical environment includes certain characteristics of the physical surroundings, such as light, gravity, and air, among many others.
To the extent that individuals grow up in environments similar to those of their ancestors, development should follow a species-typical pattern. Although most animals (including humans) did not evolve in a single, narrowly defined environment, they have evolved to “expect” certain adaptively relevant experiences. For humans, this would include 9 months in a sheltered womb; a lactating, warm, and affectionate mother (though variation in parental investment is part of the adaptive landscape); kin to provide additional support; peer groups; and so forth.
As an example of the coordination between a species-typical genome and a species-typical environment, consider Turkewitz and Kenny's (1982) proposal that the maturation of the various sensory systems is coordinated with perceptual experience, so that early-developing systems (e.g., audition) do not compete for neurons with later-developing systems (e.g., vision). This is nicely demonstrated in research that altered the perceptual experiences of precocial birds (e.g., ducks, bobwhite quail) while still in the egg, either depriving them of expected sensory stimulation (e.g., preventing ducks from hearing conspecific vocalizations; Gottlieb, 1976) or providing earlier-than-usual perceptual experience (e.g., patterned light while still in the egg; Lickliter, 1990), and assessed its consequences on species-typical behavior after hatching. In general, species-atypical sensory experiences produce species-atypical post-hatching behavior. For example, several days before hatching, Lickliter (1990) removed the shell over the heads of bobwhite quail embryos and presented them with patterned light, something they would not normally experience until after hatching. When subsequently tested for auditory imprinting, the birds exposed to visual stimulation while still in the egg failed to approach the maternal bobwhite quail call and were just as likely to approach the call of a chicken, whereas control bobwhite quail chicks that had the ends of their shells removed but were not exposed to patterned light consistently approached their species' maternal call. This and other studies (see Bjorklund, 1997, for a review) demonstrate that when animals receive species-atypical patterns of stimulation, the choreographed dance between gene-influenced neural maturation and perceptual experience is interrupted, disrupting the typical course of development.
Due to the highly structured organism-environment relationship during the course of ontogeny, humans are “prepared” by evolution to process some types of information more readily than others (language and faces, for instance, compared to numerals or written words); they are constrained in how they make sense of their world, with such constraints making it easier to process certain types of information (enabling constraints; Gelman & Williams, 1998). Such constraints are the result of selectively structured gene × environment × development interactions that emerge in each generation, are influenced by prenatal as well as postnatal environments, and reflect the inheritance of developmental systems, not just genes. Consistent with this idea is the concept of evolved probabilistic cognitive mechanisms. These are:
information-processing mechanisms that have evolved to solve recurrent problems faced by ancestral populations; however, they are expressed in a probabilistic fashion in each individual in a generation, based on the continuous and bidirectional interaction over time at all levels of organization, from the genetic through the cultural. These mechanisms are universal, in that they will develop in a species-typical manner when an individual experiences a species-typical environment over the course of ontogeny. (Bjorklund, Ellis, & Rosenberg, 2007, p. 22)
Evolved probabilistic cognitive mechanisms are reflected in the phenomenon of perceptual narrowing. For example, faces have special processing priority for people, as they should, being perhaps the most socially significant stimulus in one's environment. In fact, adults process upright and inverted faces differently, as reflected by differences in speed of processing and patterns of brain activation. However, they show this pattern only to faces of conspecifics; when shown upright and inverted monkey faces, they process them similarly, reflecting a species-specific bias. A similar bias is found for 9-month-old infants (Pascalis, de Haan, & Nelson, 2002). This bias is not observed, however, in newborns (Di Giorgio, Leo, Pascalis, & Simion, 2012) and 6-month-old infants (e.g., Pascalis et al., 2002), who process upright and inverted faces differently for both fellow humans and monkeys. This pattern suggests that cortical processing of human faces becomes more specialized with age and experience. Based on these and related findings, Pascalis et al. proposed “that the ability to perceive faces narrows with development, due in large measure to the cortical specialization that occurs with experience viewing faces. In this view, the sensitivity of the face recognition system to differences in identity among the faces of one's own species will increase with age and with experience in processing those faces” (p. 1321). These findings are consistent with the position that human infants are born with perceptual constraints and that the resulting biases become modified with experience.
In order to traverse the bridge between conception and reproductive age, natural selection has shaped a series of adaptations of infants, children, and adolescents, some specific to the early stages of development. In this section, we discuss two broad types of adaptations of infancy, childhood, and adolescence: deferred and ontogenetic. A third type, conditional adaptations, will be discussed separately in the section entitled “Developmental Plasticity and Adaptive Individual Differences.”
Developmental psychologists usually make the implicit assumption that experiences in infancy and childhood serve as preparations for adulthood (e.g., learning the conventions of one's social group). In fact, some aspects of infancy and childhood that play this role may have been selected over the course of evolution, referred to as deferred adaptations (Hernández Blasi & Bjorklund, 2003). Such adaptations likely function throughout life, adapting children to the niche of childhood, but also preparing them for the life they will likely lead as adults. This is most apt to occur when ecological or social conditions remain relatively stable over time, as would likely be the case, for example, of children from hunter-gatherer groups interacting with the same set of peers both as juveniles and as adults.
Some sex differences are good candidates for deferred adaptations. Males and females have different self-interests, often focused around mating and parenting. Following parental investment theory (Trivers, 1972), females of most mammals invest more in offspring than do males, and, as a consequence, are more cautious in selecting a mate and consenting to sex than are males. Males, as the less-investing sex, tend to compete more vigorously over access to females than vice versa. As a result, men and women have evolved different psychologies, which develop over the course of childhood. Many experiences during childhood seem to promote and even exaggerate these sex differences (e.g., play styles), serving to prepare boys and girls for the roles they will play (or would have played in the environment of evolutionary adaptedness) as adults.
Sex differences in play serve as good examples. Although there is no type of play that is the exclusive purview of one sex or the other, boys and girls show different patterns and styles of the major types of play, and some theorists have argued that such sex-differentiated play served to prepare children for adult roles in ancient environments (Geary, 2010). For example, rough-and-tumble play (R&T) is observed in most mammals and usually accounts for about 10% of their time and energy budgets (Fagen, 1981). Males engage in R&T more frequently than females in all human cultures and in many mammal species. Some have argued (Geary, 2010; Smith, 1982) that R&T is a classic example of play serving deferred benefits to juvenile males, especially in terms of practice for adult fighting skills, important in traditional environments. Boys' position in a social hierarchy is more often based on physical skills than is girls' (Hawley, 1999), and the high incidence of R&T among boys may facilitate their ability to encode and decode social signals (Pellegrini & Smith, 1998), which is important at all stages of life.
Sex differences are also found in fantasy play, although less in the frequency in which boys and girls engage in such play, and more in the content of their pretending. For example, beginning around age 6, girls engage in more play parenting than boys (see Geary, 2010). This pattern is seen across cohorts in the United States (Geary, 2010) and in traditional cultures (Eibl-Eibesfeldt, 1989), making it unlikely that it reflects recent Western social norms. From infancy, girls are more socially responsive than boys (e.g., Zahn-Waxler, Radke-Yarrow, Wagner, & Chapman, 1992), and social responsivity involves paying attention to family roles and relationships. Such play might have prepared girls to perform the traditional roles that women played over our species' evolutionary history (and continue to play in most cultures today). In contrast, boys' fantasy play is more likely to focus on aggression, power, and dominance and is often part of R&T. When dolls are used in boys' play, they more typically serve the role of combatants rather than nurturers. Thus, the patterns of fantasy play displayed by boys and girls can be viewed as antecedents for the roles (e.g., parenting, male–male competition) they will have as adults, or would have had in ancestral environments (see Pellegrini & Bjorklund, 2004).
As children grow into adolescents, the distinction between the immediate and deferred benefits of adaptations becomes increasingly blurred, as adolescents begin to “try out” adult behaviors. From an evolutionary perspective, a major function of adolescence is to attain reproductive status. Both sexual promiscuity and the intensity of sexual competition peak during adolescence and early adulthood (Weisfeld, 1999), when most people have not yet found a stable partner and the mating market is maximally open. Indeed, an important function of self-organized peer groups in adolescence may be to position oneself in a social context to be sexually active, pulling away from adult supervision and engaging in reinforcing activities with peers (Dishion, Ha, & Véronneau, 2012). To achieve success at the critical adolescent transition, natural selection has favored a coordinated suite of rapid, punctuated changes—puberty—across multiple developmental domains, including new drives and motivations and a wide array of social, behavioral, and affective changes (Table 38.2). These puberty-specific processes function to build reproductive capacity and increase sociocompetitive competencies in boys and girls (Ellis, Del Giudice, et al., 2012).
Table 38.2 Puberty-Specific Morphological and Biobehavioral Changes (Independent of Age)
Source: Adapted from Ellis, Del Giudice, et al., 2012; see supporting citations therein.
Heightened sexual desire increases motivation to pursue, attract, and maintain mating relationships. Increased sensation seeking and emotional responsivity promote novelty seeking and exploration and may increase pursuit of socially mediated rewards. Higher levels of aggression and social dominance both facilitate and reflect the higher-stake competition that is occurring in adolescence over sex, status, and social alliances. Delinquent and risky behaviors (e.g., crime, fighting, reckless driving, drinking games) often signal bravery and toughness and can leverage position in dominance hierarchies, especially for males. Increasing levels of anxiety and depression in girls may reflect heightened sensitivity to negative social evaluations at a critical time for alliance formation.
The peak in these high-risk, high-stakes behaviors during adolescence suggests that this phase of the lifespan had substantial effects on fitness over human evolutionary history and, therefore, underwent strong selection. Ellis, Del Giudice, et al. (2012) have hypothesized that natural selection favored especially strong emotional and behavioral responses to social successes and failures during the adolescent transition, including heightened reactivity to peers. This hypothesis is consistent with fMRI data showing that in adolescents, but not adults, the presence of peers during a simulated driving task amplifies activity in reward-related brain regions, including the ventral striatum and orbitofrontal cortex (Chein, Albert, O'Brien, Uckert, & Steinberg, 2011). This heightened brain activity predicts subsequent risky decision making while driving. In total, there may be an evolved nexus between the adolescent brain's incentive-processing system, peer contexts, and risky behavior. At the same time, however, adolescence is a key period of opportunity to impact developmental trajectories in positive directions. It is a time when youth develop healthy habits, interests, skills, and inclinations and align their motivations and aspirations toward positive goals (Dahl, 2004).
Not all aspects of childhood serve to prepare individuals for life as an adult. Many features of infancy and childhood serve to adapt individuals to their current environment, and not to an anticipated future one. These have been referred to as ontogenetic adaptations (Bjorklund, 1997; Oppenheim, 1981) and can be easily recognized in some prenatal mechanisms in mammals and birds. For example, before birth, fetal mammals get their nutrition and oxygen through the placenta, but immediately after birth these systems become obsolete and infants must eat and breathe on their own. These are not immature forms of adult adaptations that become gradually shaped to mature forms, but are structures or mechanisms that have a specific function at a particular time in development and are discarded when they are no longer necessary.
Such adaptations are not limited to the prenatal period, nor to mechanisms associated with physiological functioning, but may also be found in infant and child behavior and cognition. For instance, Bjorklund (1987) proposed that newborns' tendency to imitate facial gestures (e.g., tongue protrusion) may be an ontogenetic adaptation. Meltzoff and Moore (1977) argued that such neonatal imitation reflects “true” imitation and involves the same underlying cognitive mechanisms as does the imitation seen more readily in older infants. Yet, imitation of the most frequently studied gesture of tongue protrusion can be elicited by a looming pen, small ball, or flashing stimuli (Jacobson, 1979; Jones, 1996; Legerstee, 1991), declines to chance levels by 2 months of age (Jacobson, 1979), and is not reliably seen again until about 10 to 12 months. One interpretation of this decline is that neonatal imitation serves a different function than does imitation in older infants. For example, it may be functional in nursing (Jacobson, 1979), serve as a form of prelinguistic communication (Legerstee, 1991), or facilitate mother–infant interaction at a time when infants cannot easily control their head movements and gaze in response to social cues (Bjorklund, 1987; Byrne, 2005). Byrne (2005) proposed that such matching behavior helps the neonate to stay “in tune” with his or her mother, fostering and consolidating the social interaction. Consistent with this hypothesis, Heimann (1989) reported that infants who displayed high levels of imitation as neonates later showed greater levels of social interactions with their mothers at 3 months of age. These findings are consistent with the interpretation that neonatal imitation has a specific function at that time in development only—fostering mother–infant communication and social relations—and when infants are better able to control their own social and communicative behaviors, it disappears. 
Although neonatal imitation has a surface structure similar to the imitation seen in older infants, the two behaviors serve different functions (fostering mother–infant interaction versus social learning, respectively) and were presumably selected for these specific functions over evolutionary time.
Other examples of ontogenetic adaptations can be found in the behavior of older children. For example, earlier we proposed that aspects of children's play serve to prepare them for adult roles in traditional cultures. In addition to these preparatory roles, play may also serve more immediate functions. For example, rough-and-tumble play may serve as a way for children to learn and practice social signaling, with exaggerated movements and a play face indicating playful intent (Pellegrini & Smith, 1998). Such play also provides opportunities for vigorous exercise, important in skeletal and muscle development (Pellegrini & Smith, 1998).
Natural selection provided children with sets of adaptations to cross the bridge connecting conception to young adulthood. Some of these adaptations were unique to particular times in development, adapting the young organism to the niche of childhood and disappearing when they were no longer needed. Others served not only to adapt children to current environments but also to prepare them for future ones. Although many of these adaptations served to facilitate children's understanding of the physical world (e.g., folk physics; Geary, 2005; Spelke & Kinzler, 2007), perhaps the most important adaptations, from the perspective of the evolution of Homo sapiens, concerned navigating the social world, which we examine briefly in the next section.
There has been no lack of proposals about the pressures most responsible for the evolution of human intelligence. The currently popular social brain hypothesis focuses on the complex social environment that humans and our ancestors lived in and proposes that it was the need to deal with conspecifics that, more than any other single force, was the primary selective pressure in the evolution of the modern human mind (Dunbar, 2003). Human social complexity is also associated with a large brain and an extended juvenile period, and it was the confluence of these three factors, we propose, acting synergistically, that produced the human mind (e.g., Bjorklund & Pellegrini, 2002). In fact, several theorists have proposed that humans uniquely evolved (or at least greatly expanded) new developmental stages, specifically childhood (about 2 to 6 years in humans) and adolescence (Bogin, 2001), and that the stage of childhood was necessary for the evolution of advanced forms of social learning (Nielsen, 2012). An extended childhood provides more time for brain development and increased opportunity to learn the social norms and complexities of one's group prior to reproducing. A sophisticated intelligence is required to deal with problems of relating to conspecifics, and the skills needed to traverse the social landscape take a long time to acquire. At the transition to the reproductive phase of the human lifespan, individuals who have better mastered their social world reap the benefits in terms of increased access to resources and mating opportunities.
Human infants and young children have a suite of adaptations devoted to orienting to and processing social stimuli, some of which they share with other animals, and others that seem to be unique to or exceptionally developed in humans, and we examine several of them here.
Human infants' orientation to social stimuli begins at birth. For example, neonates preferentially look at lights depicting biological motion (Bardi, Regolin, & Simion, 2011), selectively attend to face-like stimuli (Mondloch et al., 1999), and look longer at the faces of their mothers than those of other women (Bushnell, Sai, & Mullin, 1989). Young infants are particularly attentive to eyes, especially a direct gaze (Farroni, Csibra, Simion, & Johnson, 2002).
Beyond the neonatal period, infants are able to exert greater intentional control of their actions, as neural control shifts from subcortical to cortical brain areas (Nagy, 2006). Infants become able to engage in sustained eye contact and social smiling, the latter of which is not frequently and unambiguously seen until about 3 months (Reilly, Harrison, & Klima, 1995). These positive social cues are seen universally and were described by the ethologist Eibl-Eibesfeldt (1970) as “flirting.” Such behaviors promote repeated social interaction with caretakers, fostering infant–mother attachment and thus survival. These cues also serve as reinforcements to caregivers, promoting a mother's feeling of competence, which may serve to increase the quantity and quality of maternal care infants receive (see Murray & Trevarthen, 1986).
Despite infants' improved abilities to facilitate social interaction with their caregivers, human social interaction requires, at its most basic, the ability to view other people as intentional agents—individuals who cause things to happen and whose behavior is designed to achieve some goal (Tomasello, 1999). Although infants are highly attentive to social stimuli from birth, it is not until the latter part of the first year that they seem to appreciate that other people behave in purposive ways. This is seen in shared attention, which involves a triadic interaction between the infant, another person, and an object, such as when a parent points to an object for the infant's attention (Carpenter, Akhtar, & Tomasello, 1998; Tomasello & Carpenter, 2007). Although parents begin engaging in shared attention early on, infants only begin to hold up their end of the shared interaction beginning around 9 months of age, when they look in the direction adults are looking or pointing, engage in repetitive give-and-take with an adult and an object, and point or hold up objects for another person to see (Carpenter et al., 1998). These abilities continue to improve over the next year (Tomasello, 1999; Tomasello, Carpenter, & Liszkowski, 2007). Although the responsiveness of the caregiver influences infants' shared attention (Deák, Walden, Kaiser, & Lewis, 2008), shared attention is highly canalized and is expressed similarly in diverse cultures (Callaghan et al., 2011).
On the surface, sharing a perceptual experience does not seem to be a great cognitive accomplishment, but it is one that is seemingly not observed in the great apes. For instance, although chimpanzees point to objects in some contexts (Leavens, Hopkins, & Bard, 2005) and will follow the gaze of another in other contexts (Bräuer, Call, & Tomasello, 2005), most researchers concur that there is little evidence of shared attention in great apes (Herrmann, Call, Hernández-Lloreda, Hare, & Tomasello, 2007; Russell, Lyn, Schaeffer, & Hopkins, 2011; but see Leavens et al., 2005), with the exception of enculturated apes that have been reared much as human children are (for a review, see Bjorklund, Causey, & Periss, 2010).
The ability to view others as intentional agents plays an important role in learning from one another. Although chimpanzees and the other great apes display impressive social-learning abilities, permitting the transmission of nongenetic information across generations (Whiten et al., 1999), the fidelity with which humans achieve this is unmatched in the animal world and is afforded by social-learning abilities that develop over an extended prereproductive period. In fact, an extended juvenile period is necessary not only for understanding the intricacies of human social relations and organization, but also for mastering the products that result from complex human culture. As new ways of thinking about fellow members of our species evolved, they resulted in new or more effective ways of transmitting information between individuals and generations. These new forms of social learning led to new technologies that no longer needed to be discovered or invented anew by each generation, but could be taught or acquired via observation. As the contents and complexity of culture increased, each generation had more to learn than the previous generation about dealing with their physical and social environments, requiring an extended childhood to master them (Nielsen, 2012).
Although there are many aspects of children's developing social learning abilities, one that has caught the attention of evolutionarily-minded psychologists is overimitation—the copying of all components of a model's behavior, even those not relevant to solving the task. Most 2-year-old children, like most chimpanzees, will copy only the relevant actions of a model or will sometimes use means not demonstrated by a model to achieve a goal (termed emulation). However, beginning at about 3 years of age, children frequently copy all behaviors of a model, even those that are clearly irrelevant to solving a task (Lyons, Young, & Keil, 2007; Nielsen, 2006). Such overimitation is not limited to Western cultures but has been observed in 2- to 6-year-old Kalahari Bushman children (Nielsen & Tomaselli, 2010) and persists into adulthood (McGuigan, Makinson, & Whiten, 2011), and its prevalence and persistence have resulted in some researchers proposing that it reflects an evolved adaptation (Csibra & Gergely, 2011; Whiten, McGuigan, Marshall-Pescini, & Hopper, 2009). Children seem to believe that the actions of a model are normative, correcting a puppet, for example, that failed to copy irrelevant actions of a model (Kenward, 2012). There is no evidence of overimitation in chimpanzees (see Nielsen, 2012).
Overimitation may be especially adaptive for human children, who must learn to use thousands of artifacts. An economical way to learn to use cultural inventions might be to assume that all modeled behaviors related to the artifact are relevant. Although this will result in the acquisition of some irrelevant behaviors, these can be “weeded out” with individual learning.
Humans are not just a social species, but a prosocial one, engaging in behaviors that benefit other group members. Prosociality begins early in life and is as much a part of human-typical behavior as aggression, preference for attractive mates, and endearing feelings toward baby-faced infants. Tomasello (2009) proposed several reasons to believe that prosociality is part of humans' evolved nature, among them that it is mediated by empathy, it is observed relatively early in development, it is not increased by parental rewards, and rudiments of such behavior are seen in humans' closest living relatives, chimpanzees.
One early demonstration of prosocial behavior is helping. For example, in one experiment, 18- and 24-month-old toddlers sat across from an adult who was having difficulty performing a task (Warneken & Tomasello, 2006). For instance, in a “reaching” task, a person accidentally dropped an object on the floor (e.g., a marker) and reached unsuccessfully for it. This was contrasted with a control condition in which the person intentionally threw the marker on the floor. In a “wrong-result” task, a book slipped off a stack of books as the person attempted to place it on top, versus a control condition in which the person placed the book beside the stack. The children helped the adult (e.g., retrieved the marker, placed the book on top of the stack) more in the experimental than the control condition on 6 of 10 tasks. In other research, 2-year-olds demonstrated the same sympathetic arousal when they helped a person as when they watched a person being helped by a third party, suggesting that from an early age children have a genuine concern for the welfare of others (Hepach, Vaish, & Tomasello, 2012).
Enculturated (human-reared) chimpanzees also provided “help” in some of these contexts, although only when the adult was reaching unsuccessfully for an object, not for other types of tasks (Warneken & Tomasello, 2006). Evidence of helping using a similar procedure was reported for a group of semi-free-ranging and nonenculturated chimpanzees (Warneken, Hare, Melis, Hanus, & Tomasello, 2007). Thus, although human preschool children generally display greater prosociality than chimpanzees, Warneken et al. (2007, p. 1418) concluded that “the altruistic tendency seen in early human ontogeny did not evolve in humans de novo. The roots of human altruism may go deeper than previously thought, reaching as far back as the last common ancestor of humans and chimpanzees.”
In addition to species-typical developmental adaptations, natural selection maintains individual differences in developmental processes; indeed, theory and research in evolutionary biology have acknowledged that in most species, single “best” strategies for survival and reproduction are unlikely to evolve. Instead, the locally optimal strategy varies as a function of three overarching parameters. First, the expected costs and benefits of different strategies depend on the physical, economic, and social parameters of an organism's environment (e.g., food availability, mortality rates, quality of parental investment, social competition). This context dependency means that a strategy that promotes success in some environments may lead to failure in others. Second, the success and failure of different strategies depends on an organism's relative competitive abilities in the population (e.g., age, body size, health, history of wins and losses in agonistic encounters). Third, an organism's sex often has important implications for the range of available strategies and their relative costs and benefits.
In this section, we discuss how developmental processes increase adaptation by matching an organism's phenotype to local environmental conditions and individual characteristics. We begin by reviewing the general concepts of plasticity and conditional adaptation. We then introduce life history theory and show how it provides a general framework for adaptive plasticity, as well as an integrative understanding of the development of individual differences in physiology, growth, and behavior.
Because the viability of different survival and reproductive strategies is so context and condition dependent, natural selection tends to maintain adaptive developmental plasticity: biological systems that reliably guide the development of alternative phenotypes (including anatomy, physiology, and behavior) to match an organism's internal condition and external environments (see West-Eberhard, 2003). Importantly, adaptive developmental plasticity is a nonrandom process; it is the outcome of structured interplay between the organism and its environment, shaped by natural selection to increase the capacity and tendency of individuals to track both their internal condition and external environments and adjust the development of their phenotypes accordingly. Developmental plasticity is ubiquitous throughout the animal world (see reviews in DeWitt & Scheiner, 2004; West-Eberhard, 2003).
Developmental plasticity is critically important for enabling organisms to adapt to stress, which has always been part of the human experience. From an evolutionary-developmental perspective, stressful rearing conditions, even when they engender stress responses that must be sustained over time, should not so much impair neurobiological systems as direct or regulate them toward patterns of functioning that are adaptive under stressful conditions (see Ellis et al., 2012; Frankenhuis & de Weerth, 2013).
Developmental plasticity involves durable change and is therefore inherently forward-looking; that is, it involves predicting, and preparing for, future experiences. Boyce and Ellis (2005) make this explicit in their definition of conditional adaptation: “evolved mechanisms that detect and respond to specific features of childhood environments, features that have proven reliable over evolutionary time in predicting the nature of the social and physical world into which children will mature, and entrain developmental pathways that reliably matched those features during a species' natural selective history” (p. 290). During fetal development and infancy, important features of the environment are communicated to the child via the placenta and lactation, in the form of nutrients, metabolites, hormones, growth factors, and immune factors that reflect the mother's current and past experiences (Kuzawa & Quinn, 2009). Beyond these molecular signals from the mother, relevant features of the environment are detected and encoded through the child's ongoing experiences.
A major framework in evolutionary biology for explaining patterns of developmental plasticity and individual differences is life history theory (see Kaplan & Gangestad, 2005; Stearns, 1992). All organisms live in a world of limited resources; for example, the energy that can be extracted from the environment in a given amount of time is intrinsically limited. Time itself is a limited good; the time spent by an organism looking for mates cannot be used to search for food or care for extant offspring. Due to these structural and resource limitations, organisms cannot maximize all components of fitness simultaneously and instead are selected to make trade-offs that prioritize resource expenditures, so that greater investment of time and/or resources in one domain occurs at the expense of investment in competing domains.
For example, resources spent on mounting a robust inflammatory response to fight infection cannot be spent on reproductive effort. Thus, the benefits of inflammatory response are traded off against the costs of lower ovarian function in women and reduced musculoskeletal function in men (Clancy et al., 2013; Muehlenbein & Bribiescas, 2005). Trade-offs between reproductive effort and health go in the opposite direction as well, as early reproductive maturation is linked to more physical health problems in adulthood (Allsworth, Weitzen, & Boardman, 2005). Each trade-off constitutes a decision node in allocation of resources, and each decision node influences the next decision node (opening up some options, foreclosing others) in an unending chain over the life course (Ellis, Figueredo, Brumbach, & Schlomer, 2009). This chain of resource-allocation decisions—expressed in the development of a coherent, integrated suite of physiological and behavioral traits—constitutes the individual's life history strategy.
Life history strategies are adaptive solutions to fitness trade-offs within the constraints imposed by social conditions, physical laws, phylogenetic history, and developmental mechanisms. An organism's life history strategy coordinates morphology, physiology, and behavior in a way that maximizes expected fitness in a given environment (Braendle, Heyland, & Flatt, 2011; Réale et al., 2010). At the most basic level, the resources of an organism must be distributed between somatic effort and reproductive effort. Somatic effort can be further subdivided into growth, survival and body maintenance, and developmental activity (Geary, 2002). Developmental activity includes play, learning, exercise, and other activities that contribute to building and accumulating embodied capital—strength, coordination, skills, knowledge, and so forth (Kaplan & Gangestad, 2005). Reproductive effort can be subdivided into mating effort (finding and attracting mates, conceiving offspring), parenting effort (investing resources in already conceived offspring), and nepotistic effort (investing in other relatives, for example, siblings and grandoffspring).
The critical decisions involved in a life history strategy can be summarized by the fundamental trade-offs between current and future reproduction, between quality and quantity of offspring, and between mating and parenting effort (see Ellis et al., 2009; Hill, 1993; Kaplan & Gangestad, 2005). By delaying reproduction, an organism can accumulate resources and/or embodied capital, thus increasing the quality and fitness of future offspring; however, the risk of dying before reproducing increases concomitantly. When reproduction occurs, the choice is between many offspring of lower quality and fewer offspring of higher quality. Although intensive parental investment is a powerful way to increase the embodied capital (and long-term prospects) of one's descendants, the fitness gains accrued through parenting must be weighed against the corresponding reduction in mating opportunities. Different life history strategies solve these problems in different ways by determining how organisms allocate effort among fitness-relevant traits.
At the broadest level of analysis, life history traits covary along a dimension of slow versus fast life history strategies. Variation along the slow-fast continuum is observed both among related species and among individuals of the same species (see Ellis et al., 2009; Réale et al., 2010). Slow growth and late reproduction correlate with long lifespan, high parental investment, fewer offspring of higher quality, and low juvenile mortality. Conversely, fast growth and early reproduction correlate with high juvenile mortality, short lifespan, larger numbers of offspring and reduced parental investment in each (Figure 38.1). Fast life history strategies are comparatively high risk, focusing on mating opportunities, reproducing at younger ages, and producing a greater number of offspring with more variable outcomes.
Figure 38.1 The Fast-Slow Continuum of Life History Variation.
Developmental calibration of slow versus fast life history strategies is a prototypical case of developmental plasticity. Key dimensions of the environment that regulate the development of life history strategies include energy availability, extrinsic morbidity-mortality, and predictability of environmental change (Ellis et al., 2009; Kuzawa & Bragg, 2012). Energetic resources—caloric intake, energy expenditures, and related health conditions—set the baseline for many developmental processes. Energy scarcity slows growth and delays sexual maturation and reproduction, resulting in a “slow” life history strategy. However, when bioenergetic resources are adequate to support growth and development, proximal cues to extrinsic morbidity-mortality and unpredictability generally promote faster life history strategies.
Extrinsic morbidity-mortality refers to external sources of disability and death that are relatively insensitive to the adaptive decisions of the organism. Environmental cues indicating high levels of extrinsic morbidity-mortality cause individuals to develop faster life history strategies. Faster strategies in this context—a context that devalues future reproduction—function to reduce the risk of disability or death prior to reproduction. Moreover, high extrinsic morbidity-mortality means that investing in parental care has quickly diminishing returns, which favors reduced parental investment and offspring quantity over quality. Accordingly, exposure to environmental cues indicating extrinsic morbidity-mortality (i.e., observable cues that reliably covaried with morbidity-mortality risks during evolutionary history) can be expected to shift life history strategies toward current reproduction by accelerating maturation and the onset of sexual activity. In humans, these cues may include exposure to violence, harsh child-rearing practices, premature disability and death of other individuals in one's local ecology, and so forth.
In addition to extrinsic morbidity-mortality, environmental unpredictability—stochastic changes in ecological and familial conditions—also regulates development of life history strategies (Ellis et al., 2009). In humans, cues of unpredictability may include erratic neighborhood conditions, frequent residential changes, fluctuating economic conditions, changes in family composition, and so forth. In environments that fluctuate unpredictably, long-term investment in development of a slow life history strategy does not optimize fitness; all of the energy invested in the future is wasted if the individual matures into an environment where life expectancy is short. Instead, individuals should detect signals of environmental unpredictability and respond to them by adopting faster life history strategies.
Belsky, Steinberg, and Draper (1991) were the first to hypothesize that harsh parenting, conflictual family relations, and insecure attachment would predict early sexual maturation, precocious sexuality, unstable couple relationships, impulsivity, reduced cooperation, and exploitative interpersonal styles—the expected correlates of a fast life history strategy in humans. Empirical studies have confirmed these associations (see the special section of Developmental Psychology; Ellis & Bjorklund, 2012). Other key psychological mediators of fast life history strategies include present orientation (the inability to delay gratification and/or wait for larger rewards in the future) and a short subjective life expectancy (reviewed in Belsky, 2012; Del Giudice, 2014). At the level of personality traits, slow life history strategies are robustly associated with agreeableness and conscientiousness (Del Giudice, 2014). Taken together, these results strongly support the existence of a fast-slow dimension underlying a broad spectrum of individual differences in humans.
Because extrinsic morbidity-mortality and unpredictability are distinct, developmental exposures to each of these environmental factors should uniquely contribute to variation in life history strategy (Ellis et al., 2009). Longitudinal analyses of the National Longitudinal Study of Adolescent Health, the National Institute of Child Health and Human Development (NICHD) Study of Early Child Care and Youth Development, and the Minnesota Longitudinal Study of Risk and Adaptation (MLSRA) support this prediction (Belsky, Schlomer, & Ellis, 2012; Brumbach, Figueredo, & Ellis, 2009; Simpson, Griskevicius, Kuo, Sung, & Collins, 2012). For example, in the NICHD and MLSRA studies, exposures to environmental unpredictability in the first 5 years of life (e.g., parental changes, residential changes) uniquely predicted faster life history strategies in adolescence and emerging adulthood, independent of the effects of unpredictability in later childhood and indicators of extrinsic morbidity-mortality.
All developmental processes are ultimately the product of structured organism–environment interplay. Development is always modulated by the organized phenotype, which is initially provided by the parents in the form of a zygote and then changes during ontogeny in response to both genetic and environmental influences.
Consider a central life history trait: timing of sexual maturation. Sexual maturation is regulated by energetic conditions, so that (on average) individuals in well-fed populations experience early puberty and individuals in poorly fed populations experience late puberty (Ellis, 2004). The effects of energetic conditions, however, are modulated by the organized phenotype. For example, food-getting ability (a behavioral phenotype), metabolic efficiency (a physiological phenotype), and energy stores in the form of body fat (a morphological phenotype) all contribute to regulation of puberty; that is, these phenotypic traits modulate the effects of the critical environmental factor (energy availability) on maturation and functioning of the reproductive axis. The same logic applies to genetic effects: Genes provide templates for the production of particular molecules that become incorporated into the phenotype, depending on the responsivity of the phenotype to those molecules and the presence of the necessary environmental building blocks (substances from outside the organism) to support gene expression (West-Eberhard, 2003). The effects of genes, environments, and phenotypes are hierarchically organized: The preexisting phenotype is the transducer of both genetic and environmental sources of information. Specifically, genetic and environmental effects depend on the phenotype being organized to accept them, and the modified phenotype retains these effects as development proceeds. In this sense, the phenotype embodies one's own particular history of genetic and environmental effects.
An important phenotypic characteristic that moderates the effects of environmental conditions on the timing and tempo of puberty is biological sensitivity to context, which Boyce and Ellis (2005) defined as neurobiological susceptibility to both cost-inflicting and benefit-conferring features of the environment. Enhanced biological sensitivity to context increases developmental receptivity to the environment, with more neurobiologically susceptible individuals experiencing more developmental change in response to environmental conditions (Ellis, Boyce, Belsky, Bakermans-Kranenburg, & van IJzendoorn, 2011). Boyce and Ellis (2005) operationalized biological sensitivity to context as heightened autonomic or adrenocortical reactivity to environmental challenge (see Obradovic, 2012; Sijtsema et al., 2013, for reviews of empirical evidence and limitations). In a longitudinal study of children from preschool to high school, lower-quality parent–child relationships forecasted faster initial tempo of puberty and earlier pubertal timing, but only among children showing biological sensitivity to context in the form of heightened sympathetic nervous system or adrenocortical reactivity (Ellis, Shirtcliff, Boyce, Deardorff, & Essex, 2011). Thus, consistent with bidirectional models of person-environment interactions, environmental effects on regulation of puberty depended on the extant phenotype being organized to accept them, with heightened stress reactivity increasing the child's susceptibility to familial conditions.
Although we do not have detailed information on gene-environment interactions in human sexual development, there is emerging evidence that genetic effects on puberty are also conditioned by environmental context, and vice versa. The first molecular genetic study to investigate this question focused on variation in the estrogen receptor gene ESR1 (Manuck, Craig, Flory, Halder, & Ferrell, 2011). Consistent with past research, women who reported being raised in families characterized by distant interpersonal relationships and high levels of conflict tended to reach menarche earlier than women raised in close families with little discord. However, this effect was moderated by ESR1 variation. Among women who were homozygous for minor alleles of the two ESR1 polymorphisms examined in the study, a childhood history of low-quality family relationships (−1 SD) was associated with a 1-year decrease in age of menarche compared with a childhood history of high-quality family relationships (+1 SD); no such effect was found among women with other ESR1 genotypes. These data demonstrate moderating effects, in which environmental influences on regulation of puberty depend on genotypic variation.
* * *
The organized phenotype incorporates and biologically embeds environmental and genetic inputs throughout the life course. This ongoing process translates into individual differences in such critical traits as body size, energy reserves, metabolic efficiency, susceptibility to environmental influence, immune function, fecundity, mate value, and fighting ability. Differences between individuals in these phenotypic traits influence the cost-benefit trade-offs of different life history strategies and thus play a central role in regulating the development of these strategies. Life history concepts can be used to make remarkably accurate predictions about the structure of individual differences in physiology, growth, and behavior and the environmental factors that shift development along alternative trajectories (Del Giudice & Ellis, in press). In particular, life history theory delineates basic dimensions of environmental stress and support that underlie the multitude of risk and protective factors described in mainstream developmental psychopathology—resource availability, morbidity-mortality risk, and unpredictability.
Development matters, and this should be reflected in how evolutionary psychologists theorize about what is inherited and how. Infants are not born as blank slates; evolution has prepared them to “expect” certain types of environments and to process some information more readily than others. But prepared is not preformed (Bjorklund, 2003). It is the constant and bidirectional interaction between various levels of organization, which changes over the course of development, that produces behavior. Although contemporary evolutionary psychologists clearly state that “environment” interacts with genetic dispositions to produce adaptive behavior, how this occurs (i.e., how phenotypes develop) is rarely addressed. This is a major contribution that a developmental perspective can make to evolutionary psychology, along with the realization that natural selection has shaped human thought and behavior not only during adulthood, but also during infancy and childhood. A developmental perspective does not lessen the role of genetics in explaining contemporary human behavior, but rather helps to clarify how genes interact over time with environments, broadly defined, to produce adaptive patterns of thought and behavior, including individual differences. Such a perspective can go a long way, we believe, toward bringing evolutionary thought to a wider range of behavioral scientists.
Paraphrasing Dobzhansky, we believe that nothing in development makes sense except in the light of evolution. An evolutionary perspective affords a deeper understanding of human ontogeny, as it does of all aspects of human functioning. An evolutionary perspective by itself, however, is not sufficient to “explain” development; it must be integrated with other causal factors, including sociohistorical and current contextual influences (from genes through contemporary culture) (Bjorklund & Hernández Blasi, 2012).