Clark Spencer Larsen
Bioarchaeology has developed into a distinctive discipline due in part to growing interest in the role of human remains in understanding the history of the human condition. Previous generations of bioarchaeologists typically studied archaeological skeletons without ever having seen the context of recovery. Typically, the remains were excavated by an archaeologist and then transported to the laboratory, where a sole worker — the bioarchaeologist — studied them. Thus, collaborative research was limited to the interaction between the individual who excavated the skeletons and the individual who studied them. Oftentimes, the results of the investigation of the remains ended up as an appendix to an obscure archaeological publication or report.
Although the disconnection between archaeological context and bioarchaeological study continues to be an all too frequent practice, the on-site presence of skeletal specialists and the supervision of excavation by bioarchaeologists are becoming commonplace. My sense of the field of bioarchaeology is that the results of the study of skeletons are now incorporated into the body of research reports rather than relegated to unread appendices, as was the earlier practice (and see Buikstra, 1991). Moreover, in contrast to the earlier practice of the lone osteologist attempting to report on all aspects of skeletal variation, bioarchaeological research increasingly involves teams of scientists — often drawn from nonanthropological disciplines. These teams bring together a range of areas of expertise to address common problems and questions.
The purpose of this chapter is twofold. First, it discusses the questions and topics that archaeological skeletons are especially useful for addressing. Second, it discusses important technological and methodological advancements made in the field in the last several decades. Bioarchaeologists have been successful at drawing from research protocols developed in other sciences, and this chapter identifies some tools and skills that have been especially useful in helping us develop a more informed understanding of the human past, at least as it is based on the human biological component of that record.
The changing scope of bioarchaeology, especially its interdisciplinary orientation, reflects what has happened in most other areas of science in the later 20th century and into the 21st century. Bioarchaeologists have been adept at identifying and coopting developments in other disciplines in addressing problem-oriented research. As discussed in this chapter, the disciplines that have contributed to bioarchaeology include those housed in a diversity of fields, including the biological, geological, chemical, physical, and engineering sciences — the so-called hard sciences — and the social and behavioral sciences. More than anything else, this diversity of fields that bioarchaeology draws from reflects the fact that human beings are highly complex. Unlike other animals, Homo sapiens and its ancestors are shaped by a complex interaction among biology, culture, and the environment. This interaction is expressed in multiple ways in biological tissues that are often difficult to interpret. It is the bioarchaeologist’s primary task to identify and interpret this complex interaction from the remote (and sometimes not so remote) past, relying on preserved biological tissues. Understanding this complex interaction helps us understand past populations as though they were alive today—as living, breathing, and functioning human beings.
There are some limitations in what we can learn about past human biology. Bioarchaeologists are almost always restricted to the study of bones and teeth (only rarely are partial or whole bodies available for study), whereas human biologists who study the living are able to look at a range of behavioral and anatomical parameters that either cease to exist once a person dies or disappear altogether once the soft tissues have decayed away. Moreover, skeletons from a cemetery may not always be representative of the living population from which they are drawn. For example, a person who dies from an infection may do so well before the osteological signature has had time to develop. However, a person who displays an indication of a past perturbation or stress, such as hypoplasia or periostitis, may have had relatively robust health — enough so that the person survived the stress that caused the osteological modification. Thus, it is quite possible that there is a positive correlation between pathology and health in individuals drawn from a cemetery population (see Wood et al., 1992; Cohen, 1998; Buikstra, 1997). Key in understanding the representation of archaeological cemeteries and the skeletons drawn from them is that they are aggregations of samples of populations, usually covering multiple generations.
Despite the limitations inherent in the study of ancient remains, there is much to learn about the past from them. This section focuses on the following three areas that archaeological human remains are informative about when drawing inferences about the past: (1) quality of life, (2) behavior and lifestyle, and (3) biological relatedness (biodistance) and population history.
The measurement of quality of a person’s life is highly subjective and reflects a wide variety of circumstances — social, cultural, and biological. Because of this subjectivity, quality of life is difficult to measure and can represent many different things to different people (Bennett and Phillips, 1999). For example, the number of labor-saving devices a person owns might be one measure. Most measures of quality of life include health as the chief component, especially regarding disease and its consequences for the individual or population (Ware, 1987; Guyatt et al., 1993; Allison et al., 1997).
The central role that health plays in measuring quality of life in the living indicates that documentation of health indicators in ancient skeletons can provide bioarchaeologists with a means of assessing quality of life in the past. This chapter focuses on dietary reconstruction and nutritional inference, disease, and growth, especially new tools developed in other sciences that have helped refine our understanding of the past.
An understanding of diet (the foods eaten) is fundamental to understanding health, mainly because diet provides the nutrition (the nutrients that these foods provide) one needs for a healthy life. For most of the history of anthropology, diet in past populations was based largely on the study of plant and animal remains. Traditionally, these areas of investigation — paleoethnobotany and zooarchaeology, respectively — were outside the purview of bioarchaeology. Skeletal biologists did not involve themselves in the reconstruction of diet or implications that these diets had for nutrition in past populations.
With the development of stable isotope analysis of archaeological human bone for dietary reconstruction, bioarchaeologists became deeply involved in the growing discussion in anthropology of issues relating to food use, dietary reconstruction, and nutritional inference. Stable isotope analysis was coopted by bioarchaeology from other disciplines—the theory is based in physics and it was first applied in the geological sciences, especially geochemistry (Schoeninger, 1995). The elements that comprise the tissues of plants and animals—for example, carbon (C), nitrogen (N), hydrogen (H), oxygen (O), and strontium (Sr)—occur in various forms called isotopes, which differ according to the number of neutrons found in their nuclei. Plants use one of three photosynthetic pathways, which, because of the differences in how carbon is acquired from atmospheric carbon dioxide, express differences in the ratios of the stable forms of the element (13C to 12C). Because these differences in ratios of 13C to 12C—expressed as δ13C values—are passed up the food chain to the consumer (animals and humans), researchers realized the tremendous potential for paleodietary study. Using instrumentation developed in chemistry called mass spectrometry, Vogel and van der Merwe (1977; van der Merwe and Vogel, 1978) measured amounts of the stable carbon isotopes 13C and 12C and their ratios in archaeological bone samples from New York state. They hypothesized that because maize was the primary economically important plant consumed by prehistoric humans in this region of North America, it should be possible to identify the timing and importance of its use by examining stable isotope ratios in a temporal succession covering the transition from foraging to farming. That is, maize has a C4 photosynthetic pathway, in contrast to most other plants eaten in this region, which are almost exclusively C3 plants.
Because of differences in the way that carbon is acquired between C3 and C4 plants, plants of the former variety have lower (more negative) δ13C values than plants of the latter variety. This means that the bone tissue (collagen) should express higher (less negative) δ13C values when the shift to maize agriculture occurred, and the higher the value, the greater the importance of maize in diet. Their pilot study provided compelling evidence that, contrary to the assertions of many archaeologists, maize did not become an important part of diet until late in North American prehistory, mostly after about AD 800.
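The reasoning behind the maize signal can be sketched as a simple two-endmember linear mixing model. The endmember collagen values below (about -21.5‰ for a pure C3 diet and -7.5‰ for a pure C4 diet, reflecting a roughly +5‰ diet-to-collagen enrichment) are illustrative assumptions rather than values from the studies cited; actual paleodietary work calibrates endmembers against local food webs.

```python
def fraction_c4(delta13c_collagen,
                c3_end=-21.5,   # assumed collagen delta13C for a pure C3 diet (per mil)
                c4_end=-7.5):   # assumed collagen delta13C for a pure C4 (maize) diet
    """Two-endmember linear mixing estimate of the C4 (maize) dietary fraction."""
    f = (delta13c_collagen - c3_end) / (c4_end - c3_end)
    return min(1.0, max(0.0, f))  # clamp to the physically meaningful range [0, 1]

# A hypothetical forager-to-farmer succession: collagen values become less
# negative as maize gains dietary importance
for label, d13c in [("Early forager", -20.5), ("Transitional", -15.0), ("Late farmer", -10.0)]:
    print(f"{label}: d13C = {d13c} per mil -> ~{fraction_c4(d13c):.0%} C4")
```

A sample at the C3 endmember yields a fraction of 0, one at the C4 endmember yields 1, and intermediate values interpolate linearly, which is the logic behind reading less negative δ13C values as greater maize dependence.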
Since the mid-1980s, archaeologists and bioarchaeologists have analyzed thousands of human bone samples from around the world to address the shifting patterns of human diet based on stable carbon isotopes. Other stable isotopes and trace elements have also provided an enormously important perspective on diet and nutrition, such as for identifying relative use of terrestrial vs marine resources in coastal settings (see Schoeninger, 1995).
The knowledge that maize was grown and harvested later in prehistory provided a new and fresh perspective on other changes that took place in prehistoric societies in a number of regions. In eastern North America, at about the same time that maize became a key part of the economy, there were widespread changes in sociopolitical complexity and settlement. Populations in the final centuries of prehistory became more complex and more sedentary than their predecessors and were more dependent on agriculture. The increasingly sedentary nature of these groups was also accompanied by an aggregation of population in villages and towns. Most authorities are convinced that the cultural florescence that took place during this time, known as the “Mississippian,” was fueled by an economy reliant on production of domesticated plants, specifically focused on maize (Smith, 1989).
The focus on maize has important implications for health and nutrition in prehistoric societies. For example, maize is a poor source of protein in that it is deficient in several essential amino acids necessary for growth and development. This and other factors suggest that a change in diet led to a decline in nutritional quality (Larsen, 1995).
Disease is also an important component of quality of life. The representation of disease in ancient remains has been the focus of study by bioarchaeologists and others since at least the 18th century, and inferences about quality of life have been made (Ubelaker, 1982; Buikstra and Cook, 1980; Larsen, 1997). Unfortunately, due to the overlap in bony responses to specific pathogens, it is oftentimes difficult, if not impossible, to diagnose specific diseases. Beginning with the publication of Earnest Albert Hooton’s classic monograph, The Indians of Pecos Pueblo, a Study of Their Skeletal Remains, in 1930, a new approach to the study of ancient disease commenced. Hooton presented the frequency of a variety of skeletal pathological conditions present in the Pecos series, such as osteoarthritis, infection and inflammatory conditions, and trauma. His inchoate paleoepidemiological study set the stage for population-oriented research undertaken by J. Lawrence Angel in the eastern Mediterranean region (Angel, 1966b, 1984) and later workers (e.g., Cook, 1984; Larsen, 1982; Ubelaker, 1994; and many others). Especially important about Hooton’s approach is the shift in focus from mostly diagnosis of specific diseases to the development of a greater understanding of the importance of context and biocultural setting in interpreting disease prevalence and pattern. That is, bioarchaeologists seek to identify the environmental, social, cultural, and other factors that best explain the presence of a disease or set of diseases in the past and the circumstances underlying changes in their frequency.
Diagnosis is still an important element of understanding disease and health history in an ancient population. Skeletal lesions are notoriously difficult to match with a particular disease that caused them. Some general patterns have emerged that are consistent with uniformitarian notions of how pathogens operate in a living population. For example, bioarchaeologists have documented in many settings a general pattern of increase in frequency of periostitis and bone infection in later prehistory, where populations increased in size and became concentrated in settled communities (Larsen, 1995). Epidemiological theory tells us that infection increases when human populations become denser and more sedentary. Indeed, this is the general pattern that we see in the prehistoric past, especially in North America where the bioarchaeological record is most complete.
This pattern change in periostitis is interesting, but still leaves open the question of what diseases were present in the past. The identification of specific diseases is not just an “academic” question. Rather, their documentation offers an important avenue for reconstructing the evolutionary history of infectious diseases and for understanding their interaction with humans and other organisms. This knowledge provides an important tool for control of the disease in the living.
The osteological signatures of disease are somewhat clear for several chronic infectious diseases, including tuberculosis, treponematosis, and leprosy (see Ortner and Putschar, 1985; Larsen, 1997). However, the overlap in skeletal manifestations between these and other disease syndromes greatly constrains our ability to identify specific infectious disease in ancient remains. Tools developed in other sciences are offering important — and potentially revolutionary — insight into the history of disease. First, histology, the microscopic study of tissue structure, offers perspective on disease diagnosis that is informative in ways not possible with gross inspection of pathological bone. One application in particular is the histological analysis of cranial bone exhibiting cribra orbitalia and porotic hyperostosis. There has been a growing consensus in bioarchaeology and paleopathology that these two pathological conditions represent iron deficiency anemia. However, detailed examination of thin sections of bone lesions reveals that although diagnosis of anemia is correct in many cases, scurvy, inflammation, and other agents or circumstances can be involved (see Kreutz, 1997; Carli-Thiele, 1996; Schultz, 1993; Schultz et al., 2001).
Second, DNA extracted from ancient bone is beginning to confirm the presence of specific diseases in osteological remains. Like the human host, the pathogens that cause disease are living organisms. Therefore, just as human remains retain nucleic acids with potentially amplifiable DNA, so too should the pathogens that the human host was carrying at the time of death. Spigelman and Lemma (1993; see also Rafi et al., 1994) completed one of the earliest studies of ancient DNA, seeking to detect the presence of the pathogenic organism that causes tuberculosis, Mycobacterium tuberculosis. They applied polymerase chain reaction (PCR) analysis — a technological breakthrough in molecular biology that allows identification and amplification of DNA (Mullis and Faloona, 1987) — to bone samples from skeletons with lesions diagnosed as tuberculosis. PCR analysis revealed the presence of M. tuberculosis DNA. Similarly, DNA extracted from soft tissue (lung) tuberculosis in a 1000-year-old mummy from Chiribaya Alta in southern Peru was analyzed by PCR, identifying a segment of DNA that is unique to M. tuberculosis (Salo et al., 1994). This evidence for the presence of tuberculosis in a pre-Columbian New World setting is enlightening because it presents unequivocal evidence for the disease in the New World well before the arrival of Europeans. This finding runs counter to the traditional argument that tuberculosis was introduced by early Spaniards. This powerful tool has also now revealed the presence of tuberculosis in a range of settings around the world, both before and after the late 15th century (e.g., Spigelman and Donoghue, 1999; Faerman et al., 1999). Moreover, unlike human DNA extracted from archaeological skeletons, pathogen DNA is relatively contamination free, making it a highly promising material for future investigation (see Kolman and Tuross, 2000).
Growth and development of skeletal and dental hard tissues offer important perspectives on health and nutritional status, perhaps better than any other indicators. The current understanding of human skeletal growth as it is documented in ancient skeletons is backed by an extensive literature, especially following Stewart’s (1954) study of Eskimos and Johnston’s (1962) study of Indian Knoll, Kentucky (and see review in Hoppa and FitzGerald, 1999). Most bioarchaeological studies focus on linear growth of long bones and dental development. The resulting growth profiles from bones and teeth suggest that environmentally disadvantaged populations have retarded growth.
Studies of enamel defects called pathological striae of Retzius (or Wilson bands) from ground thin sections of teeth using histological techniques provide an important retrospective picture of growth history for skeletal individuals. Jerome Rose (1977, 1979; Rose et al., 1978) was among the first to show the value of histological analysis in understanding growth history and to infer quality of life in archaeological populations. Based on his study of dentitions from central Illinois (Dickson Mounds) in a temporal succession of populations, he was able to identify a pattern of increasing physiological stress based on a higher frequency of microdefects in later than in earlier teeth. Rose argued that the increase in stress was caused by a decline in nutritional quality with the increased emphasis on maize, along with population aggregation and increased disease stress.
In historic settings where stress can be documented through written sources, the context for study of microdefects is provided. In Spanish Florida, Roman Catholic missions were established among native populations in the 16th and 17th centuries. For this region of North America, there is a range of evidence — bioarchaeological and documentary — indicating deteriorating quality of life in native populations (Hann, 1988; Larsen, 2000). During this time, native populations increased maize production and consumption, relocated to crowded mission communities, and increased labor generally. Simpson (1999, 2001) analyzed longitudinal thin-sections of anterior teeth (incisors, canines) from prehistoric and mission-era native populations in northern Florida. Whereas Rose’s analysis involved light microscopy, Simpson applied scanning electron microscopy (SEM) for observation and analysis. Scanning electron microscopy is now the preferred tool in histological study, mainly because it offers much greater power of magnification, greater depth of focus, and greater resolution in detail of structure than standard light microscopy (see discussion in Teaford, 1991). SEM uses an electron beam that scans the tooth section, causing an emission of electrons. These secondary electrons are amplified, resulting in an image reflecting very slight variations in brightness created by surface features in the tooth section. Like the image seen on a standard television, the image of the section represents an electronic composite of many points of light rather than the object itself.
Normal Retzius lines are slightly darkened and radiate outward from the dentine–enamel junction to the tooth’s surface. The regular spacing of the lines reflects periodic growth deposition of enamel lasting anywhere from 6 to 9 days of growth. Pathological Retzius lines are abnormally dark bands reflecting acute stress episodes lasting from several hours to several days. Simpson’s comparison of mission teeth with pre-mission (mostly prehistoric) teeth revealed a marked increase in pathological Retzius lines (from 48 to 83% of teeth). This finding is consistent with the notion that mission peoples experienced more stress than their pre-mission ancestors. Pathological Retzius lines are a nonspecific indicator of health, but Simpson (2001) believes that for this setting the lines are likely caused by dehydration from infantile diarrhea. Under these circumstances, severe dehydration results in dysfunction of ameloblasts caused by intracellular fluids moving into intercellular spaces in the developing enamel. His interpretation is consistent with the fact that the mission setting was highly unsanitary and conducive to conditions that would cause widespread infantile diarrhea (see Larsen et al., 1992; Larsen and Sering, 2000).
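Because normal Retzius lines form at a regular periodicity (6 to 9 days, per the text), counting the normal lines deposited before a pathological band yields a rough developmental chronology for the stress episode. The sketch below assumes a single fixed periodicity and a known age at crown initiation; both are simplifying assumptions, and the function name and defaults are illustrative rather than drawn from Simpson's studies.

```python
def days_at_stress(normal_lines_before_defect, periodicity_days=8,
                   crown_initiation_days=0):
    """Approximate age (in days) at a stress episode recorded as a pathological
    Retzius line, from the count of normal lines formed before it.
    The 6-9 day periodicity range follows the text; the default of 8 days and
    a zero crown-initiation offset are illustrative assumptions."""
    if not 6 <= periodicity_days <= 9:
        raise ValueError("Retzius-line periodicity is typically 6-9 days")
    return crown_initiation_days + normal_lines_before_defect * periodicity_days

# e.g., 30 normal lines before the defect in a crown initiated at ~90 days of age
age = days_at_stress(30, periodicity_days=7, crown_initiation_days=90)
print(f"Stress episode at roughly {age} days (~{age / 30.4:.0f} months) of age")
```

This kind of incremental chronology is what lets histological studies place stress episodes within an individual's early childhood rather than merely counting them.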
Physical activity is a defining characteristic of humans generally and shows a high degree of patterned variability across the world. Workload, for example, varies considerably across the spectrum of different subsistence strategies, ranging from heavy to light. There are some human groups that spend much of their day in highly demanding physical activities, whereas there are others that are involved in relatively little physical activity (e.g., most Americans). In order to reconstruct and interpret behavioral patterns in past humans, bioarchaeologists commonly rely on the study of osteoarthritis, primarily because the disorder is caused in large part by wear and tear on the joints of the skeleton. Because other factors also influence osteoarthritis, such as environment, climate, body weight, and genetic predisposition, it is not possible to equate frequency or type of osteoarthritis with frequency or type of workload or physical activity. Generally speaking, however, populations that had demanding physical lifestyles had more osteoarthritis than populations living under less demanding circumstances (Larsen, 1997).
Biomechanical analysis of long bones (e.g., femur, humerus) is much more revealing about activity level and type. Just like building materials that go into the construction of buildings or bridges, long bones are structured so as to be able to withstand breakage due to excessive mechanical loading, such as from bending or twisting (torsion). The ability of building materials and long bones to resist mechanical loads is called “strength” or “rigidity” (see Ruff, 1992; Larsen, 1997). “Beam theory” from engineering science provides an important framework for present-day bioarchaeologists for drawing inferences about activity and workload in past humans. In this regard, long bones can be modeled as hollow beams. When viewed in cross-section, the magnitude of physical stresses in these hollow beams is directly proportional to the distance from a central or “neutral” axis running down the midline of the bone. Mathematically, the stresses are equal to zero at this midline, but the further one is from the midline, the greater the magnitude of mechanical stress. Thus, a bone that is the strongest is one in which the material (cortical bone) is placed furthest from the midline.
Engineers have developed standard formulas for calculating cross-sectional geometric properties that measure the strength of a cross-section. Unlike building materials analyzed by engineers, human bone tissue is highly dynamic, and the strength of the cross-section changes over the course of an individual’s lifetime. For example, when viewed in cross-section, the diaphyses of the long bones of the skeleton continue to expand throughout life. This continued expansion appears to maintain the mechanical integrity of the element, especially as bone tissue is lost after about age 40 (see Ruff and Hayes, 1982).
Two key cross-sectional geometric properties analyzed by bioarchaeologists include values called “I,” which measures the ability of the bone to resist bending, and “J,” which measures resistance to torsion. J represents the sum of the strength values of Ix and Iy (resistance to bending in the “x” plane and “y” plane, respectively) and is a good overall indicator of bone strength.
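The I and J properties can be made concrete by idealizing the diaphysis as a hollow circular ring, for which closed-form formulas exist. Real analyses digitize the actual periosteal and endosteal contours, so the ring model below is a deliberate simplification for illustration; the radii in the example are invented, not measured values.

```python
import math

def ring_section_properties(periosteal_r, endosteal_r):
    """Cross-sectional geometric properties for a long-bone diaphysis
    idealized as a hollow circular ring (radii in mm)."""
    if endosteal_r >= periosteal_r:
        raise ValueError("endosteal radius must be smaller than periosteal radius")
    # Second moment of area about any diametral axis (bending rigidity), mm^4
    i_x = i_y = math.pi / 4.0 * (periosteal_r**4 - endosteal_r**4)
    # Polar second moment (torsional rigidity) is the sum: J = Ix + Iy
    j = i_x + i_y
    # Cortical area, mm^2 (amount of bone tissue in the section)
    area = math.pi * (periosteal_r**2 - endosteal_r**2)
    return {"Ix": i_x, "Iy": i_y, "J": j, "cortical_area": area}

# Illustration of the age-related pattern described above: periosteal expansion
# can raise J even while endosteal expansion reduces the amount of cortical bone
young = ring_section_properties(periosteal_r=12.0, endosteal_r=6.0)
old = ring_section_properties(periosteal_r=13.0, endosteal_r=8.5)
print(f"J young: {young['J']:.0f} mm^4, J old: {old['J']:.0f} mm^4")
print(f"area young: {young['cortical_area']:.0f} mm^2, old: {old['cortical_area']:.0f} mm^2")
```

The example shows why placing cortical bone farther from the neutral axis strengthens the section: the radii enter the I and J formulas to the fourth power, so a modest periosteal gain outweighs a larger endosteal loss.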
Cross-sectional geometric properties can be measured either invasively or noninvasively. In the former, the bone is cut with a fine-tooth saw at the section location (e.g., femur midshaft), the section is photographed, the photograph is projected onto a digitizer screen, and the outlines of the endosteal and periosteal surfaces are digitized manually. With computer software developed by engineers and modified for study of human bone, the cross-sectional geometric properties are calculated automatically from the digitized section images (see Ruff, 1992). Alternatively, images can be generated via computed axial tomography (commonly known as CT scans). Computed axial tomography was developed in medical science as a noninvasive means to observe body tissues in living persons. Bones offer excellent material for observation of surfaces not otherwise visible (i.e., the endosteal surface). Due to advances in CT technology, it is now possible to create high-quality images that are as accurate as photographic images of the actual bone cross-section.
Long bones from a range of archaeological populations, mostly in North America, have been analyzed by various workers in an effort to characterize type and level of physical activity (Ruff, 1992; Larsen, 1997). For example, comparison of I and J values in the aforementioned prehistoric and mission-era populations from Spanish Florida reveals patterns of behavioral change. In particular, there is an increase in bone strength between late prehistoric and mission Indians, revealing an increase in workload once the Spanish arrived in the region (Larsen and Ruff, 1994; Larsen et al., 1996; Ruff and Larsen, 2001). Historic records indicate that the mission Indians were heavily exploited for labor, including food production, transport of heavy materials, and construction projects. Thus, the increase in bone strength reflects an adaptation to increased labor demands during the 16th and 17th centuries. Biomechanical evidence provides clear biological evidence for changing patterns of lifestyle and behavior not obtainable from other sources.
Prior to their consumption, most foods have to be processed in some manner in order to make them chewable, enhance their taste, or provide key nutrients that might otherwise be unavailable once they pass to the digestive tract. Foods most humans eat today are highly processed, so much so that the amount of chewing has been minimized. Still, nearly all food requires some amount of chewing before it is passed on to the digestive tract. Study of gross wear on the occlusal surfaces of teeth reveals different patterns and severity of wear, reflecting the kinds of foods being eaten or the type of processing (e.g., with grinding stones) before they enter the mouth (Smith, 1991). The identification of wear patterns on the teeth allows the bioarchaeologist to draw inferences about foods that members of a particular population ate. Moreover, unusual patterns of wear, such as heavy wear on the lingual surfaces of maxillary incisors (e.g., Irish and Turner, 1987; Larsen et al., 1998; for review, see Milner and Larsen, 1991), indicate the use of teeth in extramasticatory functions.
Scanning electron microscopic study of occlusal surface tooth wear has been an important technological breakthrough in refining our understanding of tooth use and masticatory adaptation. A large body of experimental research on animals and humans fed different types and textures (e.g., hard vs soft) of foods reveals key patterns of variation. For example, subjects fed soft or otherwise nonabrasive foods show a strong tendency for having fewer and smaller microwear features (e.g., narrower scratches, smaller pits) than subjects fed hard or abrasive foods (Teaford, 1991; Teaford and Lytle, 1996).
Bullington (1991) examined occlusal surfaces of deciduous teeth from part-time agriculturalists and later intensive agriculturalists from west-central Illinois. Ethnobotanical research indicated that the earlier group ate wild plants and animals, along with various domesticated starchy seeds having hard seed coats. In contrast, the later group replaced (at least partially) these plants with maize. Analysis of ceramic technology indicates that plants in the later period were likely boiled for long periods of time, which would soften the food into a mushy consistency. Her comparisons of microwear using SEM revealed that the deciduous teeth of the two prehistoric groups had similar types and frequencies of microwear on their occlusal surfaces. However, comparison of microwear in the youngest age cohort (ca. 0.5-1.0 years) indicated that the later intensive agriculturalists had a lower frequency of microwear features than the earlier, less intensive agriculturalists. This difference indicates the strong possibility that the later infants were eating softer foods than the earlier infants.
In contrast to the setting from west-central Illinois, microwear appears to have changed dramatically in other areas of North America and elsewhere with major subsistence shifts. Teaford and colleagues (Teaford, 1991; Teaford et al., 2001) found a general reduction in frequency and severity of microwear features with the shift from foraging to farming on the southeastern U.S. Atlantic coast. In mission populations during the period of more intensive agriculture, the occlusal surfaces of molars contain fewer pits and scratches, reflecting both the change in foods consumed (more maize) and how they were prepared (e.g., more prolonged boiling of food). In contrast, Molleson and co-workers (1993) and Pastor (1992) documented an increase in frequency of microwear features in western and southern Asia, respectively. In these settings, the change appears to be related to the adoption of grinding stones, used in preparing grains into flour. This new technology added grit to the foods being eaten, and hence more microwear.
The identification of relatedness between human groups has been a major area of discussion in anthropology since the 18th century, when measurement of skulls began to be used to identify biological/anatomical differences and to infer biological history and population relationships. Bioarchaeologists today use biodistance to identify temporal and spatial relationships between and within past groups based on the study of polygenic skeletal and dental traits (Buikstra et al., 1990). The approach assumes that sharing of skeletal and dental attributes indicates affinity (e.g., presence of a persistent metopic suture or accessory cusps on molars). These traits include both discrete (non-metric) and metric features, which for the most part do not bear a one-to-one relationship with a person’s genome. However, biodistance analysis has proved an important tool for documenting population structure and relatedness.
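Nonmetric-trait biodistance is commonly summarized with a statistic such as Smith's Mean Measure of Divergence (MMD). Several variants exist in the literature; the sketch below uses the simplest arcsine transformation with a 1/n sampling correction, and the trait counts in the example are invented for illustration, not data from the studies cited.

```python
import math

def mmd(trait_counts):
    """Smith's Mean Measure of Divergence between two skeletal samples.
    trait_counts: list of (k1, n1, k2, n2) tuples, where k is the number of
    individuals showing a trait and n the number scored, for each group.
    Uses the simple transform theta = asin(1 - 2p); published variants
    differ slightly in transform and correction term."""
    total = 0.0
    for k1, n1, k2, n2 in trait_counts:
        theta1 = math.asin(1 - 2 * k1 / n1)
        theta2 = math.asin(1 - 2 * k2 / n2)
        # Squared angular difference minus a correction for sampling error
        total += (theta1 - theta2) ** 2 - (1 / n1 + 1 / n2)
    return total / len(trait_counts)

# Hypothetical counts for three traits (e.g., metopic suture, accessory cusps)
divergent = mmd([(5, 50, 30, 60), (10, 50, 35, 60), (4, 40, 20, 50)])
similar = mmd([(5, 50, 6, 60), (10, 50, 12, 60), (4, 40, 5, 50)])
print(f"MMD divergent pair: {divergent:.3f}, similar pair: {similar:.3f}")
```

Larger values indicate greater divergence in trait frequencies; negative values arise when sampling error exceeds the between-group difference and are conventionally treated as zero distance.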
Advances in recent years in the extraction and amplification of DNA from archaeological bone are beginning to make possible the identification of genetic distance, a development that was thought to be unlikely a decade ago. Although application of PCR to archaeological bone is still very much in its infancy in bioarchaeology, the situation is changing rapidly as protocols are established and reliable results begin to emerge.
Hypotheses about population movements and relationships in North America have generated a great deal of debate among archaeologists and linguists. In the American Great Basin (Nevada and parts of surrounding states), Sidney Lamb (1958) suggested, based on glottochronological evidence, that the ancestry of the Numic speakers living throughout the region today could be traced to a founding population in southeastern California. He argued that this founding group spread throughout the Great Basin from its homeland at about AD 1000. Some archaeologists believe that changes in material culture, settlement pattern, and subsistence seen at this time were caused by the Numic expansion and replacement (e.g., Bettinger, 1994), whereas others see no record of cultural discontinuity (e.g., Raven, 1994).
Studies of mitochondrial DNA (mtDNA) from living Native Americans reveal that at least four distinct founding matrilines (haplogroups A, B, C, and D) account for most groups (Kaestle et al., 1999; Smith et al., 1999; Crawford, 1998). Comparison with Asian populations reveals the strong likelihood of a northeastern Asian origin for Native Americans. Moreover, different language groups today, including Numic and non-Numic speakers, possess different frequencies of these haplogroups. Theoretically, the identification of haplogroups from mtDNA extracted from bone samples from archaeological contexts in the Great Basin should reveal either a discontinuity or a continuity between prehistoric and living peoples in the Great Basin.
Kaestle and co-workers (Kaestle, 1995; Kaestle et al., 1999) extracted and amplified DNA from a sample of skeletons from the Stillwater Marsh region of western Nevada in order to identify ancestral–descendant relationships. Their findings indicate that the most parsimonious explanation is some degree of admixture between Numic and pre-Numic peoples: mtDNA haplogroups and albumin phenotypes characteristic of both Numic and non-Numic speakers are present in the Stillwater series. Kaestle and co-workers suggest that if Numic speakers did move into the Great Basin, they did not replace the earlier pre-Numic population. Thus, while the native languages spoken at the time of European contact were derived from the end of the first millennium AD, the biological composition involved both pre-AD 1000 and post-AD 1000 groups. This conclusion is consistent with the observations made by Smith and co-workers (1995) from their analysis of serum protein albumin extracted from the Stillwater Marsh skeletal samples. That is, the combined absence of AlNa and presence of AlMe phenotypes indicates that the ancestry of the prehistoric populations of the region is neither Athapascan nor Algonkian, consistent with most linguistic reconstructions for the region (see Smith et al., 1995). The mtDNA evidence makes clear that the prehistoric and historic populations in the region are likely biologically related.
On the other side of the Great Basin, from prehistoric skeletons recovered near the Great Salt Lake in northern Utah, O’Rourke and colleagues (1996, 1999; Parr et al., 1996) have provided another context for testing hypotheses about the so-called Numic expansion. Like the evidence derived from the study of skeletons from Stillwater Marsh, DNA evidence from the Great Salt Lake region indicates a probable continuity between pre-AD 1000 and post-AD 1000 populations in the eastern Great Basin. Interestingly, the genetic marker associated with haplogroup B, the 9-bp deletion, is present in the earliest and latest samples in this region, providing additional support for biological continuity in the Great Basin.
The documentation of a person’s residence history through the analysis of stable isotopes in earlier and later forming tissues offers an important means of identifying patterns of mobility. Strontium isotope ratios (87Sr/86Sr) in skeletal and dental tissues are useful for distinguishing people whose foods came from marine and coastal settings from those whose foods came from terrestrial settings. In the South African Cape region, strontium ratios differ between people who lived in the coastal setting versus those who lived in the interior mainland. Ratios determined from bone samples of prehistoric people living in the interior are higher than those for prehistoric people living on the coast. These differences reflect the local geochemistry to which people in each setting were exposed: the isotope ratios of the underlying geology are passed directly through the food chain without undergoing fractionation (Sealy et al., 1991). Indeed, comparison of earlier formed teeth with later formed teeth in South Africa reveals isotopic differences reflecting the different locations of the individual when the teeth were forming. Similarly, Price and co-workers (1994a,b; Grupe, 1995) analyzed strontium isotope ratios in teeth (earlier forming tissues) and mature bone (later forming tissues) in the American Southwest and southern Bavaria. In the Southwestern setting, only some of the individuals displayed strontium isotope ratios that matched local geochemistry, indicating that these individuals likely spent their entire lives at their natal residence (or close to it). In Bavaria, comparison of isotope ratios in earlier and later populations from the Bell Beaker period (2500-2000 BC) reveals a decrease in variation, which can be interpreted as representing a decline in the mobility of the population in general. This interpretation is consistent with archaeological settlement analysis showing a shift to increasing sedentism.
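The logic of the tooth-versus-bone comparison described above can be sketched as a simple classification: tooth enamel forms in childhood, bone remodels through adulthood, so comparing each tissue's 87Sr/86Sr ratio against the locally observed range suggests whether a person moved during life. The function and the example ratios below are hypothetical illustrations of this reasoning, not values or procedures from the studies cited.

```python
def classify_residence(enamel_ratio, bone_ratio, local_range):
    """Classify an individual's residence history from 87Sr/86Sr ratios.

    enamel_ratio: ratio in tooth enamel (records childhood residence)
    bone_ratio:   ratio in mature bone (records later-life residence)
    local_range:  (low, high) bounds of the local bioavailable ratio
    """
    lo, hi = local_range
    child_local = lo <= enamel_ratio <= hi
    adult_local = lo <= bone_ratio <= hi
    if child_local and adult_local:
        return "lifelong local"
    if not child_local and adult_local:
        return "nonlocal childhood (moved in later)"
    if child_local and not adult_local:
        return "local childhood (later residence elsewhere)"
    return "nonlocal throughout"

# Hypothetical example: the local range and measured ratios are illustrative only
print(classify_residence(0.7165, 0.7098, (0.7090, 0.7105)))
```

In this illustration, the enamel ratio falls outside the local range while the bone ratio falls within it, the signature of an immigrant who arrived after childhood.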
My reading of the changes seen in the field of bioarchaeology since the mid-1980s is one of vitality, innovation, and increasing sophistication. Long past are the days when a lone bioarchaeologist would be expected to develop a comprehensive analysis of a series of archaeological skeletons. More commonly, the bioarchaeologist today is involved from the start of an archaeological project in which skeletons are excavated, in order to address questions that will improve our understanding of quality of life, behavior, and population history in past societies. The bioarchaeologist studying the skeletons will likely call upon other experts, such as those who study ancient DNA, bone geometry, or tooth microwear.
Bioarchaeologists have recognized the strength of the tools developed in other sciences that can help address key issues about the human biological past. The newly evolving instrumentation and techniques discussed in this chapter — scanning electron microscopy, computed axial tomography, mass spectrometry, and so forth — are not just “bells and whistles.” Rather, they offer a means to address issues about human biological history. Based on the recent history of bioarchaeology, we can expect continued growth of the discipline, fueled by new methodological and theoretical developments and by new opportunities to learn about the human past.
Any discussion of the history of bioarchaeology will almost certainly highlight difficulties relating to comparability of data sets generated by different bioarchaeologists. The development of common standards for data collection is helping to address this problem (e.g., Buikstra and Ubelaker, 1994). A broader concern is the need to deepen collaboration between archaeologists and bioarchaeologists even further. I believe that refinements in this arena are necessary for the placement of bioarchaeology within the larger context of the anthropological and behavioral sciences generally.
There is a common misperception that archaeological bones and teeth are not especially informative about human social or political behavior. For example, it is often assumed that the study of gender is inaccessible in past settings. Some argue that gender attribution — and therefore the issue of gender overall — is too ambiguous in archaeological settings to be able to reconstruct and interpret past human behavior. Wylie (1991:31) noted that “the very identification of women subjects and women’s activities is inherently problematic; they must be reconstructed from highly enigmatic data.”
Contrary to Wylie’s statement, gender is a highly visible part of the past. Nowhere is the potential for elucidating human social behavior where gender is concerned greater than in human skeletal remains. Indeed, human remains provide the only direct means of identifying the sex of a person in archaeological contexts, and arguably sex identity — female or male — provides a key window into gender. This point is underscored by the publication of an entire volume devoted to the discussion of sex and gender in relation to disease and health in the past (see Grauer and Stuart-Macadam, 1998). Clearly, this biocultural approach to the study of gender has important implications for emerging studies in archaeology, other areas of anthropology, and other disciplines, particularly with regard to health and behavior.
The link between quality of life and gender also speaks to the larger issue of the relationship between health and political complexity. The political structure of a population is strongly integrated with its subsistence base. In this regard, access to food (and, by extension, nutrition) is influenced by the political system: access differs according to age, gender, status, and other cultural identities. In her extensive overview, Danforth (1999) documents clear links between quality of life (especially nutrition) and political complexity, finding that most members of egalitarian societies have good nutritional health and that low-status individuals in state-level societies have poorer health than high-status individuals. This pattern is very much the same as that seen in societies around the world today (e.g., see various studies in Strickland and Shetty, 1998). The discussion of political complexity and its implications for health and quality of life is an issue of enormous concern to a range of disciplines, and bioarchaeology has much to offer, especially with regard to the past.
Bioarchaeology is enjoying a period of robust growth. This growth relates to the increasing recognition that human remains offer valuable insight into human behavior, health, and quality of life in the past. More importantly, the growth of the discipline reflects the strengths that it brings to the table in addressing issues about the past as well as the success of the discipline in adopting and applying developments — technological, methodological, and theoretical — from other sciences in new, innovative, and highly creative ways.
Much of the discussion in this chapter reflects my own education in bioarchaeology. I especially thank my colleagues and collaborators who have contributed in so many ways to the advancement of the field in general and who have contributed to my own research in particular—Christopher Ruff, Margaret Schoeninger, Mark Teaford, Dale Hutchinson, Katherine Russell, Mark Griffin, Frederika Kaestle, David Smith, and Scott Simpson. I am fortunate to have been involved in the kind of archaeological–bioarchaeological collaboration that I espouse in this chapter. I especially acknowledge the projects that I have coordinated with David Hurst Thomas, Bonnie McEwan, Jerry Milanich, Rebecca Saunders, Robert Kelly, and Joseph Craig. I am grateful to the National Science Foundation, the National Endowment for the Humanities, and the St. Catherines Island Foundation, the primary agencies that have funded my research.