Michael G. Yost and P. Barry Ryan
Dr. Yost and Dr. Ryan report no conflicts of interest related to the authorship of this chapter.
This chapter introduces concepts and activities that are at the core of environmental health: recognizing, measuring, and ultimately controlling human exposures to harmful agents. Our account begins with industrial hygiene, a technical field that historically evolved in industrial workplaces. It then moves beyond industrial hygiene to describe exposure science, a modern field focused on exposure assessment in both the workplace and the general environment.
Industrial hygiene and exposure science share a common task: quantifying exposures. This task is relevant both to public health practice and to research. In public health practice, quantifying exposures helps us to assess potential problems, direct preventive efforts and monitor their success, and check compliance with regulations. Quantifying exposures is also essential for research, because it allows investigators to quantify the association between the exposures and health outcomes. For example, knowing that carbon monoxide is an asphyxiant is only partially useful. Knowing how much carbon monoxide exposure is dangerous, and knowing how to measure the exposures where and when they occur, enables us to understand the biological effects more completely, identify acceptable levels and set standards accordingly, and monitor environments to be sure they are safe.
Industrial hygiene extends beyond its traditional task of measuring exposures to controlling them. Typically, an industrial hygienist in a factory is called upon to, say, monitor air levels of a hazardous agent such as a chemical solvent. If the exposures are excessive in a particular part of the factory, the hygienist will implement controls: for example, by substituting a safer solvent, upgrading the ventilation system, or providing personal protective equipment for affected workers.
In contrast, exposure scientists usually focus only on measuring and quantifying exposures in the general population (often in a research setting); the results of the exposure assessment then become the inputs for risk assessment and decisions by public health policymakers. Responsibility for controlling the excessive exposures rests with these other professionals.
Industrial hygiene (referred to as occupational hygiene outside the United States) has been defined as “the discipline of anticipating, recognizing, evaluating and controlling health hazards in the working environment with the objective of protecting workers' health and well-being, and safeguarding the community at large” (International Occupational Hygiene Association, 2009). Industrial hygienists are professionals trained to manage workplace risks, in collaboration with allied professionals such as occupational physicians and nurses who treat work-related illness. Industrial hygiene has been practiced in the United States for about one hundred years. Industrial hygienists work to predict and then recognize workplace hazards, quantify the exposures, and implement appropriate control strategies.
Koren and Bisesi (2002, pp. 563–565) have developed concise definitions of each part of this paradigm. They define anticipation of hazards as “proactive estimation of health and safety concerns that are commonly, or at least potentially, associated with a given occupational or environmental setting.” Recognition of occupational hazards is the “identification of potential and actual hazards in a workplace through direct inspection,” a definition that emphasizes that empirical observation is at the heart of industrial hygiene. Evaluation includes measuring exposures through “visual or instrumental monitoring of a site.” Finally, control is the “reduction of risk to health and safety through administrative or engineering measures.” Industrial hygienists spend much of their time in real workplaces, observing, measuring, and problem solving to improve worker health and safety.
Anticipation is the first step before conducting a field assessment. The hygienist typically obtains information such as the site history, a diagram of the manufacturing processes, worker job titles, and material safety data sheets for the chemicals in use. The hygienist uses this information and practical knowledge to develop a preliminary list of potential workplace hazards, including safety hazards that pose a risk of injury and health hazards that pose a risk of disease. This anticipation process may also uncover environmental hazards that can impact nearby communities, rivers, woodlands, or other sensitive environments.
Examples of safety hazards include insufficient emergency egress; moving machinery or vehicles, such as forklifts; slippery, elevated, or uneven surfaces contributing to trips and falls; and inappropriate chemical storage posing a risk of fire or explosion. Although these concerns are the domain of a related profession, safety engineering, many industrial hygienists handle safety concerns as part of their job, especially at smaller facilities.
Examples of health hazards in the workplace include physical hazards, or agents, such as high noise levels, elevated temperatures and humidity, and radiation. Physical agents have in common high levels of energy or force on parts of the body. Physical hazards also include repetitive motion such as occurs in typing or hand tool use, which can increase the risk of musculoskeletal injuries such as back pain or carpal tunnel syndrome. Exposure to chemical hazards, or agents, can occur in many workplace processes and may present an acute or chronic hazard. Acute, high-level exposures to certain highly toxic chemicals, such as chlorine gas, may result in both acute and chronic health effects, disability, and even death. Such events must be clearly anticipated and controlled. More commonly, long-term exposures can lead to chronic effects, such as neurological damage from solvent exposures. For example, long-term exposure to benzene increases the risk of bone marrow dysfunction and aplastic anemia, a blood disease; inhaling asbestos fibers can lead to lung disease and cancers; inhaling crystalline silica contributes to silicosis in foundry workers; and radon increases the risk of lung cancer in uranium miners.
Some workplaces can also present biological hazards. For example, in health care settings workers may be exposed to blood-borne pathogens, such as hepatitis B, hepatitis C, or HIV, that can be contracted through mishandling of needles or other instruments that come into contact with body fluids. In these workplaces strict disinfection protocols, personal protective equipment such as gloves and masks, and secure disposal of biohazard wastes are used to prevent the spread of infections.
Industrial hygienists often are called upon to anticipate environmental hazards as well as workplace hazards. Environmental hazards may endanger public safety (as when a chlorine tank ruptures and neighbors are exposed to toxic gas), health (as when organic solvents from improper storage or disposal contaminate groundwater or drinking water), or welfare (as when smokestack emissions damage nearby trees or homes). Environmental effects also include ecological damage (such as killing fish by reducing the oxygen-carrying ability of lakes or streams) and economic damage (such as contaminating nearby land with heavy metals, industrial solvents, or pesticides that diminish the land's value for residential or recreational purposes). The industrial hygienist should anticipate such possibilities and review the site history. For example, a review of records or employee interviews may suggest that solvents have seeped into the ground and migrated off-site, contaminating groundwater. A hygienist who suspects such widespread contamination may consult with an environmental specialist to assess nearby groundwater.
Text Box 8.1 presents an example of an industrial hygiene evaluation, emphasizing hazard anticipation. The example shows that even with minimal information, the hygienist can anticipate hazards prior to visiting a facility. This assessment strategy depends on examining a range of information before visiting the site: the industrial process description, the job titles of workers, the chemicals in use at the facility (often found on the material safety data sheets required by law), and the history of the site. Using this information the hygienist develops a list of potential health and safety hazards, perhaps in checklist form, to enhance observations during the walk-through visit.
The initial recognition phase is usually accomplished during a site visit or walk-through, where the hygienist conducts a visual inspection of the facility to assess both qualitative and quantitative information about hazards. The hygienist inspects processes and procedures at the facility, observes workers in various job categories, and reviews any health and safety programs in place at the plant. During a walk-through the hygienist looks for a wide range of issues: physical, chemical, and biological exposures; ergonomic, mechanical, and psychological factors; safety hazards such as exposed machinery or slippery surfaces; high noise levels; and the presence of chemicals. A similar on-site review may look for environmental hazards, with an emphasis on off-site emissions.
Another important purpose of the walk-through is recognition of special subpopulations in the facility that are at elevated risk. Some workers may perform lifting or repetitive movements in their jobs, exposing them to ergonomic hazards. Work in a high-temperature area may subject other workers to heat stress. During a walk-through the hygienist notes these subpopulations and may evaluate hazards differently for different groups. The hygienist concludes the recognition phase with a detailed picture of the manufacturing processes, a list of the associated hazards, and a written hazard evaluation plan. From this plan a detailed protocol is developed for the next phase, the evaluation of the hazards.
After the walk-through the hygienist will have a list of potential hazards but no quantitative information about worker exposures. For example, if a metalworking facility uses toxic degreasing solvents, the risk of exposure may be minimal with proper storage, personal protective gear, and appropriate ventilation. The evaluation phase actually begins during the walk-through, and there is a smooth transition from the recognition of hazards to their evaluation.
Evaluation focuses on quantifying the degree of exposure. Exposures can be assessed in several ways: area sampling collects measurements in a room in the vicinity of some workers; personal sampling collects contaminants in the breathing zone of individuals, using small portable samplers; biological sampling collects body fluids or breath samples to measure contaminants or specific metabolites.
The starting point when conducting an exposure study is the identification of the study population to be sampled. This involves enumerating all the people who could be sampled. In a workplace the focus is usually on certain workers with specific job titles—those identified during the anticipation phase as having a potential for high exposures. Other selection criteria may also apply, based on concerns raised by workers or their union, consultants, or regulatory authorities.
The next choice is the sampling strategy to be applied to the population. In small workplaces (e.g., a dozen or fewer employees) a simple census of all individuals may be taken. This ensures that all exposures are monitored. In larger facilities this approach can be too costly, so a statistically representative sample is needed. A common approach here is stratified sampling.
In stratified sampling the population is divided into subgroups (strata), and each individual monitored represents a known number of individuals in the subgroup. For example, if a trucking company has 5,000 drivers serving a state, it may be impractical to monitor all of them for exposure to diesel exhaust during trips. The hygienist may subdivide the drivers into groups according to the type of route (say, long-haul or delivery) and destination city, creating, say, fifty groups of roughly 100 individuals each. The hygienist then selects some members from each group (usually at random) to create a statistically representative sample of the full 5,000. Although stratified sampling is subject to error because not all the exposed people are monitored, it can be an efficient way to characterize an entire population when differences between groups have an effect on exposures. Techniques are available to estimate the size of the error. Statistical colleagues can help the hygienist to determine the number of samples needed from each group to characterize exposures for the entire population.
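The selection and weighting logic of stratified sampling can be sketched in a few lines of code. Everything in this example is hypothetical (the stratum names, group sizes, and per-stratum mean exposures are invented); it illustrates only the mechanics, and a real survey design would be worked out with a statistician.

```python
import random

def stratified_sample(strata, k, seed=0):
    """Randomly select k individuals from each stratum.

    strata: dict mapping stratum name -> list of member IDs.
    Returns a dict mapping stratum name -> list of sampled IDs.
    """
    rng = random.Random(seed)
    return {name: rng.sample(members, k) for name, members in strata.items()}

def estimate_population_mean(stratum_sizes, stratum_means):
    """Weight each stratum's mean exposure by its share of the population."""
    total = sum(stratum_sizes.values())
    return sum(stratum_sizes[s] * stratum_means[s] for s in stratum_sizes) / total

# Hypothetical example: two route types with different diesel exposures
strata = {"long_haul": [f"LH{i}" for i in range(100)],
          "delivery": [f"D{i}" for i in range(100)]}
chosen = stratified_sample(strata, k=5)
mean = estimate_population_mean({"long_haul": 100, "delivery": 100},
                                {"long_haul": 0.12, "delivery": 0.30})  # mg/m3
```

Here each of the five monitored drivers stands in for twenty others in the same stratum, and the population estimate weights each stratum's mean by its size.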
A third strategy is the so-called convenience sample. Often such a strategy consists of monitoring volunteers or individuals with a particular complaint. Convenience sampling can be subject to bias; those who volunteer or have complaints are not likely to represent all members of the group. This sampling strategy should be avoided in favor of randomly selecting individuals. However, a related sampling strategy may have a role and is used in regulatory settings. The hygienist may choose worst-case sampling—selecting workers at highest risk of exposure or measuring at times when exposures are highest. The assumption in worst-case sampling is that if these workers' exposures are acceptable, then all the remaining workers are also unlikely to be overexposed.
Two general types of devices are used for measuring environmental exposures: direct reading devices and sample collection devices. Direct reading devices provide near real-time measurements of the exposure of interest, and sample collection devices store or trap samples for later analysis.
Direct reading instruments are available for measuring many physical hazards, such as temperature, noise, and radiation. These instruments typically have a digital readout and the ability to store data over a period of time for later downloading. Common examples are digital thermometer-hygrometers to measure temperature and humidity, noise monitors, and ultraviolet radiation monitors. Other direct reading instruments can measure various pollutants, including gases, vapors, and airborne particles. For example, fine particles can be measured with a device called an optical particle counter. These instruments are usually portable, battery operated, lightweight, and enclosed in a rugged case for field surveys.
Sample collection instruments are often used when multiple airborne pollutants are present or further laboratory analysis is needed. The collection device draws in a known volume of air, including whatever contaminants are in it, and traps the contaminants on an absorbing medium. The absorbing medium is used to stabilize and store the contaminant so the sample can be taken later to a laboratory where the mass of the contaminant stored in the medium is determined. The air concentration of the contaminant in the sample can then be quantified in units of mass per volume, by dividing the mass of contaminant collected on the absorbing medium by the volume of air that was sampled. These air concentrations typically have units of micrograms per cubic meter (µg/m3). By increasing the volume of air sampled, more contaminant mass can be collected, thereby increasing the sensitivity for detecting low levels of contamination.
Sample collection instruments can be either active or passive. Active sampling devices draw air through the absorbing medium using an electric pump. The pump's airflow rate can be varied, and the total volume of air sampled is calculated by multiplying the flow rate by the duration of sampling. The sampling time period can sometimes be shortened by increasing the pump flow rate, thereby delivering the same volume of air in less time. This can be useful when exposures occur over short time periods or are highly variable. Conversely, the flow rate can be decreased if longer sample times are needed.
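The arithmetic for an active sample is straightforward: the sampled volume is the pump's flow rate multiplied by the sampling time, and the concentration is the collected mass divided by that volume. A minimal sketch, with hypothetical numbers:

```python
def air_concentration(mass_ug, flow_l_min, minutes):
    """Concentration in ug/m3 from an active (pumped) air sample.

    Volume sampled (m3) = flow rate (L/min) x time (min) / 1000 L per m3.
    """
    volume_m3 = flow_l_min * minutes / 1000.0
    return mass_ug / volume_m3

# Hypothetical 8-hour shift sample: 48 ug collected at 2 L/min for 480 min
conc = air_concentration(48.0, 2.0, 480.0)  # -> 50.0 ug/m3
```

Doubling either the flow rate or the sampling time doubles the volume sampled, which is why either adjustment increases the mass collected and thus the sensitivity.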
Active sampling is highly versatile, sensitive, and specific for the contaminant of interest because sophisticated laboratory analysis methods (such as mass spectrometry) can be used to analyze the samples. However, running the pump requires a battery or electricity and the pump may be bulky or noisy and have limited run time. These drawbacks make such devices unsuitable for some kinds of personal sampling, although area sampling is more feasible. In Figure 8.1, both the sampling devices (for ozone and particulate matter) and the pump are located inside the box at the bottom of the apparatus. The vertical pipe with the metal cone on top collects the fine particle sizes that can be inhaled deeply into the lungs. Ozone is sampled off the same airstream.
Figure 8.1 An Air Pollution Monitoring Station for Ozone and Particulate Matter, in Atlanta
Passive sampling devices use diffusion, rather than a pump, to collect the air sample. This method requires an absorbing medium that removes the compound of interest from the air by reaction or absorption at the surface of the medium. The concentration gradient between the air to be sampled and the surface of the absorbing medium causes the contaminant of interest to diffuse from the air to the surface where it is trapped. The mass collected in the medium is then analyzed in a laboratory, as in active sampling analysis. The flow rate of the air delivered to the surface during sampling is computed using Fick's law of diffusion; the volume sampled is this sampling rate multiplied by the sampling time. The concentration is then calculated in the same way as for an active sample, by dividing the mass collected by the volume of air sampled.
Although passive devices do not require a pump, their diffusion method generates low sampling rates; often these flow rates are 1,000-fold slower than rates in active sampling devices. Thus the contaminant mass sampled in a given time is correspondingly lower. However, in occupational settings concentrations are often sufficiently high that passive devices can still achieve excellent results. Further, laboratory analysis has substantially improved, reducing the amount of material needed for accurate quantification. When available and of sufficient precision and accuracy, passive sampling devices can be the method of choice. Passive devices for particulate matter are not yet of sufficient precision and accuracy to merit their use in typical occupational settings.
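The practical consequence of the lower diffusive sampling rate can be illustrated numerically. The rates below are hypothetical (chosen simply to reflect the 1,000-fold difference mentioned above); the same mass-equals-concentration-times-volume relation governs both device types:

```python
def mass_collected_ug(conc_ug_m3, rate_l_min, minutes):
    """Mass trapped on the medium = air concentration x volume sampled."""
    return conc_ug_m3 * (rate_l_min * minutes / 1000.0)

# Hypothetical workplace air at 100 ug/m3, sampled over an 8-hour shift
active = mass_collected_ug(100.0, 2.0, 480.0)     # pump at 2 L/min
passive = mass_collected_ug(100.0, 0.002, 480.0)  # diffusive rate 1,000x lower
```

The passive badge traps one one-thousandth of the mass in the same period, so its usefulness depends on the air concentration being high enough, or the laboratory method sensitive enough, to quantify that smaller amount.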
Biomonitoring (also called biological monitoring) involves the collection of body fluids or tissue such as saliva, blood, hair, or urine. These are analyzed for either the contaminant or a metabolite of that contaminant. Biomonitoring is discussed later in this chapter.
Control of workplace hazards is an important element of industrial hygiene practice that corresponds, in public health terms, to primary prevention (as discussed in Chapter 26). Several approaches are used to modify the workplace environment: substitution, isolation, ventilation, administrative changes, and personal protection. Substitution involves replacing a hazardous material or process with a less hazardous one. For example, benzene (a bone marrow toxin) might be replaced by less toxic toluene. Isolation involves containing or limiting human access to the hazardous materials, usually through engineering controls. For example, a metal casing may be used to enclose a solvent washer. For certain hazards, most notably chemical and heat-related hazards, ventilation provides a viable control strategy. For example, the introduction of fresh air or use of a local exhaust hood may significantly reduce exposure to these hazards. Administrative controls consist of policies and procedures that reduce risks. For example, maintenance workers place a lock and signed tag on machinery controls to prevent unintended operation during repairs, a standard injury prevention strategy called lockout/tagout. Rotating workers to limit the time spent by any individual in a high-exposure location may have a role as well (an approach used, for example, with radiation workers).
Protective devices are often used to control safety hazards. For example, a cutting machine may be designed so that the worker needs to push two buttons, one with each hand, to initiate a cut; this guarantees that the worker's hands cannot be in the cutting zone during operation.
Personal protective equipment (PPE), such as respirators, gloves, safety glasses, hardhats, safety harnesses, and steel-toed boots may be recommended, although this approach is less preferable than the environmental changes described previously. Figure 8.2 shows an example of personal protective equipment in use. Working at a degreasing tank, a worker may inhale vapors or absorb solvent that splashes on bare skin. This worker wears personal protective equipment consisting of gloves and a face shield to protect the hands and face from splashed solvent and a respirator to prevent vapors from being inhaled.
Figure 8.2 Personal Protective Equipment
Source: Courtesy of Phillip L. Williams, University of Georgia College of Public Health.
This worker is positioned over a solvent bath. Note the sampling apparatus on the worker's belt, and the hose running from the breathing zone.
Exposure science is broadly defined as the study of human contact with chemical, physical, or biological agents occurring in the environment. It focuses on the mechanisms and dynamics of events either causing or preventing adverse health outcomes (National Research Council, 2012). Exposure scientists quantify exposures in both occupational settings (as do industrial hygienists) and in community settings where people may encounter hazards as they go about their daily activities. Exposure science often focuses on evaluating exposure determinants, which are the factors and conditions that influence these exposures. Exposure assessment, one aspect of exposure science, aims to quantify exposures in both occupational and environmental settings. These assessments focus on key concepts such as concentration, exposure, and dose (as discussed in Text Box 8.2).
An important aspect of exposure is its time course, sometimes referred to as the exposure profile, which can be graphed as the concentration present in a person's breathing zone (or, less typically, in other media such as in drinking water or during dermal exposure) over a period of time. The term total exposure is sometimes used to describe the area under this exposure-time curve. Different exposure profiles can yield similar total exposures. For example, one worker may weld for 15 minutes in an enclosed space and sustain a concentration of metal fumes of 40 mg/m3, receiving a total exposure of (40 mg/m3)(0.25 hr) = 10 mg/m3 × hr. After finishing his task, he experiences no further exposure to welding fumes. A coworker, working in the same area but not exposed directly to the fumes, remains for the entire 8-hour shift. Over the course of the day, the coworker experiences a concentration of 1.25 mg/m3. The coworker receives an identical total exposure [(1.25 mg/m3)(8 hr) = 10 mg/m3 × hr], but the exposure profile is different. In some circumstances, different exposure profiles may have different health effects, even with equivalent total exposures.
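The welder-and-coworker comparison can be checked by treating total exposure as the area under a piecewise-constant concentration-time curve. This is only a sketch of the arithmetic in the example:

```python
def total_exposure(profile):
    """Area under the exposure-time curve.

    profile: list of (concentration in mg/m3, duration in hours) segments.
    Returns total exposure in mg/m3 x hr.
    """
    return sum(conc * hours for conc, hours in profile)

welder = total_exposure([(40.0, 0.25), (0.0, 7.75)])  # brief, intense peak
coworker = total_exposure([(1.25, 8.0)])              # steady, low level
# Both equal 10.0 mg/m3 x hr, despite very different profiles
```

Identical areas, very different shapes: the profile list preserves exactly the peak and duration information that the single total-exposure number discards.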
The shape of the exposure profile matters because some contaminants are easily cleared at low exposure levels but toxic at higher levels. In this case, the dose rate may affect the health outcome. Exposure assessors focus on the intensity of exposure, frequency of exposure, and duration of exposure, asking questions such as: What is the peak concentration in the monitoring period? How much variability occurs from minute to minute or hour to hour? Do exposures recur regularly or episodically? Is the duration of exposure short followed by no exposure, or does exposure occur at moderate levels for a long period? Such information can prove invaluable in addressing potential effects and control strategies.
Three basic scenarios, defined by the U.S. Environmental Protection Agency (U.S. EPA), are used by scientists to distinguish exposures across different time periods (U.S. Environmental Protection Agency, Office of Research and Development, National Center for Environmental Assessment, 2015). Acute exposure occurs by ingestion, skin absorption, or breathing for twenty-four hours or less. Chronic exposure consists of repeated episodes that occur by the same routes for more than approximately 10% of the life span in humans. Subchronic exposure is repeated exposure by one or more of these routes for more than thirty days, up to approximately 10% of the life span in humans. When acute exposures occur at high levels, poisoning or other immediate responses may follow. Chronic exposures at lower levels may be linked to health outcomes such as cancer, chronic lung damage, or similar effects. Subchronic exposures are between these two and also may be episodic or recurring.
There are three principal routes of exposure for people: inhalation exposure, ingestion exposure, and dermal exposure. These routes of exposure are different from the exposure pathway, or the path by which the contaminant moves from a source to a human receptor. For example, pesticide exposures in children may come from several pathways. Children may ingest pesticides from residues present on food (a dietary pathway); they may get pesticides on their skin from their parents' contaminated clothing if the parents work on a farm (a take-home pathway); if spraying takes place close to their home, they may inhale pesticide particles or vapors (a drift pathway). These pathways differ substantially and each requires entirely different assessment and control strategies to reduce exposure.
Ideally, an exposure assessment method quantifies the mass of contaminant reaching the target organ in each exposed person. Of course this is generally not feasible, but four broad assessment method categories approximate this ideal to increasing degrees: imputing or modeling exposures, measuring environmental exposures, measuring personal exposures, and measuring biomarkers. In general these methods become increasingly expensive, and increasingly accurate, as one moves up this continuum.
Exposure scientists use indirect exposure assessment methods to impute exposures when they lack direct measurements or only have partial data. Indirect approaches are usually substantially simpler and less costly than direct measurements. Additionally, for retrospective studies, in which it is impossible to take measurements, indirect approaches are the only methods available.
Air pollution studies provide one example. In a study of inhalation exposure to air pollution, researchers might use a time-location study to identify various microenvironments (home, work, travel, etc.) in which people spend a significant portion of their day. The researchers then measure pollutant concentrations in representative microenvironments, and have subjects record or estimate the amount of time spent in the microenvironments. From these data, the scientists could multiply the concentrations by the amount of time spent in each microenvironment and sum the results for an estimate of each person's exposure. A similar approach can be used, for example, for ingestion. Concentrations of contaminants can be measured in many different foods; people can record the types and amounts of foods they eat, using a food diary; and dietary exposures can be estimated by summing over all the foods eaten.
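The microenvironment calculation described above amounts to a time-weighted sum of concentration multiplied by time spent. The concentrations and time budget below are hypothetical:

```python
def microenvironment_exposure(concentrations, hours):
    """Sum of (concentration x time spent) over each microenvironment."""
    return sum(concentrations[m] * hours[m] for m in hours)

# Hypothetical PM2.5 levels (ug/m3) and one person's daily time budget (hr)
concentrations = {"home": 12.0, "work": 25.0, "travel": 40.0}
hours = {"home": 14.0, "work": 8.0, "travel": 2.0}
daily = microenvironment_exposure(concentrations, hours)  # ug/m3 x hr
```

The same pattern applies to the dietary example: replace microenvironments with foods, concentrations with residue levels, and hours with amounts eaten.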
A related strategy that uses exposure scenarios constructs estimates without direct measurement by assuming activity patterns for typical individuals (children, adult men, adult women, etc.). Available monitoring data for each activity and location can then be combined to model estimates of individual exposures. This approach is inexpensive because no individuals are measured or activities recorded. Exposure scenarios are used extensively in risk assessment.
A special case of indirect exposure assessment is the job-exposure matrix (JEM) (see Chapter 4). Suppose an exposure assessment is needed for a retrospective study of silica exposures in a worker cohort. Consulting old employment records, the exposure scientist identifies ten job categories, each with characteristic tasks, and fifteen work zones, each with silica concentrations derived from historical industrial hygiene monitoring or from estimates by a panel of experts. The exposure scientist then constructs a JEM, assigning an exposure level to each worker based on his or her job assignment and work zone location. If the workplace changes over time, as is typical, then a JEM for each time period is created to account for the differences. A JEM is often the only way to assess exposures in retrospective studies. However, JEM creation can be time consuming, and accurate records may not be available to complete the assessment.
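A JEM is essentially a lookup table. The fragment below sketches the idea with invented job titles, zones, periods, and silica levels; a real matrix would be far larger and grounded in historical monitoring records:

```python
# Hypothetical job-exposure matrix: period -> (job, zone) -> silica (mg/m3)
jem = {
    "1970-1979": {("grinder", "foundry floor"): 0.25,
                  ("clerk", "office"): 0.01},
    "1980-1989": {("grinder", "foundry floor"): 0.10,  # after ventilation upgrade
                  ("clerk", "office"): 0.01},
}

def assign_exposure(period, job, zone):
    """Assign a worker an exposure level from job title and work zone."""
    return jem[period][(job, zone)]

level = assign_exposure("1970-1979", "grinder", "foundry floor")  # -> 0.25
```

A separate table per period captures workplace changes over time, so the same job title can map to different exposures in different decades.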
Direct exposure measurements are typically done at fixed locations (area sampling) or on individual subjects (personal sampling). In air pollution monitoring in most major cities, fixed-site sampling has been commonly used over many years. Daily and hourly measurements of O3 (ozone), NOx (nitrogen oxides), SOx (sulfur oxides), and PM (particulate matter) track compliance with regulations. These measurements also provide information used to warn the public of dangerous peaks and to support health research.
Personal exposure monitoring generally involves placing a small, portable sampling device on a person to collect a sample within his or her breathing zone during daily activities. Personal monitoring started in the workplace in the early 1960s and has become routine practice in industrial hygiene assessments. Personal sampling is considered a reference method for assessment because it accounts for time, location, and the person's behavior; all of these can have a profound effect on exposures.
The breathing zone air sample can be analyzed for the contaminant of interest, either in real time or with a time-integrated sample collection device. With direct reading, real-time methods, a person's exposure profile can be observed and synchronized with a video recording. The video is overlaid with a bar graph or other indicator from the real-time monitor, demonstrating how specific activities contribute to exposures. This technique, known as video exposure monitoring (VEM), is a powerful training tool and aid for developing control strategies. Similarly, real-time monitors can be wirelessly paired with smartphones or GPS devices to evaluate time-activity contributions to exposure.
While the focus here is on inhalation exposures, generalization to other personal exposures is possible. For example, duplicate diet sampling, in which identical meals are collected from each individual in a study, is a method of quantifying ingestion exposures from food.
The Food Quality Protection Act of 1996 expanded the single contaminant approach to exposure assessment and introduced new concepts of aggregate exposure and cumulative exposure (also see Chapter 27). Aggregate exposure refers to assessing all routes and exposure pathways simultaneously for a single compound. An agricultural worker may have exposure to a single pesticide through both inhalation and skin absorption while picking crops. In addition, he may ingest food containing residues of the same pesticide. The clothing he wore at work and brought home may also carry residual contamination. Counting only one route and pathway, such as inhalation during spraying, underestimates his total exposure, perhaps substantially. Aggregate exposure assessment over all routes and pathways simultaneously is necessary in order to quantify the hazard accurately.
The concept of cumulative exposure extends this approach to multiple compounds that have similar biological mechanisms. Cumulative exposure is defined as aggregate exposure to a series of compounds (or nonchemical exposures) that affect health through similar mechanisms. A common example of cumulative exposure focuses on organophosphate (OP) pesticides such as chlorpyrifos, malathion, and diazinon (see Tox Box 18.1, in Chapter 18). These compounds share a common biological mechanism of toxicity: inhibition of the enzyme acetylcholinesterase, which is necessary for normal transmission of nerve signals. OP pesticides interfere with this process, resulting in continued firing of the neuron. A cumulative exposure assessment for all OP pesticides is needed to understand the impact of exposure not just to a single OP but rather to all pesticides operating through acetylcholinesterase inhibition. This requires measurement either of all the compounds simultaneously or of some biological effect, such as acetylcholinesterase inhibition in exposed people, that integrates over all exposures. Naturally such assessments are complex, especially when extended to nonchemical exposures, such as stress or malnutrition, that may compound the effects of chemical exposures.
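One common way to combine compounds that share a mechanism is to scale each compound's dose by a relative potency factor (RPF) relative to an index chemical and then sum the scaled doses. The sketch below uses made-up RPFs and doses, with chlorpyrifos as a hypothetical index chemical, purely for illustration:

```python
# Illustrative cumulative-exposure sketch using relative potency factors
# (RPFs): each OP dose is converted to an index-chemical equivalent and
# summed. All RPFs and doses are hypothetical.
rpf = {"chlorpyrifos": 1.0, "malathion": 0.1, "diazinon": 0.5}
dose = {"chlorpyrifos": 0.001, "malathion": 0.020, "diazinon": 0.004}  # mg/kg-day

cumulative = sum(rpf[c] * dose[c] for c in dose)
print(f"Cumulative index-equivalent dose: {cumulative:.4f} mg/kg-day")
```

Note that no single compound's dose tells the whole story here; the summed index-equivalent dose is what corresponds to the total acetylcholinesterase-inhibiting burden.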
So far we have focused on sampling environmental media from places where people are likely to contact a contaminant. However, it is possible to measure contaminant levels in humans themselves, and thereby verify that exposures occurred. Biological markers (sometimes referred to as biomarkers) of exposure are used for this purpose. Biomarkers are collected by sampling body fluids or tissues such as exhaled breath, urine, blood, feces, or hair. These biosamples are analyzed for the contaminant of interest (called the parent compound), a related compound (usually a metabolite), or a biological response known to reflect exposure. For example, cotinine (a metabolite of nicotine) can be measured in urine to quantify exposure to tobacco smoke, and carboxyhemoglobin levels in blood are measured to quantify exposure to carbon monoxide. Pesticides provide another example. Blood samples can be analyzed for OP parent compounds, urine samples can be analyzed for OP metabolites such as dialkyl phosphates, or acetylcholinesterase enzyme activity in serum can provide a measure of the biological effect.
Biomarkers of exposure have important advantages and disadvantages. Detection of an exposure biomarker proves that absorption of the compound has occurred. Other environmental measurements cannot confirm this conclusion. Furthermore, biomarkers account for bioavailability, which describes the ability of a compound to pass across the contact boundary into the body through, for example, ingestion. Biomarkers also integrate over all routes of exposure and therefore are useful for aggregate assessments. For these reasons, biomonitoring has been called the “gold standard” for exposure assessment (Sexton, Needham, & Pirkle, 2004).
Biomonitoring has evolved rapidly to consider biological indicators not only of exposure but also of biological response to the exposures. Many of these approaches are called omics because they draw on a suite of molecular biology techniques with that suffix: genomics, proteomics, and metabolomics (as explored in Chapter 7). These techniques employ large-scale array technology to screen for hundreds or thousands of genes, proteins, and metabolites that are associated with particular exposures or in some cases correlate with disease risk.
Patterns or features associated with clusters of these proteins and biological molecules can be derived with bioinformatics and multivariate statistical analysis to provide insights into how an organism dynamically responds to environmental exposures. These responses may persist over various time scales: days, months, or years. Together, these biomolecular features, reflecting gene expression, cellular functions, and metabolism, become part of a person's exposome (the totality of exposure events that affect the person) (Wild, 2005). For example, exposures to OP pesticides can cause a nonspecific, short-term depression in enzyme activity that may last a few days; however, this exposure can also give rise to pesticide-protein adducts that persist in the blood for more than a month and that can identify the specific pesticides (Marsillach, Costa, & Furlong, 2013). Some exposures may also induce epigenetic changes that persist for years or become permanent (Kyrtopoulos, 2013). For example, smoking and exposure to tobacco smoke both result in some epigenetic changes that revert to normal over time and others that can persist for decades (Bossé et al., 2012).
However, interpreting measurements of biomarkers in relation to the timing of the actual exposure events can be complex. Some compounds produce long-lived biomarkers that reflect months or years of exposure; other compounds produce short-lived biomarkers that may correspond only to exposures that occurred a few minutes earlier. In addition, multiple metabolic pathways usually influence a biomarker; several related chemicals may share a pathway, and a shared pathway can become saturated when multiple exposures use it, making it difficult to determine which specific chemical caused the exposure. To understand and apply biomarkers properly, the exposure scientist should be well versed in the pharmacokinetics of the compound: that is, how it is processed in the body (see Chapter 6). Collaboration with a toxicologist can be helpful for understanding such problems.
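A simple one-compartment elimination model illustrates why the half-life of a biomarker determines the exposure window it can capture; the half-lives below are hypothetical:

```python
import math

# One-compartment elimination sketch: a biomarker decays exponentially with
# a compound-specific half-life, so a short-lived biomarker reflects only
# very recent exposure. Half-lives here are hypothetical.
def remaining_fraction(hours_since_exposure, half_life_hours):
    """Fraction of the biomarker still present after the given time."""
    return math.exp(-math.log(2) * hours_since_exposure / half_life_hours)

# One day after exposure, a marker with a 2-hour half-life is essentially
# gone, while one with a 30-day half-life is nearly unchanged.
short_lived = remaining_fraction(24, 2)
long_lived = remaining_fraction(24, 30 * 24)
print(f"Short-lived marker after 24 h: {short_lived:.6f}")
print(f"Long-lived marker after 24 h:  {long_lived:.3f}")
```

In this sketch the short-lived marker would miss any exposure more than a day old, whereas the long-lived marker integrates exposures over many weeks.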
The ability of biomarkers to integrate exposure over all routes and pathways, a major strength, also can be a major shortcoming. For example, once a molecule such as a pesticide enters the body and is metabolized, the source is no longer identifiable. The exposure may have come from inhalation, dermal contact, or ingestion of residues in the food supply. A related problem occurs when individuals have different abilities to metabolize a contaminant, due to a genetic polymorphism (see Chapter 7). This means that identical exposures can generate different observed metabolite levels in individuals and also convey different risks of health effects. Failure to recognize these genetic differences may lead to misclassification both of exposures and of an individual's risk of harm.
Given the valuable insights from such assessments, the use of biomarkers of exposure and effect is likely to increase in the future. New, more accurate biomarkers of exposure are being developed and are appearing in the literature. Current research suggests that panels of biomarkers, measuring multiple markers at once, may be able to overcome some of the shortcomings listed here while giving new and powerful insights into mechanisms of toxicity and control strategies for exposure (Ryan et al., 2007; Cohen Hubal et al., 2010; Cochran & Driver, 2012).
Ingestion and skin absorption are important routes of exposure in many circumstances. These two routes also pose special challenges for exposure assessment. A duplicate diet study is a direct approach to assessing ingestion exposure. Duplicate portions of the food eaten by test subjects are collected and analyzed for contaminant levels. Typically all of the food eaten is weighed and homogenized to create a single bulk sample. An aliquot of the sample is analyzed for contaminant concentration (mass/mass). The exposure is computed by multiplying this concentration by the amount eaten.
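The computation itself is straightforward; the concentration and intake values below are hypothetical:

```python
# Minimal duplicate-diet calculation: the contaminant concentration measured
# in an aliquot of the homogenized bulk sample is multiplied by the total
# weighed mass of food eaten. Values are hypothetical.
concentration_ug_per_kg = 2.5   # measured in the analyzed aliquot
food_eaten_kg = 1.8             # total mass of the duplicate portions

ingestion_exposure_ug = concentration_ug_per_kg * food_eaten_kg
print(f"Daily ingestion exposure: {ingestion_exposure_ug:.2f} µg")
```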
Dietary diaries offer another and indirect approach for assessment of ingestion exposures. Each subject keeps a food diary listing foods eaten and portion sizes. The researcher purchases these foods at local grocery stores for later analysis. A data set is compiled listing each type of food and the contaminant concentrations. The food diary data from each participant can be combined with the concentration data to estimate the amount of contaminant ingested.
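In code, the diary-based estimate is a sum of portion sizes multiplied by the matching concentration for each food; all foods and values below are hypothetical:

```python
# Sketch of the indirect (food-diary) approach: diary entries (food, grams
# eaten) are matched against a concentration table built from store-bought
# samples of the same foods. All values are hypothetical.
concentration_ug_per_g = {"apple": 0.01, "bread": 0.002, "lettuce": 0.05}

diary = [("apple", 150), ("bread", 80), ("lettuce", 40)]  # (food, grams)

exposure_ug = sum(concentration_ug_per_g[food] * grams for food, grams in diary)
print(f"Estimated ingestion exposure: {exposure_ug:.2f} µg")
```

Because the concentration table is built once, the same table can be reused for every participant's diary, which is what makes the method scale to large studies.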
Food diaries are much easier to administer than duplicate diet studies and so can be implemented on a large scale. Fewer samples have to be analyzed, because each food item needs to be analyzed only once; the resulting concentration data can then be reused for every participant. However, because the foods actually consumed are never analyzed, they may differ from the purchased samples, introducing error into the exposure estimates due to variability in contaminant concentrations among individual food items.
Dermal exposures can present unique challenges. Patch sampling is one direct assessment technique; an adsorbent material patch is placed on the skin or outer garments. The subject then carries out normal activities, creating exposure to the contaminant. The patches intercept the contaminant of interest before it reaches the skin. Following exposure the patches are removed and analyzed for contaminant mass. Knowing the area of the patch relative to the total exposed skin surface area, one can estimate overall skin exposure to the arms, legs, torso, and so forth. The total exposure is estimated by summing over all exposed areas of the body. Tape stripping is another technique. A special adhesive tape is applied to a known area of the skin, and then stripped away to remove a single layer of exposed skin cells. The tape samples are analyzed like the patch samples described previously, and used to compute total exposure over the skin area. Tape stripping has the advantage that it measures what gets on the skin; repeated stripping in the same area can measure the depth of contaminant penetration.
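The area-scaling arithmetic behind patch sampling can be sketched as follows; the body regions, skin areas, and patch loadings are all hypothetical:

```python
# Patch-sampling sketch: the contaminant mass found on each patch is scaled
# up by the ratio of the body region's exposed skin area to the patch area,
# and the region estimates are summed. All values are hypothetical.
patch_area_cm2 = 100.0

# (region, exposed skin area in cm^2, contaminant mass on patch in µg)
regions = [
    ("forearms", 1200.0, 5.0),
    ("lower_legs", 2000.0, 2.0),
    ("torso", 3500.0, 0.5),
]

total_ug = sum(area / patch_area_cm2 * mass for _, area, mass in regions)
print(f"Estimated total dermal exposure: {total_ug:.1f} µg")
```

The key assumption, of course, is that each patch's loading is representative of its whole region; a nonuniform deposit, such as a splash that misses the patch, violates it.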
A limitation of both the patch and tape stripping methods is that they sample only a small portion of the overall exposed skin area. This makes it possible, indeed likely, that some exposed areas will be missed. This can lead to underestimation of exposure. The fluorescent tracer method used in pesticide sampling is an alternative that samples all skin areas. A nontoxic fluorescent tracer is added to the pesticide spray mix, and later the worker is video-imaged under ultraviolet light, revealing areas where the tracer deposited on the skin. The tracer technique can be very useful for both field studies and training simulations; it also has been used to assess potential biological contamination and hand-washing effectiveness.
The paradigm of industrial hygiene, based on the anticipation, recognition, evaluation, and control of workplace hazards, provides a framework for more general exposure assessment. Many tools of traditional industrial hygiene are transferable to exposure science, which spans both occupational and community settings. But exposure science requires some new tools as well. Sampling strategies, compliance with monitoring protocols, and field implementation are often more difficult in community exposure assessment studies. Community studies also call for statistical sampling techniques much like those used in epidemiological studies. Exposure science is a rapidly growing area, ripe for contributions from professionals in many areas of environmental health.
In addition, the Centers for Disease Control and Prevention maintains a Web site with useful information on biomonitoring: http://www.cdc.gov/biomonitoring
In addition to Kyrtopoulos (2013), listed in the References, see the following: