On July 16, 1945, in the Jornada del Muerto (Journey of Death) desert near Alamogordo, New Mexico, the fiery explosion of the Trinity test—the first atomic bomb—generated a light brighter than any ever seen on Earth. As it dimmed, it revealed a mushroom cloud of vaporized water and debris that grew thousands of feet into the air. J. Robert Oppenheimer (1904–1967), who more than anyone else was responsible for building the weapon, wrote afterward that watching the explosion brought to mind two lines from the sacred Hindu scripture the Bhagavad Gita: “If the radiance of a thousand suns were to burst into the sky that would be like the splendor of the Mighty One.” And: “I am become Death, the shatterer of worlds.” (It is perhaps more likely that his first thought was, Wow! Thank God, it worked!)
That shattering burst of energy was also an act of creation: it produced radioactive forms of natural elements that—apart from laboratory work during the bomb’s development—had never before existed on Earth, including cesium-137, iodine-131, and strontium-90. During the months that followed, these newly created radionuclides circled the globe and silently entered the bodies of everyone alive. And because some of these radionuclides remain radioactive for hundreds or thousands of years, the children of these people, their children, and all humans from that date until our species ceases to exist will have radionuclides created at the Trinity explosion in their bodies. The same is true for the radionuclides released by the more than 450 atmospheric nuclear weapons tests carried out by the United States, the Soviet Union, Britain, France, and China between 1945 and 1980, and from several nuclear power facility accidents. Of course, the amounts of radionuclides released from each of these sources differ vastly. Atomic weapons and nuclear power facility accidents should not be treated as comparable: the quantities of radionuclides released differ greatly, the radionuclides are not uniformly distributed over the Earth, and different people have different likelihoods of encountering them.
Some of the radionuclides released by nuclear weapons testing and by nuclear power facility accidents can cause cancer. But some of the same radionuclides are used to diagnose and treat cancers and save lives. What is the balance between the potential harm and benefit posed by radionuclides and by all forms of radiation?
To determine whether this balance favors harm or benefit, it is necessary to know what radiation dose a person has received. This is not as simple as it might seem (in fact it is exceedingly complex, even for radiation experts), so we ask the reader please to bear with the following several pages of technical information, knowing that in the end all you really need to remember is one technical term: millisievert (mSv), named for the Swedish medical physicist Rolf Maximilian Sievert (1896–1966), who did pioneering work on the biological effects of radiation exposure. A sievert (Sv) is a unit of radiation dose; a millisievert is one-thousandth of a sievert. Each year we generally receive a few millisieverts. People in the United States on average receive 6.2 mSv of radiation annually.
Radioactivity is measured by the number of atoms decaying (losing energy by emitting radioactive particles and/or electromagnetic waves) in a certain amount of time. The disappearance of a radionuclide is measured by its half-life: how long it takes for one-half of its atoms to decay. Total disappearance can take a long time, as something can be halved almost forever, until only one atom remains and then it decays. But most of the starting radioactivity is gone after about 10 half-lives, when only about one-thousandth of the starting radioactivity remains.
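The half-life arithmetic above can be sketched in a few lines; the function below is ours, written for illustration, not a calculation from the text.

```python
# Fraction of a radionuclide's starting radioactivity that remains
# after some elapsed time, using the half-life rule described above.

def fraction_remaining(elapsed, half_life):
    """Both arguments must be in the same units (days, years, ...)."""
    return 0.5 ** (elapsed / half_life)

# After 10 half-lives about one-thousandth remains (exactly 1/1024):
print(fraction_remaining(10, 1))              # → 0.0009765625

# Iodine-131 (8-day half-life) after roughly three months (80 days):
print(round(fraction_remaining(80, 8), 6))    # → 0.000977
```

The second example is why, as we will see, iodine-131 deposited after an accident is effectively gone within about three months.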
These measurements have many names, depending on what you want to quantify. At first it is easy to mistake which unit to use, so one can end up comparing the radioactive equivalent of eels to elephants. It is also easy to mistake amounts: 1 microsievert (a millionth of a sievert) is a thousand times smaller than 1 millisievert (a thousandth of a sievert), yet several news reports of the Fukushima Daiichi nuclear power facility accident confused these units.
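To keep the eels and elephants apart, the prefix relationships can be written out explicitly; this small conversion helper is our sketch, not something from the text.

```python
# 1 Sv = 1,000 mSv = 1,000,000 microsieverts: a sketch for converting
# between the dose units that news reports sometimes confuse.

SV_PER_UNIT = {"Sv": 1.0, "mSv": 1e-3, "uSv": 1e-6}

def convert(value, from_unit, to_unit):
    """Convert a dose between Sv, mSv, and uSv (microsieverts)."""
    return value * SV_PER_UNIT[from_unit] / SV_PER_UNIT[to_unit]

print(round(convert(1, "mSv", "uSv"), 6))   # → 1000.0 (1 mSv is 1,000 uSv)
print(round(convert(6.2, "mSv", "Sv"), 6))  # → 0.0062 (average annual US dose)
```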
In estimating how a radiation exposure might affect us, scientists need to consider the amount of radiation we are exposed to; what type of radiation it is; how much of it gets into the various cells, tissues, and organs in our body; and how susceptible these tissues and organs are to radiation-induced damage. Some cells, like bone marrow, skin, and gastrointestinal tract cells, are especially sensitive to damage from radiation. One reason is that they divide frequently—rapidly dividing cells are more sensitive to radiation that damages DNA. For example, a normal person needs to produce about 3 billion red blood cells each day to stay healthy. Other cells, predominantly those that divide infrequently, if ever, like heart, liver, and brain cells, are relatively resistant to radiation-induced damage.
So to determine the amount of radiation in an exposure, we must calculate the quantity of radiation emitted or released from a source, be it a CT scanner, a radiation therapy machine, a nuclear weapon, a failed nuclear power facility, or a radioisotope injected for a PET scan.
But calculating the quantity of radiation is complex. Some diagnostic radiation machines emit electromagnetic waves such as X-rays or particles such as protons, neutrons, or electrons. Other radiation-related activities, like fissioning uranium-235 or plutonium-239 in a nuclear weapon, emit gamma rays and neutrons. Most fission products emit electrons and gamma rays. The explosion of the Chernobyl nuclear reactor released into the environment more than 200 radionuclides in diverse physical and chemical forms, including radioactive gases such as xenon-133 and iodine-124 and -131, as well as radioactive particles. These gases rapidly disperse into the atmosphere. The radioactive particles also disperse across a very broad area—unless it happens to rain when the radioactive cloud passes over you and particles of cesium-137 and strontium-90 fall to the ground with the raindrops.
Unfortunately, it was raining when the radioactive plume from the 1986 Chernobyl accident containing particles with iodine-131 and cesium-137 passed over Scotland. Consequently, substantial amounts of these radionuclides landed on grass. The grass was subsequently eaten by grazing animals, especially sheep, and those radionuclides were incorporated into their bodies and secreted in their milk. The iodine-131, with an 8-day half-life, was gone in about three months. But the cesium-137 was concentrated in the meat of the sheep, and with its half-life of 30 years, it stayed around for the lifetime of the sheep. The level of cesium-137 in many of these animals exceeded government safety standards; consequently many sheep were killed and buried, and their meat was quarantined from human consumption.
For a radioactive substance or radionuclide, like a gram of radium-226 or a gram of cesium-137, we can compute how much radiation it releases by considering the number of spontaneous disintegrations (decays) that occur in the nuclei of the atoms in that gram over a certain time interval, for example, one second. This rate of decay, referred to as the amount of radioactivity in radionuclides, is measured in units called becquerels (Bq), named after the nineteenth-century French physicist Antoine-Henri Becquerel (1852–1908). One becquerel equals one nuclear disintegration per second. Because this is an extremely small quantity, scientists often speak of thousands of becquerels (a kilobecquerel, kBq), millions of becquerels (a megabecquerel, MBq), a million million becquerels (a terabecquerel, TBq), or even a billion billion becquerels (an exabecquerel, EBq). It’s like an expression of speed. If a becquerel is a person walking 1 mile per hour, a kilobecquerel is like the same person walking (or rocketing) 1,000 miles per hour, and so forth. However, the quantity of becquerels a substance contains is not the only consideration for human health. Because different nuclear disintegrations release different electromagnetic waves and particles, the same quantity of becquerels released can have substantially different potential health consequences. Also, not all radioactive substances are equally radioactive. When we compare similar quantities of thorium-230 and uranium-234, for example, the thorium-230 is about 1 million times more radioactive—it has 1 million times more disintegrations per second.
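The becquerel prefixes above climb by factors of a thousand (and then a million at each of the last two jumps); a small lookup table makes the magnitudes concrete. The helper function is ours, for illustration only.

```python
# Becquerel prefixes from the text: kilo (thousand), mega (million),
# tera (a million million), and exa (a billion billion).

BQ_PER_UNIT = {"Bq": 1, "kBq": 10**3, "MBq": 10**6,
               "TBq": 10**12, "EBq": 10**18}

def to_becquerels(value, unit):
    """Express an activity in plain becquerels (disintegrations per second)."""
    return value * BQ_PER_UNIT[unit]

print(to_becquerels(1, "kBq"))   # → 1000 disintegrations per second
print(to_becquerels(2, "TBq"))   # → 2000000000000
```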
Once we have ascertained the amount of radioactivity, we must determine how much radioactive energy it deposits into something. That “something” can be the air, another substance, or our bodies.
Then we must determine how this radioactivity interacts with humans. This is referred to as dose, which is quite different from emitted radiation. Imagine a gram of cesium-137 inside a lead box. It is releasing radiation in the form of electrons and gamma rays, but no one is being exposed to it, because these radiations cannot penetrate the lead. So the dose of radiation to any person is zero, and it poses no harm to us. But if you are holding this same gram of cesium-137 in your hand, the same electrons and gamma rays that it emits through the spontaneous decay of its nuclei will interact with the skin, muscle, and nerve cells in that hand. And because gamma rays can travel considerable distances and pass through many substances, other parts of your body will be exposed to radiation, although not uniformly. As these gamma rays pass through your cells, they will deposit some of their energy within each cell they strike. This amount of energy is the radiation dose to the cell.
Another concept in radiation dosimetry is the radiation absorbed dose, which is expressed in units of grays (Gy), after the British physicist Louis Harold Gray (1905–1965). A gray corresponds to one joule of radiation energy absorbed per kilogram of tissue. We will skip over the technical details other than to say that the quantity of grays absorbed into a tissue or organ (adjusted for some biological factors) can be converted to a number of sieverts, the unit used to estimate risk of harm, like cancer, from radiation exposure.
Finally, determining the effective dose, measured in sieverts, considers two issues. First, not all types of radiation are equally damaging—for example, a dose of neutrons absorbed is much more damaging than the same dose of X-rays. Second, different cells, tissues, and organs in the body, as we saw, have different sensitivities to radiation damage. Effective dose adjusts for these variables and thereby gives a better estimate of the potential harmful consequences of a radiation exposure. This, at last, concludes our discussion of units of radioactivity. But please try to remember millisieverts, as we will translate everything into them from now on.
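The two adjustments that turn absorbed dose (grays) into effective dose (sieverts) amount to a weighted sum. A minimal sketch follows; the weighting-factor values are illustrative ICRP-style numbers we have chosen for the example, not figures from the text, and the tissue list is deliberately incomplete.

```python
# Effective dose sketch: absorbed dose (Gy) x radiation weighting factor
# x tissue weighting factor, summed over the exposed tissues.
# The factor values below are illustrative assumptions, not a regulatory table.

RADIATION_WEIGHT = {"xray": 1.0, "gamma": 1.0, "alpha": 20.0}
TISSUE_WEIGHT = {"bone_marrow": 0.12, "lung": 0.12, "liver": 0.04}

def effective_dose_sv(absorbed_gy_by_tissue, radiation_type):
    """Effective dose in sieverts from {tissue: absorbed dose in Gy}."""
    w_r = RADIATION_WEIGHT[radiation_type]
    return sum(w_r * gy * TISSUE_WEIGHT[tissue]
               for tissue, gy in absorbed_gy_by_tissue.items())

# 10 mGy (0.010 Gy) of gamma rays to the lung alone:
print(round(effective_dose_sv({"lung": 0.010}, "gamma"), 6))  # → 0.0012, i.e. 1.2 mSv
```

The same absorbed dose of alpha particles would count twenty times more in this sketch, which is the first of the two adjustments described above; the tissue weights capture the second.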
Having slogged through so many technicalities, let’s examine how scientists analyze radiation emitted, energy absorbed, and biological damage from that radioactivity, so that we can make the one judgment that really matters: What am I exposed to, and is it bad for me? We’ll use a basketball analogy, for simplicity.
When a player sends a basketball through the hoop, the number of points awarded can vary. A free throw is worth 1 point, a basket shot from inside a circumscribed area is worth 2 points, and a basket shot from outside that area is worth 3. The team’s score in a game is not the number of times its players sent the ball through the hoop but the total of the points awarded for those baskets. It’s the same with measuring radioactivity: the amount you are exposed to is not necessarily the amount that you will absorb, and that is not necessarily directly correlated with the amount of harm. Determining that final harm score means weighing and balancing several factors.
If there were a direct correlation between a specific amount of exposure and the onset of disease, a simple chart would clarify things for you. But the relationship between radiation and disease is not so simple.
The conventional approach to determining a person’s cancer risk from a radiation exposure is to express the range of possible effects of the dose in the scientifically accurate but difficult-to-understand units we’ve detailed. When there is a nuclear or radiation accident, public health authorities often give information in terms of what radiation dose people received (or will receive in the future) and/or how much radioactivity is in something they may encounter, such as food or water. They then compare these doses or amounts of radioactivity to a benchmark, such as the normal background radiation dose, the dose a nuclear power facility worker receives annually, or the regulatory limit or threshold for radioactivity in food or water.
Such information, given to people who are not radiation scientists or physicians, is likely to be uninformative at best and misleading at worst, and it is at once confusing and simplistic. The implication is that if you receive a dose similar to or less than your normal background dose, or less than a regulatory limit for food or water, you need not worry. For example, if the regulatory limit for radioactivity in milk is 500 Bq per liter and the milk you are drinking contains 350 Bq per liter, the implication is that you are not at risk.
But things are not so simple. For any radiation dose, the risk of getting cancer also depends on one’s age at the time of exposure, estimated remaining life span, exposure to other cancer-causing agents (like cigarette smoke), concurrent health problems that can be exacerbated by radiation, and other complicated variables. Simply put, the implications for an eighty-year-old exposed to a given dose of radiation are entirely different from those for a three-year-old who receives exactly the same dose.
Assessing risk requires statistical analyses. You cannot rely only on dose to express a person’s risk of getting cancer, because dose is only an intermediate quantity between their radiation exposure and their cancer risk. A more helpful way to link cancer risk to exposure is to specify a person’s lifetime risk of cancer regardless of the cause; specify the additional lifetime risk resulting only from a specific radiation exposure; estimate future cancer risk for persons exposed in the past (or who soon will be exposed) and who are currently free of cancer, radiation related or not; or estimate the likely increase in numbers of cancers in an exposed population such as people evacuated from Fukushima.
When we talk about the dangers of radiation, we are usually referring to ionizing radiations, which can alter the structure of atoms, molecules, and chemicals in our cells and cause cancers. Most data suggest that exposure to nonionizing radiations (except UV), like those from TVs, computer screens, high-voltage electrical transmission wires, and the like, is not harmful. This area is controversial and conclusions may change, but the adverse effects of nonionizing radiations, if any, are unquestionably small compared to the proven harmful effects of ionizing radiations such as neutrons and gamma rays. The challenge in considering risk of illness from a new exposure to an ionizing radiation—say, from a radiation accident—is to compare it to voluntary and involuntary cancer and noncancer risks in everyday life, like driving a car, riding a motorcycle, flying in a jet aircraft, or going into a basement containing radon gas. By looking at the whole picture, we can weigh the cancer risk from a radiation exposure and decide whether a past exposure is important or whether a future exposure is acceptable.
Equally important, we must compare the risk of cancer (and the uncertainty surrounding it) with potential alternatives and potential benefits. For instance, someone who has a CT angiogram scan of his heart is exposed to about one-tenth the amount of radiation as the average survivor of the Hiroshima and Nagasaki A-bombs. For someone at risk of a heart attack and sudden death, this level may be acceptable, especially if a medical intervention based on results of the scan can substantially reduce the chance of sudden death. But someone who is in no immediate danger or who has no effective intervention and just hopes for reassuring information might decide against the test. A person having six PET scans to look for cancer receives about the same dose as an atomic bomb survivor. Some people undergo several PET scans for different reasons (see chapter 6).
The opinions of scientists and physicians regarding consequences of radiation exposures, such as after a nuclear power facility accident, sometimes seem polarized, and it may be difficult for people to know which viewpoint, if any, is correct. For example, some experts may suggest that hundreds, thousands, or even hundreds of thousands of cancers, birth defects, and genetic abnormalities may develop as the result of radiation over several decades post-accident, whereas others estimate that few, if any, will result. But if we exclude experts who take extreme positions (of whom there are many and who seem to be the focus of media attention), we find that knowledgeable scientists agree more than is immediately apparent.
The sources of uncertainty in estimating consequences of radiation exposure are many, but we will highlight only a few. First, is there a threshold or trigger point, beyond which radiation exposure can increase the risk of developing cancer? Scientists agree that above a certain dose (usually about 50 or 100 mSv) there is a linear relationship between radiation dose and cancer risk: the higher the dose, the greater the risk. But they disagree heatedly over whether there is an increased cancer risk from lower radiation doses, for several reasons. For one, the increased risk, if any, from these low doses may be so small that it would take studies of millions or even billions of people to be certain such a risk exists. On the other hand, data from the atomic bomb survivors, nuclear industry workers, X-ray technicians, children receiving CT scans, and children exposed to background gamma radiations (and perhaps radon) are consistent with an increased cancer risk even at the lowest doses received. Even if many other epidemiological studies show no increased cancer risk, we always need to remember that the inability to detect an increased cancer risk in a population exposed to radiation, even a large population, is not proof there is no risk.
Despite this uncertainty, scientists and regulatory agencies generally agree to assume that even a low dose of radiation is potentially harmful and that voluntary radiation exposures should be considered in the context of a potential benefit and possible risk. This linear relationship between any radiation exposure and risk is referred to as the linear, no-threshold radiation-dose hypothesis.
Opponents of this hypothesis typically cite the fact that very large studies of nuclear industry workers, radiologists, and populations living near nuclear facilities have, in general, failed to show a convincing increase in cancers or other adverse health effects. The exceptions, which we just discussed, inevitably engender the greatest media attention, sort of man-bites-dog versus dog-bites-man. As evidence of this ongoing controversy, the National Academy of Sciences has recommended that the Nuclear Regulatory Commission undertake a more modern and sophisticated study, because of concerns about technical and/or statistical flaws in earlier studies. Pilot studies have been proposed, but whether they will be carried out or, if they are, whether they will result in a large-scale reexamination of this question is unknown.
Opponents of the linear no-threshold radiation-dose hypothesis also argue that globally people are exposed to vastly different doses of background radiation, sometimes a tenfold difference, with no detectable difference in cancer rates. For example, residents of Ramsar, Iran, live near hot springs containing high levels of radium and radon gas; they receive more than 40 times as much background radiation annually as someone living in New York or London, yet they have no special health problems. But we lack specific estimated radiation doses for most people living in Ramsar, and additional unaccounted-for confounding variables may be present.
Some data, somewhat controversial, even suggest exposure to low doses of radiation may have health benefits. This idea is known as hormesis. But very few scientific data support hormesis, and most scientists are unconvinced that such a benefit exists. However, radiation exposure may have complex and competing effects on cancer risk. For example, although radiation-induced mutations can cause cancer, radiation may decrease cancer risk by killing other potential cancer cells. Also, radiation may affect the immune system and thereby increase or decrease cancer risk. We cannot discern these individual and competing effects in humans and are consequently left to determine the net effect by comparing cancer incidences in persons exposed or not exposed to radiation. Confounding this issue even further is the observation that people with different genetic backgrounds may have different cancer risks from the same radiation dose.
A second source of uncertainty comes from the difficulty of comparing the consequences of a dose of radiation given over a short interval (for example, instantaneously for the atomic bomb survivors) with the same dose of radiation given over a protracted interval, perhaps days, months, years, or even decades. Most of our knowledge of the adverse health effects of radiation comes from A-bomb survivors or persons who received radiation over a relatively brief interval in a medical context, like radiation therapy for cancer, typically given over a few weeks. Many scientists argue that a radiation dose’s adverse effects are much less if the dose is given over a long interval. However, others argue that the same dose given over a long interval has a similar or even a greater effect. Some recent studies suggest that nuclear workers receiving a dose of at least 200 mSv over more than a decade have an increased risk of cancer. Recent research on nuclear workers and a thorough review of publications suggest that prolonged exposure to low doses of radiation may be as harmful as (or perhaps more harmful than) the same dose given over a short interval. Estimates of adverse health effects from the same dose of acute versus chronic radiation exposure cover a very broad range; prolonged exposures have been estimated to be anywhere from 4 times more harmful to 4 times less harmful than an acute exposure.
A third controversial issue is whether it is scientifically valid or appropriate to extrapolate a very small per-person risk—say, 1 in 10,000, 1 in 100,000, or 1 in 1 million—to a very large population. When one does so, individual risks that some might consider trivial are found to result in estimates of thousands of excess cancers. For example, is it valid to take the very small per-exposure risk associated with background scatter radiation from an airport security scanner and multiply it by the millions of passengers screened each day to estimate a number of cancers in American and European populations? Some scientists argue it is valid and, in fact, necessary to determine risk-to-benefit ratio for screening. Others, like the Health Physics Society, warn that this practice is scientifically invalid for radiation doses below 100 mSv. Nevertheless, this calculation may not be unreasonable if the uncertainty in the risk estimate is disclosed.
Fundamental to this issue is the concept of collective dose, which is the individual dose summed over the entire exposed population, or the average dose in the exposed population multiplied by the total number of persons exposed. The linear no-threshold radiation-dose hypothesis assumes that the consequences of a high radiation dose given to a small number of persons and a small dose given to a large number of persons are similar. Whether this is so is not known and may never be. If we accept that background radiation is associated with an increased cancer risk (not everyone does), then any additional radiation exposure is very likely to be associated with a further increase in cancer risk.
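The collective-dose definition in this paragraph is simple enough to state as code. The sketch below is ours, and the individual doses in it are invented solely to show the arithmetic.

```python
# Collective dose: the sum of individual doses (equivalently, the average
# dose times the number of people exposed), expressed in person-sieverts.
# The individual doses below are made up for illustration.

def collective_dose_person_sv(doses_msv):
    """Collective dose in person-Sv from a list of individual doses in mSv."""
    return sum(doses_msv) / 1000.0

doses = [1.2, 0.8, 2.5, 0.5]                       # four people's doses, in mSv
print(round(collective_dose_person_sv(doses), 6))  # → 0.005 person-Sv
```

Under the linear no-threshold hypothesis, this 0.005 person-Sv is treated the same whether one person received all 5 mSv or four people shared it, which is exactly the assumption whose validity is unknown.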
Most of us would like a precise estimate of our cancer risk from a radiation exposure, but the limitations of data, statistics, and our present state of knowledge do not lend themselves to precise estimates. The best we can do is estimate a range that most likely includes the correct number: say, between 500 and 2,000 extra cancers per 10 million people exposed to a range of radiation doses. For example, researchers estimate that among the approximately 170 million Americans who were alive during the Nevada atmospheric nuclear tests carried out between 1952 and 1957, between 11,000 and 270,000 extra thyroid cancers (mostly nonfatal) developed as a consequence of exposure to iodine-131. This is an important increase in thyroid cancers, but it is only a small proportion of the more than 2 million thyroid cancers diagnosed since 1952.
In short, most scientists and scientific organizations avoid (or should avoid) estimating precise numbers of events, like cancers, from radiation exposures. More often they suggest a possible range for these events. Sometimes these ranges are very large, perhaps ten- or hundredfold differences (for example, 10 to 1,000 cancers). Although people may wonder how the estimated ranges can be so large, these estimates reflect uncertainties in radiation dose, distribution, and potential biological consequences. The lower and upper ends of these estimates (10 to 1,000 cancers) show that experts disagree far less (if at all) than is sometimes emphasized in the media. Scientists can only weigh the evidence they have and reach the best possible conclusion, acknowledging that the whole truth is for now unknown and may even be unknowable.
With this in mind, we intend to set aside fears, some of which are baseless, and describe the effects of ionizing radiations by using the best data we have. Although our knowledge of radiobiology is not complete, the effects of radiation on human health have been studied extensively for decades, and we may know more about them than about the effects of most, if not all, other chemicals and toxins we are exposed to.