CHAPTER 2

RADIATION FROM DISCOVERY TO TODAY

A BRIEF HISTORY OF THE DISCOVERY OF RADIATION

In the late nineteenth century, radiation was an exciting curiosity; in the twentieth, scientists determined how to use it both as a boon and as a terror to mankind. In the past sixty years, our dose from man-made radiation has come to equal the dose we receive from natural background sources; medical diagnostic procedures and nuclear medicine now account for nearly half the dose that the average American receives each year. This figure is more than double what it was twenty years ago, and it continues to grow.

The nineteenth-century pioneers of radiation science were at first unaware of radiation’s inherent danger or benefit and instead were enthralled by its mysterious and seemingly magical effects. In 1879 Sir William Crookes (1832–1919), an Englishman, discovered what he called “radiant matter” in a vacuum-sealed electrified glass tube of his design, which was named for him.

Wilhelm Röntgen discovered X-rays during an experiment with a Crookes tube in November 1895. He had painted a screen with bits of fluorescent crystals for use in his next experiment; when he passed electricity through the tube, the rays it emitted struck the crystals and lit them up. The crystals glowed even after he covered the tube with black cardboard to keep out all visible and ultraviolet light. Röntgen knew that cathode rays travel only about 3 inches in the air, which meant that the fluorescence so far away could not have been caused by them, at least directly. These mysterious rays traveled through any paper and wood that he placed between the source and the screen; he put a variety of objects in front of the tube to see if there was anything they did not penetrate. On one occasion, he noticed the outline of the bones in his hand shining on the wall. He was so eager to study this curiosity that he moved into his lab, eating and sleeping there so he could work uninterrupted.

Uncertain what these powerful rays were, he decided to call them X-rays temporarily, using the mathematical symbol X for something unknown, hoping to come up with a more descriptive name once he learned more. He discovered that X-rays are created in a vacuum cathode tube when the electrical arc created by the current passing through the tube interacts with the residual gas inside it. An X-ray image is the result of the rays’ striking a solid object such as a bone; it shows the densest material as the lightest, which is why the shadows of the bones in an X-ray are lighter than the penumbra of the less dense flesh that surrounds them. (In 1901 Röntgen received the first Nobel Prize in Physics for this discovery.)

About two weeks after his initial discovery, Röntgen put the left hand of his wife, Anna Bertha, on a photographic plate and made the first X-ray picture, or röntgengram, complete with wedding ring. Frau Röntgen was less impressed by the science than she was terrified by the sight of her bones. “I have seen my death!” she reportedly exclaimed.

In December 1895, Röntgen published a paper entitled “On a New Kind of Rays: A Preliminary Communication.” Other scientists quickly joined in the research into these mysterious rays (at first also called Röntgen rays). In early 1896 Henri Becquerel, drawing on Röntgen’s work, accidentally discovered radioactivity, though he did not name it. In preparation for an experiment that required bright sunlight, he wrapped photographic plates in thick black cardboard to protect them from light and placed crystals of another fluorescent substance on top of them. But before a day sunny enough for him to perform the experiment came along, Becquerel noticed that the photographic plates already showed the image of the crystals. This eventually led to the realization that radiation can be spontaneously emitted, the characteristic now known as “radioactivity.”

Becquerel was the latest in a line of distinguished scientists. His grandfather Antoine César made advances in the fields of electricity and especially electrochemistry. His father, Alexandre, investigated solar radiation and phosphorescence. Henri followed in his father’s exploration of phosphorescence and the absorption of light by crystals. Less than three months after Röntgen’s discovery, Henri looked for a connection between X-rays and naturally occurring phosphorescence. Becquerel put uranium salts, which glow (phosphoresce) on exposure to light, near a photographic plate covered with light-opaque paper. (In chemistry, a salt is a compound that has been made electrically neutral.) The plate became fogged, which meant that something other than visible light had passed through the paper. After numerous experiments with uranium salts of differing composition produced the same reaction, Becquerel concluded this effect was a property of rays emanating from the uranium atoms. These rays were quickly named Becquerel rays. He later showed that these uranium rays differed from X-rays because exposure to an electric or magnetic field could deflect them, and because they gave an electrical charge to gases—they ionized them.

It was Marie Sklodowska Curie (1867–1934) who named radioactivity. In September 1897 she was looking for an idea for her Ph.D. thesis, and her husband, Pierre, the head of the laboratory at the Municipal School of Industrial Physics and Chemistry in Paris, suggested she investigate the curiosity described by Becquerel. Her experiments showed that the total radioactivity of some uranium and thorium salts was greater than the radioactivity from the uranium alone. This meant that there had to be something else in the salts emitting stronger radiation. Pierre and Marie then extracted from uranium salts tiny amounts of an until-then-unknown substance she called “radium” (after the Latin word for ray), which is more than a million times more radioactive than the same mass of uranium. She also discovered an element produced by the decay of radium, which she called polonium, after her native Poland. Radium and polonium are the source of most of the radioactivity in uranium ore.

Madame Curie studied the radioactivity of all compounds containing the known radioactive elements, and she discovered that thorium (named for the Norse god Thor) was also radioactive. She noticed that the strength of radiation from uranium could be measured exactly, and that no matter what compound it was in, the intensity of the radiation was proportional to the amount of uranium or thorium in that compound. This led her to the revolutionary realization that the ability of a substance to emit radiation does not depend on the arrangement of the atoms in a molecule. In 1903, for their discovery of natural radioactivity, the Curies shared the Nobel Prize in Physics with Becquerel. In 1911 Marie Curie received a second Nobel, this time in chemistry, for her isolation of radium and polonium and for her inquiry into their chemical properties. (In 1935 her daughter Irène Joliot-Curie and Irène’s husband, Frédéric Joliot, shared the Nobel Prize in Chemistry for their discovery that radioactivity can be induced in certain elements.)

But these important discoveries were not restricted to the Curies or to France. In 1899 and 1900, while studying radium, the New Zealander Ernest Rutherford (1871–1937), working at McGill University in Montreal, discovered alpha and beta particles. At the same time, the French physicist Paul Villard (1860–1934) discovered another form of rays that he termed “gamma rays.” In 1914 Rutherford would prove that gamma rays were a form of light similar to X-rays but with a far shorter wavelength, and thus more penetrating than the other rays and particles.

Also in 1900, British scientist Frederick Soddy (1877–1956) observed that radioactive elements spontaneously disintegrate into variants of the original; he later called these variants “isotopes,” from the Greek iso (equal) and topos (place). He also discovered that radioactive elements have what he called a half-life, and he did the first work on calculating the energy released during this decay.

At first the dangers of radiation were not apparent. In the early twentieth century, watches had radium dials that glowed in the dark. Tiny green dots of radium paint were applied with fine brushes by young girls, whose small, dexterous hands suited them to the work. These “Radium Girls” sat at a bench with a little pot of radium paint into which they dipped their brushes. To keep the dots small and perfectly rounded, the girls licked the brush tip to get a fine point. The body treats radium like calcium and deposits it in bone, so many of the girls soon developed cancer of the jaw—the nearest bone. (Because radium is absorbed by bone, it can also harm bone marrow, causing severe anemia.) Many of the young women were disfigured, and some died. When five dying Radium Girls filed a suit against their employer, the United States Radium Corporation responded by accusing them of having syphilis. But as the case unfolded, it became clear that the employers had long taken precautions to protect themselves against radiation and had done all they could to cover up the danger, even telling the workers that it was safe to use their tongues to make points on the brushes. The case helped establish the right of employees to sue their employers for occupational diseases. Radium paint, applied in safer ways, was used on dials until the 1960s.

Marie Curie, who constantly handled radioactive materials unprotected and worked as a radiologist using unshielded early radiology devices, died in 1934, probably of radiation-induced aplastic anemia, or bone marrow failure, a condition in which the bone marrow does not produce enough new blood cells to replace those that have lived out their normal life span. Today her laboratory papers and cookbook still have high levels of radioactivity and are kept in lead-lined boxes. It is not known if Pierre, who also worked with radioactive material, suffered from the effects of radiation, as he died in an accident in Paris in 1906, when he slipped and fell under a horse-drawn cart.

Others have clearly been harmed by radiation. In 1986 twenty-nine nuclear facility workers and firefighters died as a result of entering the Chernobyl reactor complex. They perished within a month, not only from burns from the flames but also from radiation-induced bone marrow failure caused by doses vastly higher than anything Marie Curie received. (Two others were killed immediately by the explosion; their bodies were not recovered.) In addition to bone marrow failure, the firefighters and workers suffered severe radiation damage to their lungs, gastrointestinal tracts, and skin, as well as thermal burns and trauma from the explosion.

But even with warnings like Marie Curie’s death and the aftereffects of the atomic bombs, radiation was often treated as a harmless curiosity. Beginning in the 1920s, for example, shoe stores had X-ray fluoroscope machines (sometimes sold under the trade name Pedoscope) to determine the right fit. They were sources of amusement for customers and the family members with them, who loved to see the bones of their toes inside the outline of their shoes. A typical exposure lasted about 15 seconds and delivered on average about 0.5 mSv, or one-sixth of the average annual dose from background radiation. The inherent danger of the machines was understood in 1949, after the birth of atomic weapons, and most fluoroscopes were phased out in the 1950s.

The years immediately following the Curies’ breakthroughs were spent trying to unravel their implications, and the work brought discovery after discovery. In 1901 Soddy and Rutherford discerned that radioactive thorium was converting itself into radium. In 1904 Rutherford—the Magellan of nuclear science, discovering the bits and pieces of the orderly universe of the atom—found that alpha radiation is actually a heavy, positively charged particle, which we now know consists of two protons and two neutrons.

The decade between 1905 and 1915 brought important advances in understanding the nature of atoms and subatomic particles. Robert Millikan (1868–1953) showed how to measure the electric charge and mass of an electron. Rutherford developed his theory of the structure of atoms. Soddy and the Polish-American Kasimir Fajans (1887–1975) separately developed the theory of isotopes of elements, and Fajans explained radioactive decay. (In 1919 Francis Aston [1877–1945] proved the existence of isotopes experimentally.) In 1914, at the start of World War I, H. G. Wells’s novel The World Set Free imagined an atomic war in 1956 that destroys the major cities of the world.

In 1919 Rutherford accomplished the first artificial nuclear reaction, which happens when particles from the radioactive decay of one element are used to transform the atomic nucleus of another, a process known as transmutation. He was hailed as having “split the atom,” although his work was far short of the nuclear fission reaction that comes from uranium and other “heavy” elements (those with large, heavy nuclei). For all his brilliance, however, Rutherford did not believe transmutation of atoms could be a source of power. In 1933 Leó Szilárd (1898–1964), a Hungarian-born physicist who had fled Nazi Germany and taken refuge in London, was the first to theorize that “if we could find an element, which is split by neutrons, and which would emit two neutrons when it absorbs one neutron, such an element, if assembled in sufficiently large mass, could sustain a nuclear chain reaction.” He did not envision nuclear fission as one of these neutron-producing reactions, since this reaction was not known at the time.

Szilárd filed to patent “the liberation of nuclear energy for power production and other purposes through nuclear ‘transmutation’ ” in 1934. He amended his patent application the next year, adding that uranium and bromine are “examples for elements from which neutrons can liberate multiple neutrons.” He hoped to keep the contents of the patents secret, but when he learned that to guarantee it he would have to assign the patents to an agency of the British government, he offered them to the War Office. His offer was refused because, he was told, “There appears to be no reason to keep the specification secret so far as the War Department is concerned.” A few months later the British Admiralty wisely accepted the patents.

In 1938 Lise Meitner (1878–1968), her nephew Otto Frisch, and others discovered that uranium can capture neutrons, form unstable products, and undergo fission, ejecting more neutrons that can sustain a chain reaction. Meitner, an Austrian Jew who worked in Germany and then fled to Stockholm to evade the Nazis, is often thought to have been denied, because of her religion, a share of the Nobel Prize in Chemistry awarded to the German Otto Hahn (1879–1968) in 1945.

There are two kinds of chain reaction. The first is fission, the source of atomic bombs and nuclear power, in which matter is converted into energy. The nucleus of an atom absorbs a neutron that strikes it and splits into two pieces, one lighter than the other, producing gamma rays and releasing a tremendous amount of kinetic energy. The neutrons released, slowed from their natural pace to travel at just the right speed, strike other nuclei and keep the reaction going.

The second kind of chain reaction is called fusion: the joining of two or more things to create a single thing. In nuclear physics, fusion is a reaction in which the nuclei of two or more atoms fuse to form a single nucleus of a new element and simultaneously release energy. This happens every instant within the Sun, and it happens in a hydrogen bomb. The difference between fission and fusion is that fusion requires a great deal more energy to start the chain reaction, but fusion also yields vastly more energy—a hydrogen (fusion) bomb is roughly one thousand times more powerful than an atomic (fission) bomb.

The discovery of fission was announced in January 1939 (though the discovery itself was made in 1938). As soon as J. Robert Oppenheimer heard about it, he understood that an atomic bomb was possible. He was neither the only one nor the first. Szilárd immediately realized the possibility of using neutron-induced fission to sustain a chain reaction. In 1939 Szilárd and the Italian-born Enrico Fermi, who had emigrated to the United States to protect his Jewish wife from the Italian Fascists, proved this concept using uranium. In this reaction, a neutron strikes a fissionable atom (uranium-235) and causes a fission that releases more than one neutron. These “excess” neutrons then fission other uranium-235 atoms, producing a nuclear chain reaction. Under appropriate circumstances, as in a nuclear power facility, the speed and extent of this reaction can be controlled by regulating the density and speed of the neutrons released and the concentration of the uranium-235 fuel. In other circumstances, however, the reaction is uncontrolled and self-amplifying; this is what happens when a nuclear reactor core melts down or an atomic bomb explodes.
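To make the difference between a controlled and a runaway chain reaction concrete, here is a minimal sketch (not from the original text) of a generation-by-generation neutron count. It assumes a single hypothetical multiplication factor k, the average number of new fissions each neutron triggers: a power reactor is operated so that k stays at 1, while a bomb is designed so that k exceeds 1.

```python
# Toy, generation-by-generation model of a fission chain reaction (illustrative only).
#   k < 1  -> the reaction dies out (subcritical)
#   k = 1  -> it holds steady (critical, a controlled power reactor)
#   k > 1  -> it grows exponentially (supercritical, a runaway core or a bomb)

def neutron_population(k: float, generations: int, start: float = 1.0) -> float:
    """Return the relative neutron population after the given number of generations."""
    population = start
    for _ in range(generations):
        population *= k
    return population

if __name__ == "__main__":
    for k in (0.95, 1.00, 1.05):
        final = neutron_population(k, generations=80)
        print(f"k = {k:.2f}: relative population after 80 generations = {final:,.2f}")
```

Even a few percent above or below 1 makes the difference, after enough generations, between a reaction that fades away and one that grows explosively.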

The first man-made demonstration of a self-sustaining nuclear chain reaction was accomplished by Fermi and others in a laboratory beneath the football stadium at the University of Chicago in late 1942. The initial work on building an atomic bomb was carried out at Columbia University and in other parts of Manhattan, before the large team was assembled at Los Alamos, which is why the top-secret enterprise is known as the Manhattan Project.

With the announcement of the discovery of fission, Szilárd saw the urgent need for atomic bomb research. He hoped that Fermi, who won the 1938 Nobel Prize in Physics for his “demonstrations of the existence of new radioactive elements produced by neutron irradiation, and for his related discovery of nuclear reactions brought about by slow neutrons,” would write to President Franklin Roosevelt to explain the opportunity at hand. Fermi was reluctant, fearing it would jeopardize his and his wife’s émigré status. Szilárd did not feel that he had the stature to write to Roosevelt, so he enlisted Albert Einstein. Einstein’s letter of August 1939—written with Szilárd’s help—set the United States on the road to developing nuclear weapons.

THE A-BOMBS

On August 6, 1945, six and a half years after the announcement of the discovery of fission and three weeks after the Trinity test in New Mexico, “Little Boy” was exploded over Hiroshima. Three days after that, the bomb named “Fat Man” was dropped over Nagasaki. (The bombs were set to explode about one-third of a mile above the cities, to give them added force from the blasts’ downward effects.) Between 150,000 and 240,000 people were killed, half of them during the first day. It is widely believed that radiation caused most of these deaths, but this is incorrect.

Although certainly those people with the most severe radiation exposures died shortly afterward, radiation released by the bombs was not the cause of most deaths. The immediate effects of a fission bomb are superfires from the intense heat it generates and the potent concussive wave that moves outward from the blast—the same effects, though vastly greater, as those of a conventional bomb. About 60 percent of the immediate deaths (and about 90 percent overall) in Japan resulted from the force and fire, and the majority of the remainder from falling debris. About 10 percent of deaths are attributed to radiation.

Masao Tomonaga, a Tokyo physician friend of Bob and a world-famous hematologist, was a two-year-old living in Nagasaki the day the bomb was dropped. His father, a physician in the Japanese Army Air Force, was in Taiwan. Masao and his mother lived in a typical Japanese wooden house, a mile and a half from ground zero.

“A small hill behind the house protected us against the terrible blast wind,” he wrote to us in April 2012,

but part of the house was knocked down and according to my mother’s memory, within ten minutes the house caught fire, and we escaped to a nearby shrine. Unfortunately (?) [the question mark is his], I have no memory of the bomb’s effects. It totally devastated Nagasaki Medical College, 600 meters from the epicenter, and about 900 professors, students, and nurses were killed. There are thousands of records written by survivors about their direct experience with the heat and blast generated by the bomb; descriptions of skin flash burn and damaged skin coming off in pieces are the most frequent. The wind blast caused instantaneous death and severe bodily injuries including hundreds of skin cut-wounds due to flying broken glass.

Horrific as this description is, it is not that dissimilar to accounts of the firebombing of Dresden and Tokyo during World War II. It is Masao’s description of what happened next that expresses the particular horror of an atomic weapon. Conventional bombs had been dropped on Nagasaki before the A-bomb, and people knew their effect on bodies. What occurred in the days following the A-bomb blasts was completely new. Perhaps it is the effect of one big bomb, rather than many small ones, with deferred damage whose cause cannot be seen, that makes nuclear weapons, and radiation in general, all the more terrifying.

The earliest sign of high-dose radiation was, as you know very well, hair loss that began a week or two later. Then severe diarrhea started with bloody stool due to bone marrow failure. The next step was high fever due to infection when neutrophils [white blood cells that fight infection] declined severely. In the case of the Nagasaki bomb, 35,000 people died within a day and another 37,000 within three months. Almost the same number of people survived but after three years developed leukemia, and solid cancers of various organs after 15–66 years (even now).

Masao has spent his professional life studying long-term cancer consequences of the atomic bomb explosions and was head of the Atomic Bomb Disease Institute at Nagasaki University.

A survey in 1950 estimated that 160,000 people survived the Hiroshima bomb and 125,000 survived the one over Nagasaki. The American Medical Association estimates that more than 40 percent of the survivors were still alive in 2011, sixty-six years later, and that 80 percent of them had been exposed before age twenty. About 93,000 survivors of the two blasts have been—and still are—closely monitored over their lifetimes. Radiation received by the survivors ranged from less than that of a normal medical procedure to doses large enough to cause bone marrow failure. The average dose of survivors in Hiroshima was 200 mSv, about 30 times the average American’s annual radiation dose. It is also about one-half of the amount of radiation the average American receives in a lifetime, but it was received in an instant rather than over seventy-five years, and it adds to the normal lifetime radiation dose. As many as 160 people, with both extraordinarily bad and extraordinarily good luck, are thought to have been in both Hiroshima and Nagasaki on the days the bombs were dropped and to have survived.
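As a rough check on those comparisons, here is a back-of-the-envelope sketch. Only the 200 mSv average survivor dose comes from the text; the annual and lifetime figures are assumptions of roughly 6.2 mSv per year over about 75 years.

```python
# Back-of-the-envelope check of the dose comparisons above.
# The annual dose and life span below are assumptions, not figures from the text.

AVG_SURVIVOR_DOSE_MSV = 200.0   # average dose among Hiroshima survivors (from the text)
ANNUAL_US_DOSE_MSV = 6.2        # assumed average annual U.S. dose, in mSv
LIFESPAN_YEARS = 75             # assumed average life span

ratio_to_annual = AVG_SURVIVOR_DOSE_MSV / ANNUAL_US_DOSE_MSV
lifetime_dose_msv = ANNUAL_US_DOSE_MSV * LIFESPAN_YEARS
fraction_of_lifetime = AVG_SURVIVOR_DOSE_MSV / lifetime_dose_msv

print(f"About {ratio_to_annual:.0f} times the average annual dose")     # roughly 30x
print(f"About {fraction_of_lifetime:.0%} of an average lifetime dose")  # roughly half
```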

Leukemia was the first radiation-related cancer detected in the A-bomb survivors. Japanese physician Takuso Yamawaki in Hiroshima noted an increase of leukemia among his patients in the late 1940s. He wrote to his Western colleagues about his observations and published his experience in the medical literature. This aroused considerable interest and led to the establishment of a registry of leukemia and related disorders by the organization originally called the Atomic Bomb Casualty Commission (ABCC) and now called the Radiation Effects Research Foundation (RERF), funded by the Japanese and U.S. governments. Reports of increased leukemia risk were published in the early 1950s.

The ABCC set up several studies of the A-bomb survivors. To determine the extent of any increase in leukemia or other cancers, it was necessary to know how frequently these diseases occurred in a control population that was not exposed to radiations from the A-bombs. People normally living in Hiroshima and Nagasaki who were out of town when the A-bombs exploded were chosen as controls.

These groups—A-bomb survivors, their children, and their luckier neighbors—comprise the Life Span Study (LSS) and have been evaluated annually for health problems including cancer, heart disease, birth defects, and heritable genetic abnormalities. Of the approximately 120,000 people in the LSS, about 93,000 are A-bomb survivors; the other 27,000 were in neither city when the bombs exploded and serve as the control group. This information is of considerable import, not just for those being studied but also because it represents much of what we know about the effects on humans of high doses of radiation delivered in a very short period of time. These data are used to develop regulatory standards and to estimate the effects of nuclear and radiation accidents.

Although the risk of many cancers is increased in the A-bomb survivors, leukemias are special. Risks for radiation-induced leukemia differ in two major respects from those for most other radiation-related cancers. First, radiation caused a larger proportional increase in leukemia rates than in the rates of other cancers. Cells in the bone marrow are especially sensitive to cancer-causing mutations from ionizing radiation. We know that a greater proportion of the leukemia cases seen were caused by radiation because “naturally occurring” leukemia is a relatively rare cancer. Of the roughly 93,000 A-bomb survivors being followed, there were about 200 cases of leukemia. One-half of these (about 100 cases) are estimated to have been caused by radiation exposure. The 25 cases of leukemia among the approximately 700 people who received a radiation dose greater than 2,000 mSv were probably caused by the A-bombs.

A second important aspect of these leukemias is that they developed sooner than other A-bomb radiation-induced cancers, especially in children. Children who were 10 years old when exposed to the A-bombs had a threefold or greater risk of developing leukemia compared with someone who was 30 years old when exposed. Also noteworthy in the A-bomb survivors is that, in contrast to other cancers, the relationship between dose and leukemia risk is not linear (a straight line) but a more complex mathematical function (called linear-quadratic): high doses produced more cases than we would expect if the risk rose in a straight line. This is not so for other radiation-induced cancers, where the relationship between dose and cancer risk is linear.
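The difference in shape can be written as a simple formula: a linear model says the excess risk grows in proportion to the dose, while a linear-quadratic model adds a term proportional to the dose squared, which dominates at high doses. The coefficients in the sketch below are made up purely to show the two shapes; they are not the values fitted to the survivor data.

```python
# Hypothetical linear vs. linear-quadratic dose-response curves.
# A and B are made-up coefficients, for illustration only; dose is in sieverts.

A = 0.5   # linear coefficient, per Sv (hypothetical)
B = 1.0   # quadratic coefficient, per Sv^2 (hypothetical)

def excess_risk_linear(dose_sv: float) -> float:
    return A * dose_sv

def excess_risk_linear_quadratic(dose_sv: float) -> float:
    return A * dose_sv + B * dose_sv ** 2

if __name__ == "__main__":
    for dose in (0.1, 0.5, 1.0, 2.0):  # 1 Sv = 1,000 mSv
        lin = excess_risk_linear(dose)
        lq = excess_risk_linear_quadratic(dose)
        print(f"{dose:4.1f} Sv: linear = {lin:.2f}, linear-quadratic = {lq:.2f}")
```

At 0.1 Sv the two curves are nearly identical; at 2 Sv the quadratic term makes the predicted excess several times larger than the straight-line prediction, which mirrors the pattern described above for leukemia.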

Cases of A-bomb–related leukemia began appearing approximately 2 years after radiation exposure, peaked between 6 and 8 years after exposure, and returned to normal levels after 10 to 15 years. However, some recent data suggest that one form, chronic myelogenous leukemia, may have increased for many more years. Also, a close relative of leukemia termed myelodysplastic syndrome increased more slowly than the usual forms of leukemia, and its risk remains elevated in the A-bomb survivors more than 60 years later.

Another curiosity is that chronic lymphocytic leukemia (CLL), the most common leukemia in people of European descent, was not detected in the A-bomb survivors. CLL was also absent in Japanese who were not exposed to the A-bombs. This observation led to two important concepts. First, radiation exposure increases the risk of cancers that occur normally in a population but not cancers that are rare or absent. Second, CLL, in striking contrast to the other leukemias, is not a radiogenic cancer—one that can be caused by radiation. This second concept has been recently challenged by claims, not entirely convincing, of increased CLL in the Chernobyl-exposed population. This controversy is still being sorted out.

The relatively rapid onset of most leukemias after radiation exposure from the A-bombs, in contrast with other types of cancer, which took decades to develop, suggests that after a radiation accident we can get an early readout of what is likely to happen later on by looking for leukemias in the exposed population.

Data on leukemia risk after the A-bomb exposures also tell us much about how radiation and cancer risk interact. For example, normally about 7 of every 1,000 Japanese will die of leukemia in their lifetime. Among the A-bomb survivors, however, leukemia deaths increased to 10 per 1,000 people. Thus, although the absolute number of extra cases is small (3 per 1,000 people), it represents a more than 40 percent increase, which to epidemiologists and statisticians is very large. Similarly, although leukemia accounts for only 1 percent of cancer deaths in unexposed people, it accounts for about 15 percent of cancer deaths in A-bomb survivors.

These data convey an important message for expressing the risk of a rare cancer like leukemia. Say the risk increases from 10 to 20 per 1,000 people. It is correct to say there will be 10 extra cancers in every 1,000 people or 1 case in every 100. If we consider that about 45 percent of males will get cancer in their lifetime, the number of cancers in 100 males will increase from 45 cases to 46 cases. This increase sounds small to most people. However, it is equally correct to say that the risk of the rare cancer has doubled (from 10 cases per 1,000 people to 20 cases per 1,000). A doubling in cancers sounds frightening to most people. An increase in cancer risk can sound small or large depending on how the data are presented and how they are understood.
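The same point can be reduced to a few lines of arithmetic, using the hypothetical numbers from the example above.

```python
# One hypothetical increase in a rare cancer, framed three ways.
# The numbers follow the example in the text; the 45 percent lifetime figure is for males.

BASELINE_PER_1000 = 10    # baseline cases of the rare cancer per 1,000 people
EXPOSED_PER_1000 = 20     # cases per 1,000 after the hypothetical exposure
LIFETIME_CANCER_PCT = 45  # about 45 percent of males develop some cancer anyway

extra_per_1000 = EXPOSED_PER_1000 - BASELINE_PER_1000       # 10 extra cases per 1,000
relative_risk = EXPOSED_PER_1000 / BASELINE_PER_1000        # 2.0, i.e., a doubling
per_100_before = LIFETIME_CANCER_PCT                        # 45 cancers per 100 males
per_100_after = LIFETIME_CANCER_PCT + extra_per_1000 / 10   # 46 cancers per 100 males

print(f"Absolute framing: {extra_per_1000} extra cases per 1,000 (1 per 100)")
print(f"Relative framing: {relative_risk:.1f} times the baseline risk")
print(f"Total-cancer framing: {per_100_before} -> {per_100_after:.0f} cases per 100 males")
```

Each line reports the same underlying change, yet the first sounds modest, the second alarming, and the third barely noticeable.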

Solid cancers—the more common cancers like breast and lung cancers—also increased in the A-bomb survivors compared to unexposed Japanese. Among the survivors who received the lowest radiation dose, only about 400 of about 5,500 cancer deaths (less than 10 percent) appear to have been caused by A-bomb radiations. Stomach cancer (a very common cancer in Japan) and lung cancer were the most frequent solid cancers among the survivors. Those who smoked were at an even greater risk of lung cancer. Liver cancer, also common in Japan, was the third most frequent cancer in A-bomb survivors. Risk of liver cancer was greater in persons who were exposed in their twenties than for those who were younger or older.

Risk of thyroid cancer in the A-bomb survivors was closely correlated with age at exposure. Most radiation-induced cases occurred in children younger than ten years of age when the A-bombs exploded. This is similar to the situation at Chernobyl, where almost all excess thyroid cancers attributed to iodine-131 occurred in children and adolescents who were younger than 20 years old when the accident occurred.

Not every type of cancer was increased in the A-bomb survivors. Why? There are several possible reasons. One is that cells of different tissues and organs may have different susceptibilities to radiation-induced damage, including the mutations in DNA that are typically the first step in cancer. (Some scientists argue that heritable changes other than in DNA, referred to as epigenetic changes, can also start the path to cancer, but this is controversial.) Another possibility is that mutations in DNA occur with equal frequency in cells in all tissues and organs but that some cells are better able to repair the mutations than others. Both concepts may operate.

Still another possible explanation is that the A-bomb survivors may be too few in number to show a small increase in a rare cancer. Failure to find an increase of cancer in an epidemiological study does not mean there is no increase, but it does tell us that any increase must be small.

Consider, for example, bone cancers. Substantial data indicate bone cancers increase after exposure to very high doses of radiation, doses greater than 60,000 mSv, usually given to a small part of the body like a limb. However, such a high exposure given to the whole body would have resulted in immediate death in the A-bomb population, where the average dose was 200 mSv, or 300 times less. Consequently, even though bone cancers are radiogenic, we cannot know if bone cancers in the A-bomb survivors did not increase because the dose was too low to cause bone cancer, or because the number of survivors was too few to detect an increased risk, or both.

Of the 160 or so people thought to have survived both the Hiroshima and Nagasaki A-bombs, the last known, Tsutomu Yamaguchi, died in January 2010 of stomach cancer, at age 93. It is impossible to know whether his cancer was related to his A-bomb exposures.

THE EVOLUTION OF FEAR OF RADIATION

In 1954 the world’s first nuclear power facility was opened in Obninsk, near Moscow; there are now over 430 worldwide. The medical use of radiation has gone from simple X-rays to CT and PET scans.

How did radiation go from being something that most people admired and wished to experience to something that most people fear and want to escape? There is, of course, no simple answer, and people’s behavior is sometimes contradictory. For example, parents might worry that their child’s sleeping under an electric blanket may cause leukemia from radiation, but they may also insist, often contrary to medical advice, that a head CT scan be done to exclude the possibility of a brain cancer after their child has a seizure. A head CT scan delivers a dose of radiation to the head equivalent to that of someone who was about four miles from the Hiroshima A-bomb, whereas an electric blanket emits no ionizing radiation. Similarly, people concerned about global warming are often firmly opposed to nuclear energy, yet it is the only immediately available energy source able to substantially reduce carbon dioxide emissions, albeit with some inherent but potentially solvable problems.

Several considerations may help explain this paradox. The discovery of radiation by Röntgen and others opened great horizons. Radiation allowed people to see into objects and observe their workings. Medical use of radiation could show bone and other internal structures, aid diagnoses, and save lives. Later, radiation therapy was developed, and for some cancers, it still offers the best chance of cure. Some people continue to go to radium hot springs and caves for treatment of chronic diseases such as eczema and fungal infections. In the 1940s and 1950s, physicians used radiation to cure ringworm and to shrink an enlarged thymus and tonsils (mistakenly) believed to cause recurrent upper respiratory tract infections in children.

It is difficult to say precisely when enthusiasm for radiation began to change. Even at the outset, there was evidence of the potentially harmful effects of too much radiation, such as the death of Marie Curie from aplastic anemia and the cancers in some of the early radiologists and in the radium dial painters. Certainly a major shift occurred after the atomic bombs were exploded in Japan. During the 1930s, when the potential for developing an atomic weapon became apparent, several physicists, including Szilárd, the German Werner Heisenberg, the Dane Niels Bohr, and Einstein, voiced concern about the morality of this enterprise. However, the threat that Germany would develop an atomic bomb before the United States did, the surprise Japanese attack on Pearl Harbor, and the huge loss of lives in the Pacific War overrode these reservations.

When an army or government undertakes a strategic project, the project often takes on a life of its own that eclipses the will of the participants. Once the building of the A-bombs started, there was probably no way to stop their being detonated over Japan short of an unconditional Japanese surrender. (This unstoppable force is nicely portrayed in the 1989 movie Fat Man and Little Boy, directed by Roland Joffé.) And most Americans were happy to see the Pacific War end quickly; only years later did they have second thoughts about civilian casualties from the A-bombs.

The shift in the view of radiation from mankind’s helper to menace was accelerated by the unfortunate lack of trust between the United States and the Soviet Union immediately after the A-bomb explosions. It is impossible to know if the nuclear arms race would have developed had the Soviets been allowed access to the Manhattan Project or had President Harry Truman and Prime Minister Winston Churchill agreed to share nuclear technologies with the Soviets. But once one country had the bomb, human nature dictates that others would want it as well. What is certain is that the secrecy surrounding the development of nuclear weapons, as well as of nuclear warships and submarines, led to growing distrust between the United States and Britain, on the one hand, and the Soviet Union, on the other; to nuclear weapons escalation; and to a rising and persistent public distrust of government nuclear weapons policies, especially atmospheric testing. This distrust unavoidably affected public opinion on nuclear energy policy.

Because of the way governments and industry handled the development of nuclear energy, it unfortunately became confused with nuclear weapons. On December 8, 1953, President Dwight Eisenhower gave his famous “Atoms for Peace” speech to the UN General Assembly. He discussed the peaceful uses of nuclear energy in agriculture, in medicine, and especially in the generation of electricity. He predicted that in the future electricity would be made so cheaply that it could be provided for free. Unfortunately, this has not happened. Nuclear energy meets a substantial amount of the energy needs of many developed countries, but its progress has been troubled. Many citizens perceived government regulators and industry as not transparent in safeguarding the public. Some of these concerns are warranted, whereas others are not. The nuclear facility accidents at Three Mile Island, Chernobyl, and Fukushima Daiichi further heightened global radiation fears.

Confusion between nuclear energy and nuclear weapons is even more complex. Since the early 1990s, Iran and North Korea have been accused of claiming to be developing only nuclear power facilities while actually enriching uranium or producing plutonium to make nuclear weapons. Many people see a direct connection between the supposedly legitimate use of nuclear technology and, in nations such as Iran and North Korea, the goal of building nuclear weapons. This attitude leads inevitably to another, perhaps even greater concern: a direct correlation between the diffusion of nuclear energy technologies and the ability of nations to develop nuclear weapons. This concern is real; it cannot be ignored.

Finally, nuclear energy and nuclear weapons are commonly linked to the issue of spent nuclear fuel. Concerns abound that terrorists will gain access to these materials to develop an improvised nuclear or radiological device. These concerns are furthered by discussions of developing fast breeder reactors, which produce even greater amounts of weapons-grade materials. Perhaps in part because movies and other media often portray exaggerated impacts of radiation, many people think a nuclear power facility can explode as if it were an atomic bomb, even though this is impossible. Certainly there have been explosions within nuclear power plants, but they were not nuclear, and they had nowhere near the effect of an explosion caused by nuclear fission. The Chernobyl reactor building was destroyed by a steam explosion, and parts of the Fukushima reactor buildings were destroyed by explosions of highly flammable hydrogen gas.