I am sure that at the end of the world—in the last millisecond of the Earth’s existence—the last human will see what we saw.
—George Kistiakowsky, in reaction to witnessing the first nuclear bomb explosion, Alamogordo, New Mexico, on July 16, 19451
In 1945, science was riding high. Scientific invention was regarded by Americans as prestigious, and scientists were involved at the top levels of decision-making, including with regard to national security.2 Particularly on atomic matters, they were experts with indispensable knowledge.
There was a further surge of support—and government funding—after Sputnik. In 1960, Time named America’s scientists as Person of the Year, and the magazine’s tone was one of optimism: “1960 was the richest of all scientific years and the years ahead must be even more fruitful.”3 Federal government support for research and development reached almost 2 percent of GDP in the mid-1960s, at the height of the Apollo program.
Today, however, the situation is quite different. We spend only about 0.7 percent of GDP on publicly supported science, and scientists now have less political clout.4 President Eisenhower responded to the Sputnik crisis by appointing James Killian, president of MIT, as his science advisor; the announcement was well received as a signal that serious thinking would take place behind the scenes.5 In contrast, President Trump’s science advisor was not appointed until eighteen months after his new administration took office.6
Science—and scientists—have become controversial and less universally respected. When did this happen and why? Three issues stand out as contributing to the erosion of scientists’ access to power and to budgetary dollars after 1945.
The first is that postwar expectations for science—particularly atomic energy—were exaggerated. This is perhaps understandable given the speed with which enormous advances were made in the early 1940s. Still, an essential part of the backdrop is that science failed to deliver as promised.
Even worse, the potential danger of unintended consequences was underemphasized by political leaders and some of their high-profile scientific advisors. The original sin may have been to underestimate the risks associated with radiation, in weapons testing and through accidents, and particularly with regard to handling nuclear waste. Once undermined, public trust in atomic power proved hard to rebuild. This loss of credibility proved contagious. If the government and its technical experts could be wrong about radiation, what else might they be concealing?
The second and more dramatic issue was growing disagreement between scientists and politicians. By the 1960s, some senior scientists were becoming skeptical about how technology was being used in the name of national security. The bombing of North Vietnam was a particularly controversial issue, with politically plugged-in scientists on both sides and a heated debate behind closed doors.
Matters became more public at the end of the 1960s, under Presidents Johnson and Nixon. These presidents wanted to build high-profile pieces of hardware, in particular a supersonic jet and an antiballistic missile system. Top science advisors—insiders with access to the corridors of power—expressed reservations, and even in some instances helped opponents of the administration win the congressional debate against the proposed systems. Politicians did not soon forget this perceived betrayal, and political leaders control the purse strings.
Finally, the rise of the anti-tax movement reshaped how Americans think about what government should do. In the recurring budget debates since the 1970s, science has been repeatedly squeezed. We still fund science, particularly when there is a potential national security dimension, but at a much lower scale relative to the size of our economy.
Understanding this erosion of political support for science spending matters today. It is not enough to propose specific projects or to create moments of enthusiasm. Government support for research and development has greater effects when it can be sustained.
When Vannevar Bush argued for the establishment of the NDRC in mid-1940, the potential power of uranium fission was not yet widely appreciated. There had been steady theoretical advances before 1940, and Ernest Lawrence, a physicist at Berkeley, won the 1939 Nobel Prize for inventing the cyclotron—the first circular particle accelerator, which created new possibilities for work at the atomic level. Meanwhile, in Europe, progress on other parts of the basic science was moving fast; the first splitting of the uranium atom took place in Germany in December 1938.
James Conant, president of Harvard, was sent to the UK in early 1941 to investigate further. The British were cagey about what they knew—despite the previous year’s Tizard mission, they were not keen to reveal how promising they now believed the atomic route to be. Then Frederick Lindemann, Churchill’s scientific advisor, privately convinced Conant that an atomic bomb was feasible. And if the British might be close to making it work, could the Germans be far behind?7
Vannevar Bush was initially skeptical but quickly became persuaded, in part because of his assessment of German capabilities.8 Once convinced, Bush—in typical fashion—agreed to organize a major effort, with Conant, Lawrence, and Alfred Loomis helping to bring scientists on board.
In an exception to the arrangement for almost every other major technology developed under the NDRC/OSRD during the war, the military was formally put in charge of the overall project, with Major General Leslie Groves in command. Even Groves had to concede scientific leadership to a civilian, Robert Oppenheimer.9 And the scientists, even while secluded at Los Alamos, were not subject to standard military discipline. Invention needed creative space for the inventors.
It also needed money. As a matter of urgent national priority, civilian scientists had been encouraged to spend whatever was necessary to build a viable bomb. This proved to be a large amount of taxpayer money. From 1942 to 1946, the Manhattan Project received over $1.5 billion of funding, and in its peak year funding reached 0.4 percent of GDP (the same share of today’s GDP would be about $80 billion).
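Because the chapter repeatedly converts historical GDP shares into present-day dollars (0.4 percent of GDP for the Manhattan Project here, and later figures for Vietnam-era spending and the NASA cutback), a quick back-of-the-envelope check may be useful. The minimal sketch below assumes a current US GDP of roughly $21 trillion; that figure is an illustrative assumption, not a number taken from the text.

```python
# Back-of-the-envelope conversion of a historical GDP share into today's dollars.
# The GDP figure below is an assumption for illustration (roughly the US level
# at the time of writing), not a number taken from the text.

CURRENT_US_GDP = 21e12  # ~$21 trillion, assumed


def gdp_share_to_dollars(share_percent: float, gdp: float = CURRENT_US_GDP) -> float:
    """Return the dollar value of a given percentage share of GDP."""
    return gdp * share_percent / 100


# Manhattan Project peak year: 0.4 percent of GDP -> roughly $80 billion today.
print(f"${gdp_share_to_dollars(0.4) / 1e9:.0f} billion")  # prints about $84 billion
```

Applied to half a percentage point of GDP, the same calculation gives roughly $100 billion, which is consistent with the NASA comparison later in the chapter.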
The role of scientists as shapers of policy was evident at the highest level. In 1945, President Truman convened what was known as the Interim Committee to advise him on whether to use the atomic bomb against Japan; its members included Bush, Conant, and Karl Compton. Bush and his colleagues had long argued for scientists to share responsibility at the highest possible level of military strategy, and now they had it.10
By the end of July 1945, there was no denying that the world had changed. Either a country would have the most modern science and the latest weapons, or it would not. And the frontier, in terms of the military equipment and how to use it, was now likely to move fast.
In the immediate aftermath of World War II, American enthusiasm for the potential of new technology—and the benefits it could bring—was almost unlimited.11 The atomic bomb had moved from theory to detonation in less than a decade, so who knew what could come next. From the public perspective, the prospects for an atomic age seemed literally incredible.
The academic experts were on board. Nine big-name northeastern universities jointly created Associated Universities Incorporated in 1946.12 The goal was to manage complex projects, and Brookhaven National Laboratory—for atomic energy research—was the first.13
Even the atmospheric testing of atomic bombs was almost fashionable at first. Aiming to catch the cultural moment created by the demonstration of power in a nuclear test on Bikini Atoll in the Marshall Islands on July 1, 1946, the Parisian designer Louis Réard invented (and registered as a trademark) the bikini swimsuit.14 A leading rival swimsuit, designed by Jacques Heim, was named the Atome.15
The political vision was not only about building more effective weapons, at least at the level of rhetoric. In December 1953, President Eisenhower announced an Atoms for Peace program, which included building nuclear reactors and sharing atomic technology with countries seeking economic development. The program remains controversial in terms of its motivation and impact, but there is no question about the promise behind the idea: atomic power put to purely civilian purposes, with great positive impact.
Atomic power could be harnessed in myriad ways, including to drive vehicles or planes. As late as 1960, the US Air Force still had a nuclear-powered bomber under development.16 In 1958, one company announced a potential atomic pen, and another raised the possibility of a nuclear-powered car.17 Nuclear weapons could also be used to dig very large holes—for example, to expand the Panama Canal.18 There was serious discussion of building atomic engines to power spacecraft; multiple “nuclear thermal rocket” prototype engines were successfully tested in a fifteen-year program that ran into the 1970s.19
In the 1950s, the chairman of the Atomic Energy Commission even predicted that electricity might soon become effectively free, with implied positive effects for the economy.20 The idea that atomic power might not be entirely safe rarely came up.
The first domestic US nuclear power plant opened in 1957. The pace of construction stepped up during the 1960s, and thirty-seven plants were in operation by 1973. In retrospect, however, the tide of ideas had started to turn much earlier, with growing public concern about radiation.21
In a caustic 1950 volume, Science Is a Sacred Cow, Anthony Standen argued that much of the newly acquired prestige of science was at best exaggerated. In his view, this led to overconfidence or even arrogance: “Completely gone is any pretense of inculcating the virtue of reserving judgment until all the facts are in.”22 Harsh words, and very much against the grain of public and professional opinion at that moment. But also prescient—as was vividly illustrated by a key side effect of atomic weaponry and power.
Immediately after the bombing of Hiroshima and Nagasaki, the Japanese authorities reported cases of radiation poisoning—a fact disputed by the American military. Writing in the New York Times, with a byline dated September 9, 1945, the prominent science writer William L. Laurence claimed to have seen evidence clearly refuting reports that radiation could become a significant cause of death.23 Senior officers were keen to downplay the idea that radiation would have long-lasting effects on people and places.
The military was wrong about radiation; the effects in these specific instances subsequently proved significantly more negative, including a higher incidence of cancer, birth defects, and—in cases of very high exposure—rapid death.24 Moreover, the New York Times neglected to disclose that Mr. Laurence had been seconded to the US military and was essentially presenting official views disguised as independent reporting.25 Laurence was a journalist, not a scientist, but the boundary between independent expert assessment and the official press line was starting to blur.26
Fear of radioactive fallout had spread in the 1950s, particularly as a result of atmospheric nuclear tests, and some prominent scientists had backed the case for a test ban.27 There were close to 120 nuclear weapons tests in 1958, and in February that year, the issues were brought alive in a dramatic television debate between well-informed experts who strongly disagreed: Edward Teller, inventor of the H-bomb, and Linus Pauling, winner of the 1954 Nobel Prize for chemistry. Teller was a master of rhetoric: “Now let me tell you right here, this alleged damage which the small radioactivity is causing by producing cancer and leukemia has not been proved, to the best of my knowledge, by any kind of decent and clear statistics.”28 He went on to suggest “there is the possibility, furthermore, that very small amounts of radioactivity are helpful.” In retrospect, he was far too sanguine.29
It’s hard to say who won that particular debate, which included questions of values and how to think about the Soviet Union. More broadly, however, people became increasingly worried about the unintended consequences of new technology and tended no longer to believe government assurances. The failure to deliver on postwar expectations was becoming a bigger concern.
Political leaders—and the military who worked for them—wanted technology to serve their version of the national purpose. Facts about side effects were inconvenient and brushed aside. To be fair, some scientists had great reservations from the beginning about the way in which nuclear technology was being developed.30 Others saw the world through more rose-tinted glasses, at least until the broader debate about technology began to shift in the early 1960s.
Silent Spring is a vivid and compelling book. Rachel Carson was a well-established science writer, a marine biologist who specialized in explaining the phenomena of the sea to broad audiences. The Sea Around Us won a National Book Award in 1952, and The Edge of the Sea, published in 1955, was also a best seller. But she is remembered primarily for Silent Spring, published in 1962, and its impact on the use and abuse of pesticides in American agriculture.
Carson’s critique was not so much of science as of the careless way new scientific discoveries were being applied—by firms in the private sector, with government connivance. Drawing a parallel with the insidious and initially invisible effects of radiation, she argued that American communities were being poisoned by the overuse of pesticides, the chemical DDT in particular.31
In 1939, Paul Müller, a Swiss scientist, had discovered that DDT was effective in killing insects, including the lice that transmit typhus and the mosquitoes that transmit malaria. Shipped to the United States in 1942, this insecticide was quickly put into mass production and was used widely by the US military in its operations around the world—beginning with the successful effort to quell the typhus outbreak that hit Naples in October 1943. Tests in Italy and Greece, supported by the Rockefeller Foundation, found that malaria incidence could be reduced dramatically.32
The chemical industry swung into high gear, and around 1.35 billion pounds of DDT were used in the United States over the next three decades.33 The US Department of Agriculture (USDA) had some reservations as it became clear that DDT could have adverse effects on animals that were not pests, and from 1957, there were some limitations placed on where DDT could be used. At the same time, however, USDA officials and scientists remained in favor of widespread agricultural use, including for cotton, peanut, and soybean crops.
As Carson pointed out, however, the USDA was slow to recognize the way in which a wide variety of insecticides negatively impacted benign insects, as well as birds and the broader ecosystem. There were also legitimate concerns about the impact on people, including death as a result of acute exposure and possible connections to cancer (this is still disputed for DDT).34
Carson was no technophobe; she was well aware of the benefits of technology applied to agriculture. She was also making a profound observation about the development of technology—unintended and unfortunate consequences could easily predominate, and the Department of Agriculture stood accused of encouraging excessive risk taking with human health in pursuit of higher yields.35 Carson’s first New Yorker article on this topic emphasized the parallel with radiation.36
Carson’s arguments were widely embraced by the public, by opinion makers, and even by politicians.37 The chemical industry reacted strongly, arguing that Carson was exaggerating and even mistaken on key facts. DDT was banned from most uses in the United States in 1972, although the debate, remarkably, continues more than fifty years later.38
Irrespective of how you view the merits of DDT, it is undeniable that Silent Spring’s impact was felt immediately and profoundly.39 In the post–World War II American love affair with—and generous funding levels for—science and its applications, Carson sounded a major discordant note.40 By the mid-1960s, many people, including scientists, were increasingly skeptical about the uses to which science was being put.41
From the 1960s, the environmental movement emerged energized and with broader support, based on multiple legitimate concerns.42 Among these was the simple and increasingly obvious point that the promise of science had been overstated by people in positions of private-sector and governmental power. And the pursuit of profit meant that important unintended effects, including health effects that might manifest only over time, were ignored or downplayed. Rachel Carson’s contribution was just the beginning of a much longer debate about how best to protect or help the environment.
Subsequently, of course, the nuclear accidents at Three Mile Island (1979) and Chernobyl (1986) made the painful point that the civilian benefits of nuclear technology had been overstated (free electricity!) and the risks understated (radioactive waste). Managing these kinds of complex systems also proved harder than the experts had imagined.43 Between 1977 and 1989, forty reactor construction projects were canceled in the United States.44 From the late 1970s through 2013, construction began on no new reactors.45 Rachel Carson had drawn an even more powerful parallel than she’d realized.
The reaction to Silent Spring represented a turning point in popular and political views on the application of new technologies. No longer would citizens automatically accept the word of scientists at face value. The relationship between science, government, and the military was also increasingly called into question.
At the same time, ironically, another divide was opening up that would have an equally damaging impact on publicly supported science—this time between the scientists and the politicians who provided their funding. There is no better way to convey the breakdown in this relationship than by reviewing the career of George Kistiakowsky.
George Kistiakowsky experienced the twentieth century in unique fashion. Born in 1900 in Kiev, Ukraine, he not only had a ringside seat for the Russian Revolution but also briefly joined the ill-fated White Army in its fight against the Bolsheviks.46
Following his escape from the Communists, Kistiakowsky completed his education at the University of Berlin, immigrated to the United States while still a young man, joined the faculty of Harvard, and established himself as one of the country’s leading chemists. It was no surprise when Vannevar Bush appointed him to head the wartime work of the NDRC on explosives.47 In October 1943, with the Manhattan Project struggling to figure out how to detonate a plutonium bomb (by compressing its core with precisely shaped explosives to set off a chain reaction), Kistiakowsky was brought in to solve the problem—which he did, to great professional acclaim.48 Building on this experience and reflecting the growing importance of America’s nuclear arsenal, Kistiakowsky was an obvious choice as a next-generation leader on science policy.49
Appointed to the Science Advisory Board of the air force chief of staff, Kistiakowsky helped convince the air force to develop intercontinental ballistic missiles (ICBMs), moving them away from reliance on long-range bombers. He was an expert on right-sizing the warhead, including facing down pressure from Vice President Nixon, who wanted to build something bigger, presumably for the symbolism. “Couldn’t we afford it?” Nixon reportedly asked. Kistiakowsky and his colleagues prevailed, because the facts and the science mattered.50
In 1959, Kistiakowsky became chief science advisor to President Eisenhower—the second person ever to hold that position. Few scientists knew more about how the military worked, and he had become acutely aware of the potential for mass destruction. He was influential in assessing the feasibility of US war plans and, in that context, began to suggest setting limits on nuclear testing.51
In 1960, Kistiakowsky was concerned about the Soviet missile threat, perhaps more so than was Eisenhower.52 The more he learned, however, the more Kistiakowsky felt information was being distorted within the decision-making structure. Looking back at age eighty-one, he put it this way: “As I got up higher and higher on the rungs I began to realize that these policies were based on frequently very distorted, sometimes deliberately, intelligence information.” The bomber gap, the missile gap, and all the other supposed gaps vis-à-vis the Soviets were greatly exaggerated, at least in his retrospective view.53
In 1965, Kistiakowsky was disappointed when it became apparent President Johnson would ignore the recommendations of a task force on preventing the spread of nuclear weapons.54 The Vietnam War, particularly the bombing of North Vietnam, only increased Kistiakowsky’s disquiet. President Kennedy had asked the President’s Science Advisory Committee (PSAC) for ideas on Vietnam, but Johnson seemed much less interested in input.55
Matters came to a head in 1966, when Kistiakowsky worked with a group of scientists who attempted to design a barrier of button-sized electronic sensors that could be used to prevent infiltration into South Vietnam. From Kistiakowsky’s perspective, this would be an alternative to bombing North Vietnam. Robert McNamara solicited the idea but never seemed fully committed.56 The air force, once the sensors and related technology were available, seemed to regard it as a complement to—rather than a substitute for—bombing. The scientists themselves were increasingly divided, based in part on how skeptical they were of what the military wanted to do.57
Kistiakowsky came to feel that he and his scientific colleagues were being manipulated by the Pentagon to simply justify more bombing, and he severed his government connections.58 He subsequently became active in the Council for a Livable World, founded by another atomic pioneer working for arms limitations.59
Interviewed in 1980, Kistiakowsky expressed his extreme reservations about the military and the way that science had been used in the Cold War. Even inventors of the atomic bomb were turning against what science policy had become—or how it was being used.
The reservations about Vietnam expressed by Kistiakowsky and other scientific advisors opened a rift that only widened in subsequent years. In David Halberstam’s influential assessment, highly qualified people held positions in government during the Vietnam War period and ended up making decisions with deeply unfortunate consequences.60 Kistiakowsky’s career, and the fate of science advisors more broadly, illustrates an additional dimension of what happened. In the 1940s and 1950s, politicians listened to their scientific advisors, in part because the issues were technical. Could we build a hydrogen bomb, would missiles work, and should nuclear testing continue to take place aboveground? Many technical issues remained in the 1960s, and science advice was still welcome, but under President Johnson, and then President Nixon, politics moved to the forefront.61 When scientific advice collided with what politicians wanted to do, the result was protracted struggle.
That was the case, for example, with the development of supersonic civilian aircraft, originally floated under President Kennedy and pushed forward, despite some expert skepticism, under President Johnson and again under President Nixon. The scientific issue was quite simple—the plane created a sonic boom that was vastly louder than the noise created by a regular jet plane. Residents of Oklahoma City were subjected to this level of sound for six months during 1964, by way of experiment. More than fifteen thousand people filed complaints, and five thousand filed damage claims. Not surprisingly, local residents felt the noise level was unacceptable.
Scientists raised concerns about these side effects and pointed out how small the benefits of supersonic planes would be relative to traditional air travel. Prominent former science advisors spoke against the design in congressional testimony. Russell Train, chairman of the White House Council on Environmental Quality, appeared before the Joint Economic Committee in May 1970 and emphasized the potential issue of stratospheric pollution.62 Another congressional witness was Richard Garwin, who had been a confidential advisor to the White House on supersonic travel. He stated that supersonic jets would generate airport noise that “is far beyond the maximum acceptable for jet aircraft now.”63
The US aircraft industry wanted to build the plane, and influential nonscientists in policy circles thought that it would boost US prestige—and match the Concorde, which was developed by the British and the French during this period.64
In the end, the opposition scientists prevailed. Funding for the supersonic aircraft was withdrawn, much to the annoyance of the White House and to the delight of Senator Bill Proxmire, who had summoned the scientists to testify. Proxmire felt that the program had been revealed to be nothing more than excessive government support for private business: “We were financing a completely private, commercial enterprise with hundreds of millions of Federal research dollars.”65
The conflict between independent science and establishment politics came to a further dramatic head under President Nixon, when current and former members of the President’s Science Advisory Committee publicly challenged the administration’s proposal for antiballistic missile systems—intended to shoot down or divert incoming nuclear warheads (a difficult if not impossible proposition).66 As the Soviet Union and China built nuclear weapons, pressure had developed for some kind of antiballistic missile defense. Large-scale systems had proven prohibitively expensive and of dubious value, so President Johnson’s Defense Department proposed the “lighter” or less comprehensive coverage that could (arguably) be provided by the Sentinel program. By the time Richard Nixon was elected president, Sentinel (rebranded as Safeguard) had become intensely controversial—with people living in “protected” cities taking the view that the system just made them into more prominent targets.
Part of the political pushback came from local people in places like Seattle and suburban Chicago, who became concerned about the risks of nuclear accidents that might be associated with antiballistic missile (ABM) bases. The military initially refused to discuss the details, arguing that much of the information was classified. However, when nuclear physicists weighed in with their negative assessments, the Pentagon was forced into an all-out publicity effort.67
Most notable in this fight over ideas was the role of physicists from the Argonne National Laboratory, located near Chicago.68 In November 1968, despite working for the government, John Erskine, David R. Inglis, and their colleagues took the lead in organizing and disseminating information that was directly counter to what the administration was trying to achieve.
General Alfred Starbird, head of the army’s Sentinel program, insisted, “There cannot be an accidental nuclear explosion.” George Stanford, one of the Argonne scientists, rebutted this as “a ridiculous statement.… They have circumvented a lot of possibilities, but they still have the human and mechanical components to consider.”69 In town hall–type meetings, the physicists prevailed in swinging local opinion against the ABM bases.70
Former members of PSAC testified in Congress against the Sentinel program. One senator reportedly remarked that he was “unable to find a former presidential Science Advisor who advocates the deployment of the ABM program.”71
Nixon initially wanted to keep the program, but the opposition became too widespread, and Congress declined to fund the proposed version.72 When Franklin Long, a chemist from Cornell, was proposed to be the next science advisor, President Nixon turned him down—apparently because Long had opposed the ABM program.73 Soon after that, Nixon eliminated the science advisor position and actually shut down the PSAC.74 Scientists had won the battle over Sentinel, but their privileged relationship with power was further eroded. Speak truth to authority—and authority will cut your funding.
Killian and Kistiakowsky had been remarkably successful as science advisors, participating in the creation of NASA, improving weapons, and even pushing for arms control.75 Within the White House, their voices were authoritative, and they were backed by physicists and others who had worked together under Vannevar Bush’s World War II effort. It helped that they worked on well-defined technical questions, such as whether to switch from bombers to missiles as a defense priority and how to think about a potential ban on nuclear weapons testing.
By the late 1960s, in contrast, even the strategic defense questions had become complex in political terms and less amenable to technical solutions—this was exactly Kistiakowsky’s experience with Vietnam. The change was not that science was becoming harder—inventing the atomic bomb, based only on relatively new theory, was at least as hard as building missiles. But missiles seemed increasingly irrelevant—or an expensive distraction—as attention turned to civil rights and debates about the causes and effects of poverty. At the same time, science was facing a new pressure it hadn’t seen in decades: a squeeze on its funding.
Perhaps the hardest-hitting critique of the 1960s was also the most lighthearted. In a series of Science articles, culminating in The Politics of Pure Science (published in 1967), Daniel Greenberg peeled back the mystique of modern science, exposing the same sort of pursuit of subsidy we see in all other industries. Science had virtually unlimited access to funding in the early 1960s, and this generated resentment.
Greenberg created the humorous character of Dr. Grant Swinger, director of the Breakthrough Institute and chairman of the board of the Center for the Absorption of Federal Funds.76 Among Dr. Swinger’s more memorable proposals was the Transcontinental Linear Accelerator (TCLA), designed to run from Berkeley to Cambridge, “to pass through at least 12 states, which means 24 senators and about 100 congressmen could reasonably be expected to support it.” The route might even skirt “several congressional districts which went against the administration in the last election.”77
In retrospect, Greenberg’s critique was the beginning of the end for scientists’ unchecked access to federal funds. Just as concerns about the environmental damage from scientific advances were gaining traction on the left, concerns about the Bush model of endless subsidies for academically led science were starting to grow. And these concerns found their focus on the other side of the aisle.
On the right of the political spectrum, skepticism about science can perhaps be traced back to the anti-fluoridation messages of the John Birch Society, founded in 1958.78 Senator Barry Goldwater’s election campaign in 1964 can be seen as foreshadowing important Republican messages of the current era, including small government, but there was no evident opposition to science in general and certainly no concern expressed about the application of science to warfare.79 “Among Goldwater Southerners, even thermonuclear warfare gets identified with regional pride, sentiment, and rancor.”80
Richard Nixon’s position on science was more complex or perhaps just ambiguous. He created the Environmental Protection Agency (EPA), addressing issues raised by the growing environmental movement. But libertarian groups and others on the right had begun to raise questions about whether federally supported research was truly useful. Was this money just being used to support what had become hotbeds of left-wing protest—the universities?
In the account of one key aide, the speechwriter (and later presidential candidate) Pat Buchanan, a turning point for Nixon’s 1968 presidential bid came with student unrest in the spring of that year, particularly the occupation of buildings and other actions on the campus of Columbia University. Nixon called this “the first major skirmish in a revolutionary struggle to seize the universities of this country and transform them into sanctuaries for radicals and vehicles for revolutionary political and social goals.”81 Universities—and their government funding—would never be seen in the same way again.
There were protests against the military—and against science helping the military—on all manner of campuses, including the most elite institutions.82 In 1966, there were protests against war-related companies, such as Dow Chemical, manufacturer of napalm. Some prominent scientists based in Boston took out a newspaper advertisement opposing the United States’ use of chemical weapons, such as the defoliant Agent Orange, in Vietnam; others, including scientists from Columbia University, made the case against ABM systems and sites.83 There were student protests against classified defense-sponsored research on campus.84 The Bush model was focused on the university and assumed a tight relationship with the government. As parts of the political establishment became suspicious of university faculty and students—and vice versa—the Bush model became harder to sustain.
The election of 1968 arguably represented the end of the postwar political consensus and the beginning of our modern social and geographic polarization. Attitudes did not shift overnight, but from 1968, the rising narrative was much more anti-government and therefore also against government-supported activities, such as university-based research.85 Books and reports on how the government wasted taxpayer money became a popular genre. Politicians vied with each other to make fun of federally supported research projects.86 The federal research budget declined more than 10 percent in inflation-adjusted terms from 1968 to 1971. The percentage of faculty who received federal support fell from 65 percent in 1968 to 57 percent in 1974.87
These political shifts coincided with America’s greatest technological achievement to that date—putting a man on the moon in July 1969. But this also meant that the space mission, as defined by President Kennedy, had been accomplished.
At the same time, the United States was facing budgetary pressures at a level not seen since the cost of fighting World War II. The United States spent about $168 billion on direct military operations in Vietnam from 1965 to 1972, which would be equivalent to over $1 trillion in today’s money. Veterans’ costs and support for the regime in Saigon (until 1975) added significantly to the tab.88
Moreover, unlike in World War II, the United States was not just trying to restore its preferred world order outside its borders—the country was also transforming within its borders, through the massive Great Society programs of the 1960s. For example, Medicare, created in 1965, provided subsidized medical care to everyone over the age of sixty-five—and by 1970, twenty million Americans were eligible. Mandatory spending—mostly Social Security and Medicare—jumped from 30 percent of all federal spending in 1962 to close to 50 percent in 1975.89
Facing these budgetary pressures, President Nixon presided over an overall decline in federally funded R&D spending.90 From 1967 to 1975, federal support for basic research declined by about 18 percent in inflation-adjusted terms.91 Most dramatic was the decline in funding for NASA, the darling of the 1960s. This decline ultimately amounted to half a percentage point of GDP ($100 billion in today’s money), perhaps the biggest single science cutback of all time. NASA was not alone in experiencing cuts.
In part, this decline was precipitated by pressure from Senator Michael “Mike” Mansfield, the Democratic majority leader in the Senate. Criticism of the military—including its influence over the economy—had grown during the 1960s, primarily due to the Vietnam War, but also in reaction to what was seen as an excessive buildup of nuclear weapons. Protests grew during 1968, including at the Democratic convention in Chicago, where ten thousand demonstrators had violent confrontations with local police and the National Guard—broadcast live on television.
Reacting to these events, and the pressure to curtail the influence of the military, in 1969, Mansfield proposed a major change in how federal research was structured. His amendment to the Military Authorization Act forbade the Defense Department from using its funds “to carry out any research project or study unless such project or study has a direct and apparent relationship to a specific military function.”92
Mansfield’s view was that up to $311 million in research funding could be switched over to civilian research efforts led by the National Science Foundation (NSF).93 But the overall effect was that the NSF did not expand, other than taking over some materials research laboratories. The squeeze on federal support for research and development intensified, with the largest declines in physics and chemistry.94
The employment impact of publicly funded research spending was mentioned by analysts but largely ignored in the political discussions.95 There was little or no systematic official consideration given to how knowledge is developed or to its value as it moves across sectors of the economy—from military to civilian use or vice versa.
The divergence of interests between politicians and scientists was one major reason for the turn away from public funding of research and development. This was augmented by the new budgetary pressures arising from the Vietnam War and the Great Society. But the US government still had sufficient funds to finance ongoing research commitments. From 1950 to 1974, the United States had never had a deficit of more than 1.5 percent of GDP that was not eliminated within three years.96 The federal budget deficit grew to 3.3 percent of GDP by 1975, its highest point since World War II, but fell thereafter and was down to 1.6 percent by 1979.
In the same time frame, a new force increased budgetary pressure: the anti-tax movement. While taxes were never popular in the United States, anti-tax sentiment picked up significantly starting in the mid-1970s. The movement’s genesis can be traced to California’s Proposition 13 in 1978, the first of a series of state laws limiting the ability of localities to levy property taxes. Since its passage, nearly forty statewide tax-limiting measures have been approved by voters in eighteen states through the initiative process.97
The anti-tax movement reached the federal level with the election in 1980 of Ronald Reagan, who ran on a strongly anti-tax platform. A weak economy combined with the largest tax cuts of the postwar period pushed the deficit to a postwar high of 5.9 percent of GDP by 1983. The deficit averaged almost 4 percent of GDP over the next decade before falling under President Clinton and turning into a surplus in 1998. The deficit returned by 2002, due partially to another huge round of tax cuts under President George W. Bush, and peaked at 9.7 percent of GDP in 2009, in the midst of a deep recession.98 The deficit has since been slowly reduced as the economy recovered, and it is currently projected to be about 3 percent of GDP by 2023.
Ronald Reagan favored more research, with a very specific weapons-development objective.99 On March 23, 1983, President Reagan announced his Strategic Defense Initiative, creating “a long-term research and development program” with the aim of intercepting enemy missiles—and eliminating the threat they posed to the United States. His project, dubbed “Star Wars,” was not about creating new basic science and stands in strong contrast to, for example, the development of the digital computer.100 Overall Department of Defense spending under Reagan rose from $40.7 billion (0.48 percent of GDP) in 1980 to $76.5 billion (0.7 percent of GDP) in 1987.
However, while public spending on military R&D rose along with the defense budget, by more than 40 percent from 1979 to 1988, public spending on R&D outside the Department of Defense fell by 30 percent. On net, despite the famous Reagan military buildup, total public R&D was basically constant during his presidency.101
Particularly striking was reduced support for energy research. Following the Organization of Petroleum Exporting Countries (OPEC) embargo of 1973, atomic energy research was combined with other efforts and placed under the Department of Energy in 1977. Significant funds were expended, amounting at their peak to about 0.5 percent of federal government spending.102
However, federal government–supported energy research fell by almost 50 percent under President Reagan. The threat to national security receded as oil prices came down, in real terms, during the 1980s. In contrast with the Manhattan Project and the Apollo program, the client for energy research was not directly the government but rather the private sector—which was not consistently enthusiastic about this form of government intervention.103
By the 1990s, the situation had shifted further against federal funding of science. When Republicans took control of Congress in 1994, they took the opportunity to bring pressure on the EPA and its regulations, including by questioning the merits of the underlying science. Led by House Speaker Newt Gingrich, the Republicans also eliminated the Office of Technology Assessment, which had existed since 1972 but had apparently been too critical of the Star Wars initiative.104
The fall of the Berlin Wall in 1989 represented the end of the Soviet threat—and this removed what had been, at least since Sputnik, the major motivation behind a great deal of government support for science. There is no better illustration of this than the debate over the US supercollider facility in the early 1990s. A who’s who of physics spoke up in support of the Superconducting Super Collider that was under construction in Texas, which would have been the world’s most powerful particle accelerator (by about twenty times) and would have pushed the frontier in terms of potential discoveries for high-energy physics.105 It was to no avail—Congress canceled the project in October 1993, in part due to large cost overruns. With the Cold War over, the pressure to increase knowledge of nuclear phenomena was less compelling. This time, when scientists ran up against the politicians, the politicians won.
A major supercollider ended up being built at the CERN facility in Switzerland. And the results in terms of new discoveries have been impressive. At the level of pure science, researchers using the supercollider confirmed the existence of the subatomic Higgs boson particle.106 At a more applied level, a team in New Zealand used technology developed at CERN—in pursuit of the Higgs boson—to produce the first color 3-D x-ray scanner.107 There is no way to know if this new technology would have been developed in the United States had the supercollider been located here or whether it would lead to a new growth sector, but these are the types of scientific investment risks that the United States took in the 1950s and 1960s and that we no longer take.
Government spending on science experienced a slight renaissance during the Great Recession that began in 2008—because key parts of the Obama stimulus package were focused on investments in science and technology, particularly in the area of clean energy. That resurgence proved short-lived. Beginning in 2011, some politicians, particularly Republicans in the House of Representatives, pushed back hard on the deficit spending. In the subsequent prolonged battle over the proper role of government, publicly funded R&D proved to be very much on the chopping block.
The Budget Control Act of 2011 cut discretionary spending, which includes public R&D financing, by $1.2 trillion over the next decade, with a specific cap for each year. Subsequent budget deals doubled down on this strategy, and the net result has been a large reduction in nondefense discretionary spending, which fell by 15 percent from 2010 to 2014.108 As a direct result of these budget battles, publicly financed research and development fell from 0.98 percent of GDP in 2008 to 0.71 percent of GDP in 2018.109
Think back to the spring of 1940, with Vannevar Bush about to visit the Oval Office. The federal government had a minimal presence in supporting science. The potential relationship between government and academic research seemed fraught with complications. And powerful interests—the US Navy among them—were more than skeptical about the benefits of rapid innovation for national defense (and prosperity).
In the subsequent seventy years, attitudes changed completely. We have embraced technology and science more than any previous civilization, and we have helped spread those values and this way of organizing the economy around the world. Americans watched The Jetsons in the early 1960s, Star Trek later in that decade (and again in most decades that followed), 2001: A Space Odyssey (1968), and of course Star Wars (1977), plus a lot more science fiction subsequently.110
In the early twentieth century, American scientists were seen mostly as conservative, in political terms.111 This is not surprising; they were prosperous white men.112 Vannevar Bush and his scientific friends did not generally express positive views about the New Deal. They went to work for FDR and the federal government because they feared the rise of Germany and—correctly—anticipated that its science-based approach to war would need to be matched. Almost without exception, they felt more comfortable with Dwight Eisenhower in power.
Since that time, it seems fair to say, scientists have moved to the left and the political spectrum has shifted to the right—in ways that are not favorable to unfettered support for scientific research and its applications.113 Over the past few decades, debates about science and its implications have been widespread and sometimes virulent, including over the validity of scientists’ views on global climate change, as well as over a wide range of other issues, such as endangered species, the dangers associated with high levels of dietary sugar and fat, and whether abstinence-only approaches to birth control are effective.114
The decrease in publicly funded research relative to the size of our economy from the 1970s to the present has been significant. However, total research-and-development spending has not declined over the same period; with ups and downs, it has fluctuated around 2.5 percent of GDP since the late 1960s.
The reason is straightforward—as publicly funded research declined relative to the size of the economy, there was an offsetting increase in private-sector research and development. This raises an obvious and important question: If private invention and commercialization can replace or effectively substitute for what was previously funded by the government, is there really a problem?