1

For Our Comfort, Our Security, Our Prosperity

Here we are, standing alone. What is going to happen?

—British prime minister Winston Churchill, in conversation with James B. Conant, April 10, 1941.1

By 1940, the United States had established itself as one of the most innovative nations on earth—based not so much on scientific leadership as on practical engineering applied to sectors such as automobiles and telecommunications. What Vannevar Bush realized was that innovation of a different magnitude was needed, specifically when it came to the technology that would win the war. For example, in his view, the range and performance of combat aircraft would likely play a decisive role. American fighter planes in 1940, however, were outclassed by those of Germany and Japan.2

More broadly, it started to become clear—at least to Bush and his colleagues—that the United States needed to more urgently develop technology that could be applied to war. One obvious response would be for the government to draft scientists and put them to work in its own laboratories, along the lines of the German or Soviet model. The early success of German technology in World War II certainly recommended that model.

Another possibility would be to hand the task directly to private business—but the goal now was national defense, not making profits. Over the previous century, the private sector had racked up impressive achievements, including developing railroads, electricity, and telephones. What was the right way to break loose from the traditional profit-oriented framework of private business while retaining private initiative and the ability to move fast? To understand the strategic design choice faced by Vannevar Bush in 1940—and much of what followed during the war and after—we need to step back briefly into the history of American innovation.

THE RISE OF CORPORATE INNOVATION

The United States of America did not begin as a technologically advanced country. Largely agrarian at the time of independence, the United States was behind the United Kingdom in terms of engineering capabilities not just in 1776 but for at least the next half century. For example, in the building of canals—cutting-edge engineering between 1800 and 1820—Americans routinely relied on imported (mostly British) advice, yet frequently struggled to get it right. American canals leaked at best, and some even had to be completely rebuilt.3

American construction techniques eventually improved, however, motivated in large part by the incentive to construct robust means of transportation across, by European standards, very long distances and difficult terrain. From the experience of building in a harsh landscape came important lessons and new opportunities—seen, most clearly, in the development of railroads.

The British developed the first engine that ran on wheels and metal track.4 Britain also took the lead in building railroads, for which miles of track is one measure of development. In 1830, Britain already had over 125 miles of track, while the United States had only between 23 and 40 miles.5 Both systems reached roughly 7,000 miles of track at some point between the late 1840s and the 1850s. Then the American system grew much larger—reaching 30,000 miles by 1860, while the British system was still only around 13,500 miles in 1869 and, at its peak, 20,000 miles in 1914.6

Mileage comparisons might be considered unfair—the United States had more land area and greater distances to cover—but progress in terms of American-built locomotives was just as impressive. The United States imported its first train engines (from the UK) in 1829, but these were quickly found not ideal for American conditions, where it was more difficult to lay track and curves were often sharper. The result was a six-wheeled design (rather than the British four wheels)—and the beginning of a boom in design and production for railroads.7 Soon Americans were exporting locomotives to the world.

America had become good at practical engineering, long before it led the world in science. Alexis de Tocqueville wrote this of America as he found it in 1831:

In America the purely practical part of the sciences is cultivated admirably, and people attend carefully to the theoretical portion immediately necessary to application; in this way the Americans display a mind that is always clear, free, original, and fertile; but there is almost no one in the United States who gives himself over to the essentially theoretical and abstract portion of human knowledge.8

Tocqueville thought this reflected on the essential nature of democracy, but it may just have indicated there was a lot of practical engineering work to do—and the rewards to pure science seemed relatively small or distant in terms of payoffs.

The early American technology development experience was dominated by a few men, largely self-taught and with minimal formal science background. Samuel Morse, a professional painter, invented the telegraph in the 1830s. Cyrus McCormick, a farmer and blacksmith, developed the mechanical reaper in the 1830s (an improvement on his father’s design). Isaac Singer, an actor, came up with his own version of the sewing machine in 1851. Charles Goodyear, owner of a hardware store, invented vulcanized rubber in 1844.9

The major shift, from the 1870s and particularly during the 1880s, began with electricity—and a more corporate-driven approach to innovation. The main theoretical ideas and experiment-based proofs behind electricity had been established by researchers much earlier, almost entirely in Europe, specifically Germany and the UK. Building on that foundation were brilliant Americans with breakthrough ideas, including Alexander Graham Bell (the telephone, 1876), George Westinghouse (alternating current in the 1880s), and Nikola Tesla (multiple inventions related to electricity in the 1880s). And of course, there was the legendary figure of Thomas Edison, inventor of the light bulb and perhaps the first person to focus his efforts directly on the process and commercialization of invention—with his famous research lab in Menlo Park, New Jersey.

Before Edison, individual inventors, operating alone and with limited resources, had most of the big ideas. After Edison, and following the development of electricity, came the rise of corporate invention, a lot of lawyers, and well-financed patent wars. Corporations, looking for the next wave of invention, became much more interested in science, including hiring scientists and building labs.

Early efforts were hardly auspicious. In 1864, William Franklin Durfee built a Bessemer converter—a furnace for making steel—in Wyandotte, Michigan. He attached a “steelworks analytical laboratory”—the first industrial lab in the United States and one of the first in the world. His workers were not uniformly in favor of this form of progress. “Those who manned and managed the [Bessemer] converter looked upon the laboratory at first with amazement and then with fear. One dark night they burned the whole thing to the ground.”10

The first modern corporate R&D lab was arguably that of General Electric, founded in 1900. By 1906, this department had over 100 employees.11 By 1920, the GE lab employed 301 people, and by 1929, it had a head count of 555.12

Research at the Bell telephone companies was consolidated into a separate organization in 1910/11. According to Frank Jewett, who headed this effort—and what became Bell Labs after 1925—the industry had outgrown random invention and also what could be accomplished by engineers alone. Now, in Jewett’s assessment, those companies “most obviously dependent on science have organized research laboratories whose sole function it is to search out every nook of the scientific forest for timber that can be used.”13

By the end of World War I, almost all large industrial corporations had research labs. The Bell companies, International Business Machines (IBM), General Electric, and Westinghouse were early to develop strategies in which their engineers deliberately sought patents for new inventions and used this process to strengthen their market position.14 There had been just 45,000 American engineers in 1900; by 1930, there were 230,000, of whom 90 percent worked in industry.15 By 1940, on the eve of war, two-thirds of all science spending was in the hands of the corporate sector.16

The American private sector had become a systematically innovative place, with emphasis on understanding and developing whatever knowledge seemed likely to boost corporate profits for incumbents. Research was expensive and conducted by relatively few large firms. In the 1930s, thirteen companies employed one-third of all researchers.17

However, investing in basic science—discovery for the sake of discovery—did not make sense as a private-sector priority, and there was no money in it. The private sector was very good at what it was supposed to do: making profits and investing the proceeds in developing new products that seemed likely to generate future profits.

UNIVERSITIES CHALLENGED

In modern America, we have become accustomed to the idea that universities lead the way on basic science. Throughout the pre–World War II period, however, American universities remained small and more focused on teaching than on research. Land-grant colleges, the first of which were created in 1863, were intended to bring improved technology to agriculture and, in a few instances, also to industry. They did so in an applied fashion. But there was little interest in or money for more fundamental research.18

The best-funded and most prestigious universities preferred to offer a classical education. The first engineer graduated from Harvard only in 1854, and by 1892, the grand cumulative total of engineering graduates from that college was only 155.19

Overall, American universities barely figured in the development of practical technology, relative to industry.20 As hubs for scientific endeavor, they were dwarfed by efforts in the big European research universities and institutes. The best technical education available to an aspiring young person at the beginning of the twentieth century was without question in Germany, France, or the UK.21

Europeans won fourteen Nobel Prizes for chemistry before Theodore Richards of Harvard University won the first for the United States in 1914. The United States did not win another chemistry Nobel until 1932, during which time the Europeans won another fifteen prizes. The American winner in 1932 was Irving Langmuir, of General Electric, who had been educated in part in Germany (he had a PhD from the University of Göttingen).

The pattern was similar for the medicine and physics prizes.22 Of all the Nobel Prizes for medicine awarded from 1901 through 1932, only two went to researchers based in the United States—both worked at the Rockefeller Institute for Medical Research in New York, and both were born and educated in Europe. In physics, the first native-born American to win a Nobel Prize was Robert Millikan in 1923, the second was Arthur Compton in 1927, and the third was Carl Anderson, who did not win until 1936. By the mid-1930s, the Netherlands had won more physics prizes (four) than had people born in the United States.23

ABSENTEE GOVERNMENT

The role of the American government in scientific development and the application of technology before 1940 was consistently small.24 There was some support for weapon development, such as armories that manufactured guns. But this was all narrowly focused, and the slight increase in intensity that emerged during World War I proved fleeting.25 The Great Depression of the 1930s further tightened the already parsimonious government purse strings.

President Roosevelt attempted to organize support for unemployed and underemployed scientists, but in the face of so much other need, this went nowhere. Scientists were also divided—traditionally and still in the 1930s—on whether they wanted government support, with the potential constraints and control this implied.

In 1933, President Roosevelt convened a Science Advisory Board. The group was chaired by Karl Compton, who argued for government spending that would help employ engineers and scientists. The initiative quickly went nowhere—spending on science was not a sufficiently high priority. The relationship between leading scientists and FDR’s administration slipped to a new low.26

CAN YOU SEE ME NOW?

The tide—in terms of government funding for what private industry could not or would not do—began to turn in late August 1940. Sir Henry Tizard arrived in Washington, DC, as head of an expert team bearing information about some of Britain’s most important technological discoveries. The Battle of Britain raged through the summer and fall of 1940, with the German Luftwaffe first crippling critical airfields and then switching to the Blitz bombing of civilian areas, including London, Coventry, Birmingham, and other major cities.27 In this moment of great national desperation, Tizard and a few others persuaded Churchill’s government to put aside all conventional notions of secrecy, with the goal of receiving greater material assistance from the United States.28

Tizard’s mission famously carried all its most precious papers and artifacts in a single metal box. Once safely delivered and opened, it revealed impressive technical details on topics as diverse as gun turrets for aircraft, antiaircraft guns, the Kerrison Predictor (an automatic gunfire-control system), armor plate, torpedoes, self-sealing gasoline tanks, and explosives.29 The British had been working intensely on a myriad of engineering problems related to war, all of which were now urgent priorities. Tizard’s group was authorized to put almost every single card on the table without requesting any reciprocity.30

Of all Tizard’s offerings, without question the most immediately consequential was a small device, about the size of a hockey puck: the resonant cavity magnetron. This simple and even elegant piece of equipment created the possibility—with a lot of additional work—of smaller, more powerful, and more accurate radar sets.

In retrospect, what the Americans called Radio Detection and Ranging (RADAR) was developed independently and in secret in at least thirteen countries.31 The underlying science of radio waves emerged in the late nineteenth century based on pioneering research in Europe.32 The radio quickly became a wonder of the early twentieth century and was put into wide commercial use during the 1920s. The development of this technology for long-distance communication naturally raised the question: What other applications were possible?

Numerous people noticed the annoying way planes interfered with radio transmissions. A few of the more farsighted wondered: Could this be the basis for actually detecting the location and direction of travel of, for example, a bomber? Researchers in the United States were among the earliest to investigate this question and to propose a workable solution—to bounce radio waves deliberately off distant objects and track carefully what rebounded.33 Unfortunately, the American army and navy—the primary potential sources of funding for such work in the United States—did not regard this as a top priority for national defense. The United States, at that time, saw itself as distant from any potential enemies—and far outside the range of aircraft that could carry bombs.
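
The ranging principle itself is simple arithmetic: a pulse travels to the object and back at the speed of light, so the distance is half the round-trip delay multiplied by that speed. A minimal sketch of that calculation follows; the one-millisecond delay is an illustrative value, not a historical measurement.

```python
# Minimal sketch of the ranging arithmetic behind radar:
# a radio pulse travels out to the target and back at the speed of light,
# so range = (speed of light * round-trip delay) / 2.

SPEED_OF_LIGHT_KM_S = 299_792.458  # kilometers per second

def echo_range_km(round_trip_seconds: float) -> float:
    """Distance to a reflecting object, given the echo's round-trip delay."""
    return SPEED_OF_LIGHT_KM_S * round_trip_seconds / 2

# An echo returning after one millisecond implies a target roughly
# 150 km (about 93 miles) away; the delay here is purely illustrative.
print(f"{echo_range_km(0.001):.0f} km")
```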

In contrast, from at least the early 1930s, military leaders and top civilian scientists in Britain were increasingly focused on the dangers posed by bombers, with the potential to target large urban areas with conventional explosives or chemical weapons. The speed and effectiveness of aircraft increased significantly during the 1920s—it now took only two hours to fly from Germany to London.34 And by the 1930s it was clear that German rearmament included building a large air force against which conventional air defenses would be mostly ineffective. In a 1932 speech, leading British politician Stanley Baldwin expressed the growing fear in a memorable phrase: “The bomber will always get through.”35

In the early 1930s, none of the existing lines of technology development against the threat of bombing seemed immediately promising. In response, a committee was established, with Henry Tizard in charge.36 Under Tizard’s guidance and with a modest amount of funding, British research teams made surprisingly quick progress, and by 1938, a national radar system had been assembled that could initially detect high-flying aircraft—with extensions soon added to spot any plane that tried to approach the British Isles at low altitude.37

But the equipment involved was bulky, and the method of detection—using relatively long-wave or low-frequency radio waves—worked best for roughly establishing the position and bearing of large objects, such as bombers attacking during the day.38 In both Britain and the United States—and, for that matter, in Germany—researchers were keenly aware of the advantages of both miniaturizing key elements of the technology and making it able to detect much smaller objects, like the periscopes of submarines.

On the American side, prewar radar research was conducted in a fairly low-key manner, characteristic of the time period, but rather quaint when seen from a modern perspective. A significant part of the civilian work was led by Alfred L. Loomis, a lawyer who had taught himself a great deal of modern physics and built a laboratory in his house. It was a big house, the lab was well-equipped, and Loomis was a good scientist. But literally and metaphorically, it was an amateur effort. When the British saw Loomis’s work in fall 1940, they were polite, but it was obvious that the Americans were far behind. The Loomis team’s accomplishment was modest—essentially a forerunner of the police radar gun.39

Still, Loomis was a great networker who was close friends with—and a source of funding for—scientific luminaries such as Ernest Lawrence, winner of the Nobel Prize for physics in 1939; James Conant, chemist and president of Harvard; Karl Compton, president of MIT; and, of course, Vannevar Bush. Bush put Loomis in charge of microwave-related research on the National Defense Research Committee (NDRC).

On September 19, 1940, at the Wardman Park hotel in Washington, DC, Tizard’s team revealed their breakthrough, the cavity magnetron, which could emit a large amount of power at what was then a very short wavelength.40 After further technical discussion, which took place at Loomis’s house in Tuxedo Park, New York, on the weekend of September 28–29, 1940, Loomis fully understood that this technology could turn the tide of war.41

While the British had cracked a key piece of the scientific puzzle, a working microwave radar system required much more, including a receiver, a way to handle signals without interference, and overall robustness. Ideally, it also had to fit in the nose of an aircraft. And it obviously had to function effectively under a wide variety of difficult conditions. The British, under siege, did not have the resources necessary to take these next steps. The Americans had scientists and available industrial capacity—but would they take on the work?

Bush strongly believed in delegation, and he trusted Loomis, a bond that had been formed over the previous decade, during which time Loomis had risen to prominence as the convener of scientific gatherings and as a generous funder of experiments, increasingly around radio waves.42 If Loomis and his team said the United States should back this technology, that is what they would do.

Once the decision was taken to back this radar development as fast and as far as it would go, an argument broke out—who exactly should be in charge? For Frank Jewett, president of Bell Labs, there was no question: the project should go to Bell Labs, cooperating perhaps with other bastions of private enterprise.43

Bush and Loomis respected Jewett and were themselves no fans of government-led anything; both had built their careers either largely independent of the government (Bush) or keeping ahead of regulation (Loomis’s main fortune was made in electricity generation and distribution, the unregulated frontier of the 1920s and 1930s). In his 1970 memoir, Bush was blunt: “Like many a man from New England, I had snorted at the New Deal, and I had been appalled at some of F.D.R.’s political theory and practice.”44

But Bush was also intensely pragmatic—and not at all ideological in the modern sense of the word. What was needed was not incremental improvement or marginal adjustments; it was fundamental breakthroughs and at great speed. Relying on the private business sector would surely result, in his view, in less than was needed. Seeking profits was fine for incremental change in the civilian economy, but not when the goal was big breakthroughs for military applications. Bush recognized also that this was no time to be hoarding information about what worked and what did not work—sharing knowledge freely and without restriction would result in faster progress but was not in the private sector’s interest.

Bush therefore preferred for the project to be based at a university, to more effectively mobilize faculty from around the country.45 Bush, with Loomis in strong agreement, settled on MIT. Karl Compton was not immediately in agreement, fearing a major disruption to the usual work of his university. But Compton was quickly persuaded that the national interest came first.

It helped that the money was good. Bush arranged contracts directly between the federal government and universities, erring on the side of generosity—paying the “full costs” of these research activities, which included overhead—“the portion of its general expenses properly attributable to the added operation.”46 Fortunately, Bush was able to persuade the House Committee on Appropriations that this was the best way to encourage innovative work.

Bush was also completely clear that the NDRC owned full rights to all inventions developed with its support, although its primary purpose was to help develop more good ideas into useful form. Unlike the usual situation during peacetime, patents were no impediment to sharing ideas across researchers, irrespective of where they were working.

Bush and his colleagues understood fully the key issue about invention. New ideas benefit the person who has those ideas, but there are also major positive potential effects on others who are pursuing related lines of inquiry. With effective patent pooling under the NDRC, as researchers working on radar—or anything else—witnessed ideas being developed by others, they could more quickly decide to adjust their own direction of work.

With Loomis in charge of the radar microwave committee of the NDRC, the mission was to scale up and apply invention at speeds never previously seen. The effort was launched in mid-October 1940, with plans to hire the first twelve university researchers and arrangements made to contract with private-sector suppliers for components.47

MIT immediately made ten thousand square feet of lab space available.48 Loomis then toured the country’s leading scientific outposts, recruiting top talent. He was joined in this effort by Ernest Lawrence from the University of California at Berkeley. Loomis and Lawrence persuaded Lee A. DuBridge, a nuclear physicist at the University of Rochester, to become director. As assistant director, they brought in future Nobel Prize winner Isidor I. Rabi from Columbia, along with Jerrold Zacharias and Norman Ramsey (who had just joined the faculty at the University of Illinois).49 Luis Alvarez, another future Nobel Prize winner, and Edwin McMillan joined from Berkeley. Lawrence was “so successful in rallying his colleagues to the cause that by November one eminent physicist was joining the staff every day.”50

The pace of work was remarkable. The first lab meeting was held on November 11, 1940; thirty physicists were at work by mid-December; and soon a rudimentary radar system was operating and being tested on the roof of an MIT building. At its peak, the Radiation (Rad) Lab—the code name for this effort—employed nearly four thousand people and designed, by one estimate, half of all the Allied radar systems that were in use by 1945.51 Among the major achievements were a gun-laying radar and an airborne interception radar (used by night fighters). The lab also developed a separate bombing radar and the first-ever worldwide radio navigation system—known as Long-Range Navigation (LORAN).52

EMBRACING THE FUTURE, EVENTUALLY

Persuading the military to buy some new gadgets was not hard, particularly as rapidly expanding budgets coincided with pressure from the secretary of war to adopt the latest technology. But getting the army and particularly the navy to actually integrate radar and related tools into battlefield decision-making was much harder. Working this out would lay the groundwork for future productive relationships between civilians and the country’s military—a major change of mind-set about the inherent usefulness of science and innovation in national defense.

No less a figure than Admiral Ernest J. King, chief of naval operations and commander in chief of the US fleet, played down the importance of radar in 1941: “We want something for this war, not the next one.”53 This reluctance to embrace new technology was not unusual among top officers. The US Army resisted using rockets against German tanks and never adopted an infrared sight developed for its own tanks, even though it was proven to help night vision for sniper rifles. When the NDRC proposed an amphibious truck, the DUKW, “the head of the Service of Supply, said to me [Bush] forcibly that the Army did not want it and would not use it if they got it.” The DUKW was developed and proven a great success—for example, landing troops and equipment during the Normandy invasion.54

Despite initial military conservatism, science had a lot to offer—as the disaster at Pearl Harbor made apparent. The first wave of attacking Japanese planes was spotted by radar technicians at a range of 132 miles, using US Army mobile SCR-270 long-wave radar sets. With almost an hour in hand, there was time to launch at least some defensive air cover and to get ships under way, but the radar warnings were ignored by the responsible officer.55 In perfect hindsight, the power of radar—and the destruction caused by ignoring the information that radar systems could provide—should have been evident, but the military was still not fully convinced.
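
The "almost an hour" figure follows from simple arithmetic: warning time is detection range divided by the attackers' closing speed. The sketch below assumes a cruise speed of roughly 150 miles per hour, which is an illustrative assumption rather than a figure from the source.

```python
# Rough arithmetic behind the Pearl Harbor warning time:
# warning minutes = detection range / closing speed of the incoming aircraft.

def warning_minutes(detection_range_miles: float, closing_speed_mph: float) -> float:
    """Minutes of warning for aircraft detected at a given range and speed."""
    return detection_range_miles / closing_speed_mph * 60

# The 132-mile detection range is from the text; 150 mph is an assumed cruise speed.
print(round(warning_minutes(132, 150)))  # roughly 53 minutes
```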

The navy’s hesitance to fully embrace all the changes made possible by radar was perhaps most costly in what became known as the Battle of the Atlantic. Convoys—groups of merchant ships with naval escorts—had proven reasonably effective in World War I against submarines. But at the start of World War II, it became evident that the Germans had shifted tactics, including attacking at night, on the surface, and in packs.

The losses to Allied shipping were devastating and unremitting. In June 1942, the Allied fuel supply came under pressure due to U-boat attacks, and there were questions about whether the US Navy could defend even its own Atlantic coastal waters. In late 1942, U-boats focused on northern transatlantic convoys, which lost an average of 26 ships per month. In early 1943, the loss rate for those convoys actually accelerated—reaching 49 ships in March. By the last half of 1942, the Germans were building U-boats faster than the Americans, British, and Canadians could sink them.56

Shipping losses on this scale convinced the navy that it was time to effectively deploy radar technology—and this helped change the war. Centimeter airborne radar, hot from the Rad Lab, could find submarines on the surface. Sonobuoys and new methods of magnetic detection meant submarines could be tracked more accurately under the waves. The antisubmarine rocket and target-seeking torpedo could then be deployed effectively.

The results were stunning.57 In forty-four months of war up to May 1943, the Allies sank 192 U-boats; in the next three months—May, June, and July 1943—they sank 100. The ratio of ships sunk to U-boats destroyed shifted dramatically, from 40:1 at its worst to 1:1.58 More raw materials could now flow into the United States, and American manufactured goods—such as guns, ammunition, vehicles, aircraft, and food—could reach the front largely unimpeded. Radar had demonstrated that new technology was no longer an optional add-on. It had become central to war.
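
To put those figures on a common footing, the monthly sinking rate makes the turnaround plain; the small calculation below uses only the numbers quoted above.

```python
# Monthly U-boat sinking rates implied by the figures in the text.

uboats_sunk_first_44_months = 192   # through April 1943
uboats_sunk_next_3_months = 100     # May, June, and July 1943

rate_before = uboats_sunk_first_44_months / 44   # about 4.4 per month
rate_after = uboats_sunk_next_3_months / 3       # about 33 per month

print(f"before: {rate_before:.1f}/month, after: {rate_after:.1f}/month, "
      f"roughly {rate_after / rate_before:.0f} times the earlier rate")
```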

The point was driven home at the Battle of the Bulge in 1944. The Germans attacked under the cover of bad weather, reckoning correctly that lack of visual contact made it hard for conventional artillery to be effective. The Americans, again based on early British ideas, had made great strides with proximity fuses—which exploded shells close to their targets, based on a form of radio frequency sensing (an application of short-range radar). The effect against German ground forces proved devastating. The same technology was also used against enemy planes and V-1 flying bombs; it increased the effectiveness of the five-inch antiaircraft batteries by a factor put at “probably about seven.”59

THE POSTWAR INVENTION BOOM

Vannevar Bush was a master of managing perceptions. He understood firsthand that “engineers” of the day were regarded by senior military personnel as “in all probability a thinly disguised salesman, and hence to be kept at arm’s length”—and he insisted that his team be referred to consistently as scientists. In one sense, this was accurate, as the people he hired, particularly for the Rad Lab, were actually scientists, mostly physicists.60

Realistically, however, most of their wartime efforts were devoted to applications that should more accurately be regarded as engineering—applications of existing knowledge to practical problems—rather than as science, the creation of new knowledge through theory and controlled experiments. Still, their strong training in science served these “engineers” well once they could take their wartime experiences, including with hands-on electronics, back to their labs—and onto inventions such as digital computers and semiconductors.

The postwar invention boom was boosted by the fact that the devices and processes developed under the NDRC (and its successor, the higher-profile and better-funded Office of Scientific Research and Development [OSRD]) were to some extent rough and ready—everyone was in a hurry to make things work and deploy robust versions into combat situations. The flip side was that many interesting problems became more obvious, both in terms of basic science and potential further improvements for products.

For example, the amphibious truck DUKW later became the model for snowmobiles.61 DDT, a newly developed chemical, found much broader use—beginning in anti-malaria campaigns, it soon spread to become a much more widely used (and arguably overused) pesticide.

As a direct continuation of the government-supported wartime aerospace program, it was the Americans who brought the next generation of jet engine–based flight technologies to scale. It was America, not exhausted and cash-strapped Britain or broken Germany, which proved best positioned to take advantage of the related commercial developments.62 The early engines for jet planes were developed in the late 1940s and early 1950s, initially for military applications.63

In 1953, building on its military-related efforts—including development of the Boeing KC-135 tanker—Boeing produced its four-engine 707 passenger jet.64 This was followed by other new products at regular intervals—including the 747 in 1969. By the early 1980s, Boeing was one of the country’s leading exporters; in some years, it sold more overseas (in dollar terms) than any other company.

Of all the wartime science projects, radar can undoubtedly claim the longest list of useful spin-off products.65 Modern commercial air travel is made possible by hundreds of radar systems across the United States. Much of the useful information in weather reports is based on some form of radar.

There were also more indirect effects. The transistor emerged in part as a consequence of work done on solid-state semiconductor crystals for radio receivers.66 Cathode ray tubes and memory for digital computers were the immediate descendants of World War II radar systems.

Microwave telephones and early television networks received significant assistance from radar. Beginning in 1951, television added ultrahigh-frequency (UHF) transmission—carried coast to coast over 107 microwave towers built by AT&T.67 New antennas—for example, built by RCA—were needed.

Astronomy was transformed by the creation of radio telescopes. Particle accelerators and microwave spectroscopy can also trace their lineage back to the MIT campus—as can nuclear magnetic resonance (the basis for modern magnetic resonance imaging, MRI) and the maser (used in atomic clocks and spacecraft, and forerunner of the laser), for which work Nobel Prizes were awarded in 1952 and 1964, respectively.

And, of course, there is the microwave oven. Raytheon had made magnetron tubes during the war and now needed a new market. The ability of radio waves to heat food was either the result of years of careful study or due to a candy bar melting serendipitously, depending on which version of history you prefer.68 At first, the machines proposed were large and expensive, more appropriate for professional use. Eventually, the first generally affordable microwave oven appeared in 1967. It was named, descriptively if not appealingly, the Radarange.

Old Washington hands like to emphasize that “personnel is policy”—meaning that who you hire has a major impact on what gets done. But conversely, who gets trained to do what, while working for the government, can have a significant impact on what they think about—and invent—later. Judged in those terms, the wartime science effort propelled a generation forward in terms of scientific and industrial achievement. Ten Nobel Prizes can either be traced back to work done at the Rad Lab or were won by people who spent formative years building radar systems.69

Most of the postwar top science advisors to government cut their teeth somewhere in the OSRD, most commonly at the Rad Lab. Right through to the Nixon administration, thinking about science policy—and what exactly to support—was shaped by people who had worked alongside Vannevar Bush.

ENDLESS MONEY

It’s hard to imagine now, but Bush’s research organization had access to essentially unlimited funding. Once the NDRC got under way, Bush saw his major role as managing the relationship with Congress, particularly the appropriations committees.

Initially, Bush suggested to his associates that they might need to spend $5 million per year. In 1942, what had become the OSRD spent $11 million. It spent $52.2 million in 1943 and $86.8 million in 1944, peaking at $114.5 million in 1945.70 The combined R&D budgets of the air force, the army, and the navy peaked at $513 million.71

Those numbers do not include the Manhattan Engineer District, which built the atomic bomb. Research and development on nuclear weapons was essentially zero in 1940. By 1943, this work cost $77 million, jumping nearly tenfold to $730 million in 1944 and peaking at $859 million in 1945. The Manhattan Project became one of the largest industrial-scientific projects in the history of the world to that date.72 At its peak, this work employed 130,000 people.73

Bush and his colleagues repeatedly stressed that the constraint on their activities was the number of available engineers and scientists—and they pushed back hard against efforts to draft these specialists into frontline forces. Money, however, was never an issue. The top—and perhaps only—priority was inventing what could be useful and important to the war effort. Ironically, postwar American commercial success was helped greatly by inventions that emerged from the simplest of noncommercial motivations: patriotism and fear of a smart enemy, hell-bent on new applications of scientific knowledge.

NEW FRONTIERS

Civilian physicists had been proven spectacularly right during World War II, not just in their theoretical thinking about hidden power in the universe but also in their ability to harness that power in practical ways. The world had changed, completely and forever. Science and its intelligent applications now trumped everything. The question now was how to harness this idea for the broader social good.

With the war drawing to a close, Bush set himself the task of articulating what should come next—in terms of not just funding for science but how that funding should be structured and supervised. In Science: The Endless Frontier, his 1945 report for the president, Bush pulled together the best thinking about what had worked during the war and what could be done next. The wartime effort had focused on mobilizing what was already known, either in terms of specific facts or, more importantly, the skills and abilities of individual scientists. As Bush put it later, “The war effort taught us the power of adequately supported research for our comfort, our security, our prosperity.”74

The priority task following the war was creating new knowledge. Bush had not changed his pro-free-market beliefs, but he felt that science was a frontier—and the American federal government had always been comfortable expanding frontiers. “It is in keeping also with basic United States policy that the Government should foster the opening of new frontiers and this is the modern way to do it.”75

Attempting to catch the incipient postwar mood, Bush led off his report not with weapons but with the many potential ways that lives could be saved and improved—the first substantive section in his report is titled “For the War Against Disease.” Bush’s statement of the potential impact on health and longevity was not exaggerated: “It is wholly probable that progress in the treatment of cardiovascular disease, renal disease, cancer, and similar refractory diseases will be made as the result of fundamental discoveries in subjects unrelated to those diseases and perhaps entirely unexpected by the investigator.”76

Industrial research was important, but this would always be of a more applied nature.

Basic research leads to new knowledge. It provides scientific capital. It creates the fund from which the practical applications of knowledge must be drawn.… Today, it is truer than ever that basic research is the pacemaker of technological progress.… A nation which depends upon others for its new basic scientific knowledge will be slow in its industrial progress and weak in its competitive position in world trade, regardless of its mechanical skill.77

Bush did not advocate continuing the organization of research along the relatively centralized lines of the Rad Lab or the Manhattan Project. The Rad Lab was shut as soon as the war ended. The Manhattan Project was brought more completely under military control—and, understandably, became less oriented to breakthrough ideas and more about incremental adjustments (and nuclear testing). The Atomic Energy Commission was established in 1946.78

Moreover, a key constraint was the number of skilled scientists who could be trained—the relevant chapter of Endless Frontier is titled “Renewal of Our Scientific Talent.” There was a wartime-induced skills deficit—talented people had not gone to graduate school, because they had been pulled into the armed forces. But more than making up that deficit, the United States needed to train more scientists per year, in particular by broadening the pool of potential students—which meant finding ways for people from lower-income backgrounds to afford higher education.

Bush believed strongly that university-based research, organized through contracts with the federal government, was the way to make progress. These grants should be awarded on a competitive basis to the best scientists, and a relatively strong National Research Foundation with an influential board of directors should run the process. This funding should extend not just to research projects but to the financing of advanced scientific education—and to increasing the number of trained scientists.79

Wartime service for young people meant there was a deficit, relative to what otherwise would have been the case, of about 150,000 science and technology students. To increase scientific capacity in a meaningful way, college needed to become more affordable for more people, including through the provision of scholarships.

In 1940, over half of the US adult population had left school with no more than an eighth-grade education; only 6 percent of men and 4 percent of women had completed college.80 Between 1940 and 1960, college attendance more than doubled—meaning there were an additional two million students enrolled. The number of instructional faculty in higher education increased commensurately from around 110,000 to just over 280,000.81

The university-based model paid dividends, as the United States was catapulted to the forefront of global scientific achievement—aided by an influx of talented foreign scientists fleeing either Nazism or Communism. Prior to 1930, ninety Nobel Prizes were awarded for physics, chemistry, and medicine—and the United States had picked up only five (6 percent of the total).82 In the 1930s, the United States did better, winning ten science Nobel Prizes—or 28 percent of the total.83

There was a jump up in the 1940s—the United States won fourteen of thirty prizes awarded. This was the new normal. In subsequent decades, the United States never won less than 49 percent of all the science prizes, and it peaked at 72 percent in the 1990s—a remarkable forty-three out of sixty total prizes.84

The United States, long a nation of practical engineers, was becoming a place that valued science and supported scientists—largely because the connections from theory and the laboratory to practical applications were becoming much more apparent. The spread of new technology through more efficient machines and better factory design was accelerated and improved, on balance, by the war effort—reaching a broad range of different activities.

With the removal of wartime controls, the potential productivity gains were obvious. What, however, would be the implications for jobs? Who would gain and who would lose from this surge forward in technology?

MIDDLE-CLASS MIRACLE

In Kurt Vonnegut’s first novel, Player Piano, published in 1952, automation has become so advanced that workers with only a high school education are not needed to run factories.85 Managers with higher degrees in engineering are in charge of design and operation. Machines that break down are discarded rather than repaired. Salaries are high for those still working; everyone else gets menial tasks provided—outside of factories—by the government at subsistence wages.86

Vonnegut’s dystopia articulated the fears of many people who had experienced the 1940s (and the Great Depression of the 1930s) and who could see the transformation of American production through the application of science and science-based engineering. Specifically, what Vonnegut and others anticipated was not that better machines would destroy all jobs but rather that they would make some people—with a great deal of appropriately technical education—more productive, while the need for less-educated people (or any kind of manual work) in factories would decline.

In the terminology of modern economics, this phenomenon is called skill-biased technological change. While Vonnegut was obviously dramatizing the effect, a great deal of subsequent research has confirmed that this is part of what happened, over time, after World War II.87 Automation was a well-established idea that was carried to a new theoretical and practical level during the war, including in the development of systems that controlled the automatic aiming and firing of antiaircraft guns.88 But, as Vonnegut’s story highlights, automation also creates a higher level of demand for skilled labor, as such workers become more productive by working with the new machinery.

If increased demand for skilled labor is the only or predominant change, we would expect the relative wage of skilled people compared to unskilled people—known as the skill premium—to increase, perhaps sharply. Some people—perhaps relatively few—would become better off, while most people would not benefit directly (or could actually be worse off, as in Vonnegut’s novel).

However, what if the supply of skilled people increases at a pace that at least roughly matches the arrival of machines that make skilled people more productive? In that case, the skill premium may not increase much.89 Rather, average wages for most of the population would increase—allowing them to afford more goods, buy houses, and perhaps even start to save for retirement or their children’s education. There are also important spillover effects to local economies and to wages for all skill levels, as employment in construction and retail sectors increases.90
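
One way to see this logic in miniature is the relative supply-and-demand framing used in later research on the skill premium: the log wage ratio rises with relative demand for skill and falls with relative supply, scaled by an elasticity of substitution. The sketch below is purely a stylized illustration; the demand shifts, the supply growth, and the elasticity value are assumed numbers, not estimates from the period.

```python
import math

# Stylized illustration of the skill premium under relative supply and demand:
# log(w_skilled / w_unskilled) = (demand_shift - log(relative_supply)) / sigma,
# where sigma is the elasticity of substitution between skill groups.

def log_skill_premium(demand_shift: float, relative_supply: float, sigma: float = 1.4) -> float:
    """Log wage ratio of skilled to unskilled workers (all inputs are assumptions)."""
    return (demand_shift - math.log(relative_supply)) / sigma

# Technology raises relative demand for skill, but if the GI Bill and expanding
# colleges raise relative supply roughly in step, the premium barely moves.
before = log_skill_premium(demand_shift=0.50, relative_supply=1.00)
after = log_skill_premium(demand_shift=0.80, relative_supply=1.35)
print(f"log skill premium before: {before:.2f}, after: {after:.2f}")  # roughly unchanged
```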

This path—scientific innovation matched by newly skilled workers seeking jobs—proved to be the broadest legacy of the technological breakthroughs of the 1940s. The timing could not have been more propitious, facilitating the rapid switch from wartime to civilian production and making it possible to sustain a high rate of growth through the 1960s. It was only possible, however, because of the commensurate increase in higher education—with a lot more people attending college.

After World War I, discharged veterans received sixty dollars and a ticket home; the result was a great deal of resentment—including a march on Washington in 1932. Taking on board that lesson, the Servicemen’s Readjustment Act of 1944—known as the GI Bill of Rights—was designed to ease the transition back into civilian life. “[The GI Bill] was seen as a genuine attempt to thwart a looming social and economic crisis. Some saw inaction as an invitation to another depression.”91

The GI Bill provided assistance with unemployment insurance and assistance in buying a house.92 It also provided tuition and other financial support if veterans decided to continue their education.

Veterans accounted for nearly half of all college admissions in 1947. Perhaps the single best indicator of the growing impact of science on the US economy is the choices made by those veterans.93 In the official reckoning, 7.5 million veterans took advantage of the legislation; of them, more than 2 million attended some form of college. Nearly three-quarters of a million people took scientific courses.94

Average wages increased steadily from the late 1940s, through the 1950s, and into the 1970s.95 The skill premium remained at its immediate postwar level, although the labor force as a whole became much more skilled both in terms of years of education and in more specialized technical skills. The number of engineers in the workforce grew rapidly—from about 0.5 percent to 1.5 percent of all employment during the 1940s and 1950s.

At the same time, there was a very real sense that more people were participating in economic gains, certainly compared with the 1930s (when long-term unemployment was a major problem) and even compared with the 1920s. A majority of workers held white-collar jobs as managers, teachers, salespersons, or other office employees. Firms provided implicit long-term employment contracts and good benefits. Long-standing class distinctions began to fade.96 The number of cars in the United States doubled during the 1950s, from thirty-nine million to seventy-two million—and by 1960, Americans owned more cars than the rest of the world put together. There were eight large shopping centers in the entire country in 1945; by 1960, there were 3,840.97 More than 3,000 drive-in movie theaters were operational by 1956.98 The Federal-Aid Highway Act of 1956 helped connect the country through a network of interstate highways. Motels sprang up around the country.

The Veterans Administration provided low-cost mortgages to 2.4 million war veterans. Before World War II, around 40 percent of American families owned their own homes; this rose to 62 percent by 1970.

Health indicators also improved, driven in part by improved nutrition but also by medical breakthroughs that had been accelerated by the war—and which were subsequently pushed forward by what became the National Institutes of Health and the National Science Foundation. Antibiotics were, for the first time, readily available. Streptomycin proved, at least initially, to be a wonder drug against tuberculosis. Childhood vaccination reduced or eliminated some previous scourges, such as scarlet fever and diphtheria. In 1939, on the eve of the United States’ entry to World War II, life expectancy was 62.1 years for men and 65.4 years for women. Just a decade later, it had risen to 65.2 and 70.7, respectively—an impressive improvement.

Not everyone benefited equally, of course—there is a dark side to economic miracles. The movement of commerce and population to the suburbs meant that some people were left behind in inner cities, without the skills and financial resources needed to remain prosperous. Ongoing racism and sexism meant weaker job opportunities for women and minorities. Access to health care improved, although it remained much better for white Americans than for minorities. But overall, there was broad progress for the newly emergent middle class.

POLITICS OF A NEW GLOBAL ORDER

World War II involved rapid scaling up of American production of military equipment. In four years, the United States built three hundred thousand planes, six hundred thousand jeeps, two million army trucks, and twelve thousand large ships. Aluminum production increased fourfold, and steel production rose nearly four times. Auto manufacturers switched to produce vehicles and components for the military, including nearly four hundred thousand aircraft engines (half of all US production), learning important lessons about reliability along the way.99

At their peak, defense-related jobs constituted 40 percent of employment.100 Most of those government contracts ended abruptly after the surrender of Japan, and mass unemployment seemed entirely possible. Following the end of World War I, when the necessary conversion away from military production was on a significantly smaller scale, unemployment reached 5.2 percent in 1920 and peaked at 11.7 percent in 1921.101 Unemployment during the worst years of the 1930s was still a recent and traumatic memory for many—measured rates had ranged between 20 and 25 percent.

In 1945, there were 11.43 million people in the armed forces, compared with a total civilian labor force of 53.86 million. Demobilization was rapid. The military was down to 1.59 million people in 1947, while the civilian labor force (age fourteen and older) climbed to over 60 million.102 Unemployment was practically nonexistent during the war—1.2 percent of the civilian labor force in 1944—and there was a slight rise to 3.9 percent in 1947. Over the next decade, the labor force grew by more than 7 million workers, yet unemployment stayed consistently below 5 percent.103 How was this possible?

One part of the answer is increased exports, through easier access to markets around the world. US manufacturing companies, based on improved applications of science, created new and improved products for which there was potential demand around the world.104

Lowering tariffs had long been a bone of contention in American politics, with significant parts of industry arguing that protectionism (taxes on imports) was essential for prosperity.105 However, between 1939 and 1945, the world’s trading picture changed dramatically.106 The United States had provided the material goods that its allies needed to fight effectively. With the end of hostilities, the United States moved to provide what was needed for rebuilding—and offered cheap loans to finance those purchases.107

US sectors that had an export surplus—exporting more than they imported—were now strongly in favor of more open trade. These included industries that had benefited from the wartime scientific push on electronics, engine design, and better chemistry, including machinery, vehicles, and chemicals.108 For those sectors, both business leaders and trade unions were not opposed to tariff reductions on goods coming into the United States—if the quid pro quo was increased access for American goods to overseas markets.109

The United States helped build a global trading system within which American companies could export first to Europe and Japan, and increasingly to other countries with rising income levels. This open trade strategy worked, in terms of helping sectors that had export and other growth potential. Some of the highest rates of growth from 1947 to 1973—over 6 percent per annum on average—were recorded by electrical machinery and chemicals as well as telephones and other communication services. Rates of productivity growth in those sectors were also among the highest in the nation.110

A new trade policy helped promote sectors with potential for global growth. Who gained? At least in the immediate postwar era, a broad swath of the American middle class took a major step forward.

MAKING AMERICA GREAT

The first intense technology-based arms race, 1939–1945, was won by Americans—native born and recent immigrant—in a remarkable come-from-behind fashion, specifically by giving civilian scientists the mandate to invent and to think very far outside what was regarded as reasonable or established practical knowledge. This was not science practiced by lone self-financed inventors, such as had prevailed in the nineteenth century, or by corporations, which had dominated the research-and-development landscape in the early twentieth century. The winning teams in World War II were university-based researchers backed by a vast amount of taxpayer money. This had a major positive impact on the economy and people’s jobs.

Much of this postwar impact was unintentional. The priority during the war was just on winning—and on finding ways to make the military effort more meaningful. The United States had stumbled into a new way to organize and finance science, and it quickly found ways to commercialize those new ideas.

After the war, the GI Bill was a stroke of brilliance and luck—strengthening skills for millions of people at just the right time to match the changing nature of machines and organizations. The shift in US trade policy was a logical continuation of the desire to increase the number of good jobs. The United States made goods that people around the world wanted to buy.

World War II called forth collective efforts in an unprecedented manner, with immediate and also long-lasting effects. Existing ideas had been focused onto the war effort, to great effect. There had also been some breaking down of barriers to innovation. The war had challenged long-standing American ideas about the role of government—and how potentially to structure its productive relationship with the private sector. How much of this new model could be sustained?