In Kenya’s Great Rift Valley, Samson Kamau sat at home, wondering when he’d be able to get back to work. It was April 2010. He should have been in a greenhouse on the shores of Lake Naivasha, as usual, packing roses for export to Europe. But the outbound cargo flights were grounded because the Icelandic volcano Eyjafjallajökull had, without sparing the slightest thought for Samson, spewed a cloud of dangerous ash into Europe’s airspace.
Nobody knew how long the disruption might last. Workers like Samson feared for their jobs; business owners had to throw away tons of flowers that were wilting in crates at Nairobi airport.1 As it happened, flights resumed within a few days. But the interruption dramatically illustrated just how much of the modern economy relies on flying, beyond the ten million passengers who get on flights every day.2 Eyjafjallajökull reduced global output by nearly $5 billion.3
You could trace the extent of our reliance on air travel to many inventions. The jet engine, perhaps; or even the airplane itself. But sometimes inventions need other inventions to unlock their full potential. For the aviation industry, that story starts with the invention of the death ray.
No, wait—it starts with an attempt to invent the death ray. This was in 1935. Officials in the British Air Ministry were worried about falling behind Nazi Germany in the technological arms race. The death ray idea intrigued them: they’d been offering a £1,000 prize for anyone who could zap a sheep at a hundred paces. So far, nobody had claimed it. But should they fund more active research? Was a death ray even possible? Unofficially, the Air Ministry sounded out Robert Watson Watt of the Radio Research Station. And he posed an abstract mathematical question to his colleague, Skip Wilkins.
Suppose, just suppose—said Watson Watt to Wilkins—that you had eight pints of water, one kilometer above the ground. And suppose that water was at 98 degrees Fahrenheit and you wanted to heat it to 105 degrees. How much radio frequency power would you require, from a distance of five kilometers?
Skip Wilkins was no fool. He knew that eight pints was the amount of blood in an adult human. While 98 degrees was normal body temperature, 105 degrees was warm enough to kill you, or at least make you pass out—which, if you’re behind the controls of an airplane, amounts to much the same thing.
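For the curious, the sum can be sketched on the back of an envelope. The version below, in Python, is only an illustration of the kind of arithmetic involved: the ten-second heating time, the half-square-metre target, and the thousand-fold antenna gain are assumptions of mine, not figures from the historical exchange.

```python
import math

# Rough sketch of the "death ray" arithmetic.
# Heating time, target area, and antenna gain below are illustrative assumptions.

PINT_LITRES = 0.568                        # imperial pint in litres
mass_kg = 8 * PINT_LITRES                  # eight pints, treated as water (~1 kg per litre)
c_water = 4186.0                           # specific heat of water, J/(kg*K)
delta_T_K = (105 - 98) * 5.0 / 9.0         # 7 Fahrenheit degrees expressed in kelvin

energy_J = mass_kg * c_water * delta_T_K   # roughly 74 kJ just to warm the "blood"

heating_time_s = 10.0                      # assume the ray must act within ten seconds
target_area_m2 = 0.5                       # assumed cross-section of a seated pilot
power_at_target_W = energy_J / heating_time_s
required_flux = power_at_target_W / target_area_m2   # watts per square metre at the target

range_m = 5_000.0
sphere_area_m2 = 4 * math.pi * range_m**2  # an undirected signal spreads over this area
antenna_gain = 1_000.0                     # a very generous directivity for 1930s hardware

transmitter_W = required_flux * sphere_area_m2 / antenna_gain

print(f"Energy to heat the blood: {energy_J / 1e3:.0f} kJ")
print(f"Transmitter power needed: {transmitter_W / 1e9:.1f} GW")
```

Even on these charitable assumptions, the transmitter comes out at several gigawatts, far beyond anything that could have been built in 1935.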
So Wilkins and Watson Watt understood each other, and they quickly agreed the death ray was hopeless: it would take too much power. But they also saw an opportunity. Clearly, the ministry had some cash to spend on research; perhaps Watson Watt and Wilkins could propose some alternative way for them to spend it?
Wilkins pondered: it might be possible, he suggested, to transmit radio waves and detect, from the echoes, the location of oncoming aircraft long before they could be seen. Watson Watt dashed off a memo to the Air Ministry’s newly formed Committee for the Scientific Survey of Air Defence: Would they be interested in pursuing such an idea?4 They would indeed.
What Skip Wilkins was describing became known as “radar.” The Nazis, the Japanese, and the Americans all independently started work on it, too. But by 1940, it was the Brits who’d made a spectacular breakthrough: the resonant cavity magnetron, a radar transmitter far more powerful than its predecessors. Pounded by Nazi bombers, Britain’s factories would struggle to put the device into production. But America’s factories could.
For months, British leaders plotted to use the magnetron as a bargaining chip for American secrets in other fields. Then Winston Churchill took power and decided that desperate times called for desperate measures: Britain would simply tell the Americans what they had and ask for help.
So it was that in August 1940 a Welsh physicist named Eddie Bowen endured a nerve-racking journey with a black metal chest containing a dozen prototype magnetrons. First he took a black cab across London to Euston station: the cabbie refused to let the clunky metal chest inside, so Bowen had to hope it wouldn’t fall off the roof rack. Then a long train ride to Liverpool, sharing a compartment with a mysterious, sharply dressed, military-looking man who spent the entire journey ignoring the young scientist and silently reading a newspaper. Then the ship across the Atlantic—what if it were hit by a German U-boat? The Nazis couldn’t be allowed to recover the magnetrons; two holes were drilled in the crate to make sure it would sink if the boat did. But the boat didn’t.5
The magnetron stunned the Americans; their research was years off the pace. President Roosevelt approved funds for a new laboratory at MIT—uniquely, for the American war effort, administered not by the military but by a civilian agency. Industry got involved; the very best American academics were headhunted to join Bowen and his British colleagues6 in improving the device.
By any measure, the MIT lab—which became known as the Rad Lab—was a resounding success. It spawned ten Nobel laureates.7 The radar it developed, which could accurately track planes and submarines, helped win the war.8 But urgency in times of war can quickly be lost in times of peace. It might have been obvious, if you thought about it, that civilian aviation needed radar, given how quickly it was expanding: in 1945, at the war’s end, U.S. domestic airlines carried 7 million passengers; by 1955, it was 38 million.9 And the busier the skies, the more useful radar would be at preventing collisions.
But rollout was slow and patchy.10 Some airports installed it; many didn’t. In most airspace, planes weren’t tracked at all. Pilots submitted their flight plans in advance, which in theory should have ensured that no two planes would be in the same place at the same time. But avoiding collisions ultimately came down to a four-word protocol: “See and be seen.”11
On June 30, 1956, two passenger flights departed Los Angeles airport, three minutes apart: one was bound for Kansas City, one for Chicago. Their planned flight paths intersected above the Grand Canyon, but at different heights. When thunderclouds developed, one plane’s captain radioed to ask permission to fly above the storm. The air traffic controller cleared him to go to “a thousand on top”—a thousand feet above cloud cover. See and be seen.
Nobody knows for sure what happened: planes then had no black boxes, and there were no survivors. Just before 10:31, air traffic control heard a garbled radio transmission: “Pull up!” “We are going in . . .” From the pattern of the wreckage, strewn for miles across the canyon floor, the planes seem to have approached each other at a 25-degree angle, presumably through a cloud.12 Investigators speculated that both pilots were distracted trying to find gaps in the clouds, so passengers could enjoy the scenery. One hundred twenty-eight people died.
Accidents happen. The question is what risks we’re willing to run for the economic benefits. That question’s becoming pertinent again with respect to crowded skies: many people have high hopes for unmanned aerial vehicles, or drones. They’re already being used for everything from moviemaking to crop-spraying; companies like Amazon expect the skies of our cities soon to be buzzing with grocery deliveries. Civil aviation authorities are grappling with what to approve. Drones have “sense and avoid” technology, and it’s pretty good; but is it good enough?13
The crash over the Grand Canyon certainly concentrated minds.14 If technology existed to prevent such catastrophes, shouldn’t we make more of an effort to use it? Within two years, what’s now known as the Federal Aviation Administration was born in the United States.15 And American skies today are about twenty times busier than they were in 1956.16 The world’s biggest airports now see planes taking off and landing at an average of nearly twice a minute.17 Collisions today are remarkably rare, no matter how cloudy the conditions. That’s due to many factors, but it’s largely thanks to radar.