‘Nothing we design or make ever really works … Everything we design and make is an improvisation, a lash-up, something inept and provisional.’
– David Pye
‘The end of surprise would be the end of science. To this extent, the scientist must constantly seek and hope for surprises.’
– Robert Friedel
In 1931, the British Air Ministry sent out a demanding new specification for a fighter aircraft. It was a remarkable document for two reasons. The first was that throughout its existence the Royal Air Force had been dismissive of fighters. The conventional wisdom was that bombers could not be stopped. Instead, foreshadowing the nuclear doctrine of mutually assured destruction, the correct use of air power was widely presumed to be to build the largest possible fleet of bombers and strike any enemy with overwhelming force. The second reason was that the specification’s demands seemed almost impossible to meet. Rather than rely on known technology, the bureaucrats wanted aviation engineers to abandon their orthodoxies and produce something completely new.
The immediate response was disappointing: three designs were selected for prototyping, and none of them proved to be much use. The Air Ministry briefly went so far as to consider ordering aircraft from Poland.
Even more remarkable than the initial specification was the response of the ministry to this awkward failure. One of the competing firms, Supermarine, had delivered its prototype late and well below specification. But when Supermarine approached the ministry with a radical new design, an enterprising civil servant by the name of Air Commodore Henry Cave-Brown-Cave decided to bypass the regular commissioning process and order the new plane as ‘a most interesting experiment’. The plane was the Supermarine Spitfire.
It’s not hard to make the case that the Spitfire was one of the most significant new technologies in history. A brilliant, manoeuvrable and super-fast fighter, the Spitfire – and its pin-up pilots, brave to the point of insouciance – became the symbol of British resistance to the bombers of the Nazi air force, the Luftwaffe. The plane, with its distinctive elliptical wings, was a miraculous piece of engineering.
‘She really was a perfect flying machine,’ said one pilot. A Californian who travelled to Britain to sign up for the Royal Air Force agreed: ‘I often marvelled at how this plane could be so easy and civilized to fly and yet how it could be such an effective fighter.’
‘I have no words capable of describing the Spitfire,’ testified a third pilot. ‘It was an aircraft quite out of this world.’
It wasn’t just the Spitfire pilots who rated the plane. The top German ace, Adolf Galland, was asked by Hermann Göring, head of the Luftwaffe, what he required in order to break down the stubborn British resistance. ‘I should like an outfit of Spitfires,’ was the terse reply. Another German ace complained, ‘The bastards can make such infernally tight turns. There seems to be no way of nailing them.’
Thanks to the Spitfire, Britain’s tiny Royal Air Force defied overwhelming odds to fight off the Luftwaffe’s onslaught in the Battle of Britain. It was a dismal mismatch: Hitler had been single-mindedly building up his forces in the 1930s, while British defence spending was at historical lows. The Luftwaffe entered the Battle of Britain with 2,600 operational planes, but the RAF boasted fewer than 300 Spitfires and 500 Hurricane fighters. The wartime Prime Minister himself, Winston Churchill, predicted that the Luftwaffe’s first week of intensive bombing would kill 40,000 Londoners. But thanks in large part to the Spitfire’s speed and agility, the Germans were unable to neutralise the RAF.
This meant the Germans were unable to launch an invasion that could quickly have overwhelmed the British Isles. Such an invasion would have made D-Day impossible, denying the United States its platform to liberate France. It would likely have cost the lives of 430,000 British Jews. It might even have given Germany the lead in the race for the atomic bomb, as many of the scientists who moved to the United States to work on the Manhattan Project were living in Britain when the Spitfires turned back the Luftwaffe. Winston Churchill was right to say of the pilots who flew the Spitfires and the Hurricanes, ‘Never in the field of human conflict has so much been owed by so many to so few.’
It is only a small exaggeration to say that the Spitfire was the plane that saved the free world. The prototype cost the government roughly the price of a nice house in London: £10,000.
When we invest money now in the hope of payoffs later, we think in terms of a return on our investment – a few per cent in a savings account, perhaps, or a higher but riskier reward from the stock market. What was the return on Henry Cave-Brown-Cave’s investment of £10,000? Four hundred and thirty thousand people saved from the gas chambers, and Adolf Hitler denied the atomic bomb. The most calculating economist would hesitate to put a price on that.
Return on investment is simply not a useful way of thinking about new ideas and new technologies. It is impossible to estimate a percentage return on blue-sky research, and it is delusional even to try. Most new technologies fail completely. Most original ideas turn out either to be not original after all, or original for the very good reason that they are useless. And when an original idea does work, the returns can be too high to be sensibly measured.
The Spitfire is one of countless examples of these unlikely ideas, which range from the sublime (the mathematician and gambler Gerolamo Cardano first explored the idea of ‘imaginary numbers’ in 1545; these apparently useless curiosities later turned out to be essential for developing radio, television and computing) to the ridiculous (in 1928, Alexander Fleming didn’t keep his laboratory clean, and ended up discovering the world’s first antibiotic in a contaminated Petri dish).
We might be tempted to think of such projects as lottery tickets, because they pay off rarely and spectacularly. They’re rather better than that, in fact. Lotteries are a zero-sum game – all they do is redistribute existing resources, whereas research and development can make everyone better off. And unlike lottery tickets, bold innovation projects do not have a known payoff and a fixed probability of victory. Nassim Taleb, author of The Black Swan, calls such projects ‘positive black swans’.
Whatever we call them, such ventures present us with a headache. They are vital, because the payoff can be so enormous. But they are also frustrating and unpredictable. Usually they do not pay off at all. We cannot ignore them, and yet we cannot seem to manage them effectively either.
It would be reassuring to think of new technology as something we can plan. And sometimes, it’s true, we can: the Manhattan Project did successfully build the atomic bomb; John F. Kennedy promised to put a man on the Moon inside a decade, and his promise was kept. But these examples are memorable in part because they are unusual. It is comforting to hear a research scientist, corporation or government technocrat tell us that our energy problems will soon be solved by some specific new technology: a new generation of hydrogen-powered cars, maybe, or biofuels from algae, or cheap solar panels made from new plastics. But the idea that we can actually predict which technologies will flourish flies in the face of all the evidence. The truth is far messier and more difficult to manage.
That is why the story of how the Spitfire was developed against the odds offers a lesson for those of us who hope technology will solve the problems of today. It was developed in an atmosphere of almost total uncertainty about what the future of flying might be. In the previous war with Germany, which ran from 1914 to 1918, aeroplanes were a brand-new technology and were used mainly for scouting missions. Nobody really knew how they could most effectively be used as they matured. In the mid-1920s, it was widely believed that no aeroplane could exceed 260 miles per hour, but the Spitfire dived at over 450 mph. So it is hardly surprising that British air doctrine failed for such a long time to appreciate the potential importance of fighter planes. The idea of building fighters that could intercept bombers seemed a fantasy to most planners.
The Spitfire seemed especially fantastical as it fired directly forward, meaning that in order to aim at a target, the entire plane needed to change course. A design that struck many as much more plausible was a twin-seater plane with a gunner in a turret. Here are the words of one thoughtful and influential observer in 1938, one year before Germany and Britain went to war:
We should now build, as quickly and in as large numbers as we can, heavily armed aeroplanes designed with turrets for fighting on the beam and in parallel courses … the Germans know we have banked upon the forward-shooting plunging ‘Spitfire’ whose attack … if not instantly effective, exposes the pursuer to destruction.
The name of this Spitfire sceptic was the future Prime Minister, Winston Churchill. The plane he demanded was built all right, but few British schoolboys thrill to the legend of the Boulton-Paul Defiant. No wonder: the Defiant was a sitting duck.
It is easy to say with hindsight that official doctrine was completely wrong. But it would also be easy to draw the wrong lesson from that. Could ministers and air marshals really have predicted the evolution of aerial combat? Surely not. The lesson of the Spitfire is not that the Air Ministry nearly lost the war with their misconceived strategy. It is that, given that misconceptions in their strategy were all but inevitable, they somehow managed to commission the Spitfire anyway.
The lesson is variation, achieved through a pluralistic approach to encouraging innovation. Instead of putting all their eggs in what looked like the most promising basket – the long-range bomber – the Air Ministry had enough leeway in its procedures that individuals like Air Commodore Cave-Brown-Cave could fund safe havens for ‘most interesting’ approaches that seemed less promising, just in case – even approaches, like the Spitfire, that were often regarded with derision or despair.
In September 1835, Charles Darwin was rowed ashore from The Beagle and stepped into the breakers of the Galapagos Islands. He soon discovered some remarkable examples of how safe havens provide space for new things to develop – examples that would later lead him towards his theory of evolution through natural selection. Darwin, a meticulous observer of the natural world, noted the different species of finch that inhabited the islands. Not a single one was found anywhere outside the Galapagos archipelago, which lies in the Pacific Ocean 600 miles west of Ecuador in South America. Even more intriguingly, each island boasted a different selection of finches, all of similar size and colour but with very different beaks – some with thin, probing bills to grab insects, others with large powerful bills to crack seeds, still others adapted to eat fruit. The famous giant tortoises, too, had different species for different islands, some with a high-lipped shell to allow browsing on cactuses, those on the larger, grassier islands with a more conventional high-domed shell. This caught Darwin so unawares that he mixed up his specimens and had to ask the island’s vice-governor to unscramble them; Galapagos tortoises are like no other tortoise on earth, so it took Darwin a long time to figure out that there were several distinct species. When Darwin turned his attention to Galapagan plants he discovered the same story yet again. Each island had its own ecosystem.
The Galapagos Islands were the birthplace of so many species because they were so isolated from the mainland and, to a lesser degree, from each other. ‘Speciation’ – the divergence of one species into two separate populations – rarely happens without some form of physical isolation, otherwise the two diverging species will interbreed at an early stage, and converge again.
Innovations, too, often need a kind of isolation to realise their potential. It’s not that isolation is conducive to having ideas in the first place: gene mutations are no more likely to happen in the Galapagos than anywhere else, and as many people have observed, bright ideas emerge from the swirling mix of other ideas, not from isolated minds. Jane Jacobs, the great observer of urban life, looked for innovation in cities, not on Pacific islands. But once a new idea has appeared, it needs the breathing space to mature and develop so that it is not absorbed and crushed by the conventional wisdom.
This idea of allowing several ideas to develop in parallel runs counter to our instincts: we naturally tend to ask, ‘What is the best option?’, and concentrate on that. But given that life is so unpredictable, what seemed initially like an inferior option may turn out to be exactly what we need. It’s sensible in many areas of life to leave room for exploring parallel possibilities – if you want to make friends, join several social clubs, not just the one that appears most promising – but it is particularly true in the area of innovation, where a single good idea or new technology can be so valuable. In an uncertain world, we need more than just Plan A; and that means finding safe havens for Plans B, C, D and beyond.
The Spitfire was a long way down the alphabet from Plan A, not least because the Galapagan isle from which it emerged was populated by some highly unlikely characters. There was Noel Pemberton Billing, a playboy politician most famous as a campaigner against lesbianism. Billing successfully provoked a sensational libel trial in 1918 by accusing the exotic dancer Maud Allan of spreading this ‘Cult of the Clitoris’, and then used the trial to publicise his rather unconventional view that almost 50,000 ‘perverts’ had successfully been blackmailed by German spies into undermining the British war effort.
When not whipping the media into a frenzy about seditious sapphists, Billing was running Supermarine, a ragtag and notoriously disorganised aeronautical engineering company which in 1917 had employed a second unlikely character: a shy but bloody-minded and quite brilliant young engineer by the name of Reginald Mitchell. On his first job, the foreman complained that Mitchell had served him a cup of tea that ‘tastes like piss’. For the next brew, Mitchell steeped the tea leaves in his own boiling urine. ‘Bloody good cup of tea, Mitchell,’ was the response.
No surprise, then, that Mitchell reacted furiously when the large defence engineering company Vickers bought Supermarine, and tried to place him under the supervision of the great designer Barnes Wallis – who later became famous as the creator of the bouncing bomb used by the Dambusters. ‘It’s either him or me!’ Mitchell fumed. Whether by good judgement or good fortune, the board of Vickers Aviation decided Barnes Wallis should be moved elsewhere, and Mitchell’s team continued to enjoy Galapagan isolation from the committees of Vickers.
Then there was the most unexpected escape of all. In 1929 and 1930, Mitchell’s planes – the direct ancestors of the Spitfire – held the world record for speed, winning the Schneider Trophy set up to test competing designs. But the government, which was providing much of the funding for these record attempts, decided that they were frivolous in a time of austerity. Sir Hugh Trenchard, Marshal of the Royal Air Force at the time, called high-speed planes ‘freak machines’. Without the development money for the latest world record attempt – and with Henry Cave-Brown-Cave not yet on the scene to pay for an ‘experiment’ – Supermarine was set to abandon the project.
Rescue came from the most unlikely character: Dame Fanny Houston, born in humble circumstances, had become the richest woman in the country after marrying a shipping millionaire and inheriting his fortune. Lady Houston’s eclectic philanthropy knew few bounds: she supported oppressed Christians in Russia, coalminers and the women’s rights movement. And in 1931 she wrote a cheque to Supermarine that covered the entire development costs of the Spitfire’s predecessor, the S6. Lady Houston was furious at the government’s lack of support: ‘My blood boiled in indignation, for I know that every true Briton would rather sell his last shirt than admit that England could not afford to defend herself against all-comers.’ The S6 flew at an astonishing speed of 407.5 mph less than three decades after the Wright Brothers launched the Wright Flyer. England’s pride was intact, and so was the Spitfire project. No wonder the historian A.J.P. Taylor later remarked that ‘the Battle of Britain was won by Chamberlain, or perhaps by Lady Houston.’
The lone furrow ploughed by Mitchell pre-dated by over a decade the establishment of the celebrated ‘Skunk Works’ division of Lockheed. The Skunk Works designed the U-2, the high-altitude spy plane which produced photographs of nuclear missile installations in Cuba; the Blackbird, the fastest plane in the world for the past thirty-five years; and radar-invisible stealth bombers and fighters. The value of the ‘skunk works’ model – a small, unconventional team of engineers and innovators in a big corporation, deliberately shielded from a nervous corporate hierarchy – has since become more widely appreciated. Mitchell’s team, like the Skunk Works, was closely connected with the latest thinking on aeronautical engineering: Mitchell tested his designs against the world’s best each year in the Schneider Trophy races. But the team was isolated from bureaucratic interference. In a world where the government was the only likely customer, this was no small feat.
Protecting innovators from bureaucrats won’t guarantee results. On the contrary, we can confidently expect that most of the technological creations that stumble out of these Galapagan islands of innovation will prove singularly ill-equipped to thrive in the wider world. But if the occasional Spitfire also results, the failures will be worth it.
If such amazing results can emerge when new ideas are protected and nurtured, one might think that there is no problem encouraging innovation in the modern world. There have never been more universities, more PhDs, or more patents. Look at the world’s leading companies and consider how many of them – Google, Intel, Pfizer – make products that would either fit into a matchbox, or have no physical form at all. Each of these large islands of innovation is surrounded by an archipelago of smaller high-tech start-ups, all with credible hopes of overturning the established order – just as a tiny start-up called Microsoft humbled the mighty IBM, and a generation later Google and Facebook repeated the trick by outflanking Microsoft itself.
This optimistic view is true as far as it goes. Where it’s easy for the market to experiment with a wide range of possibilities, as in computing, we do indeed see change at an incredible pace. The sheer power and interconnectedness of modern technology means that anyone can get hold of enough computing power to produce great new software. Thanks to outsourcing, even the hardware business is becoming easy to enter. Three-dimensional printers, cheap robots and ubiquitous design software mean that other areas of innovation are opening up, too. Yesterday it was customised T-shirts. Today, even the design of niche cars is being ‘crowd-sourced’ by companies such as Local Motors, which also outsource production. Tomorrow, who knows? In such fields, an open game with lots of new players keeps the innovation scoreboard ticking over. Most ideas fail, but there are so many ideas that it doesn’t matter: the internet and social media expert Clay Shirky celebrates ‘failure for free’.
Here’s the problem, though: failure for free is still all too rare. These innovative fields are still the exception, not the rule. Because open-source software and iPad apps are a highly visible source of innovation, and because they can be whipped up in student dorms, we tend to assume that anything that needs innovating can be whipped up in a student dorm. It can’t. Cures for cancer, dementia and heart disease remain elusive. In 1984, HIV was identified, and the US health secretary Margaret Heckler announced that a vaccine preventing AIDS would be available within a couple of years. It’s a quarter of a century late. And what about a really effective source of clean energy – nuclear fusion, or solar panels so cheap you could use them as wallpaper?
What these missing-in-action innovations have in common is that they are large and very expensive to develop. They call for an apparently impossible combination of massive resources with an array of wildly experimental innovative gambles. It is easy to talk about ‘skunk works’, or creating safe havens for fledgling technologies, but when tens of billions of dollars are required, highly speculative concepts look less appealing. We have not thought seriously enough about how to combine the funding of costly, complex projects with the pluralism that has served us so well with the simpler, cheaper start-ups of Silicon Valley.
When innovation requires vast funding and years or decades of effort, we can’t wait for universities and government research laboratories to be overtaken by dorm-room innovators, because it may never happen.
If the underlying innovative process were somehow becoming cheaper, simpler and faster, all this might not matter. But the student-startup successes of Google and Facebook are the exceptions, not the rule. Benjamin F. Jones, an economist at the Kellogg School of Management, has looked beyond the eye-catching denizens of Silicon Valley, painstakingly interrogating a database of 3 million patents and 20 million academic papers.
What he discovered makes him deeply concerned about what he calls ‘the burden of knowledge’. The size of the inventor teams listed on patents has been increasing steadily since Jones’s records began in 1975. The age at which inventors first produce a patent has also been rising. Specialisation seems sharper, since lone inventors are now less likely to produce multiple patents in different technical fields. This need to specialise may be unavoidable, but it is worrying, because past breakthroughs have often depended on the inventor’s sheer breadth of interest, which allowed concepts from different fields to bump together in one creative mind. Now such cross-fertilisation requires a whole team of people – a more expensive and complex organisational problem. ‘Deeper’ fields of knowledge, whose patents cite many other patents, need bigger teams. Compare a typical modern patent with one from the 1970s and you’ll find a larger team filled with older and more specialised researchers. The whole process has become harder, and more expensive to support in parallel, on separate islands of innovation.
In academia, too, Jones found that teams are starting to dominate across the board. Solo researchers used to produce the most highly cited research, but now that distinction, too, belongs to teams of researchers. And researchers spend longer acquiring their doctorates, the basic building blocks of knowledge they need to start generating new research. Jones argues that scientific careers are getting squashed both horizontally and vertically by the sheer volume of knowledge that must be mastered. Scientists must narrow their field of expertise, and even then must cope with an ever shorter productive life between the moment they’ve learned enough to get started, and the time their energy and creativity starts to fade.
This is already becoming true even in some areas of that hotbed of dorm-room innovation, software. Consider the computer game. In 1984, when gamers were still enjoying Pac-Man and Space Invaders, the greatest computer game in history was published. Elite offered space combat depicted in three dimensions, realistic trade, and a gigantic universe to explore, despite taking up no more memory than a small Microsoft Word document. Like so many later successes of the dot-com era, this revolutionary game was created by two students during their summer vacation.
Twenty-five years later, the game industry was awaiting another gaming blockbuster, Duke Nukem Forever. The sequel to a runaway hit, Duke Nukem Forever was a game on an entirely different scale. At one stage, thirty-five developers were working on the project, which took twelve years and cost $20 million. In May 2009, the project was shut down, incomplete. (As this book was going to press, there were rumours of yet another revival.)
While Duke Nukem Forever was exceptional, modern games projects are far larger, more expensive, more complex and more difficult to manage than they were even ten years ago. Gamers have been eagerly awaiting Elite 4 since rumours of its development surfaced in 2001. They are still waiting.
Outside computing, this trend is even starker. The £10,000 that the Spitfire prototype cost is the equivalent of less than a million dollars today, and the plane took seven years to enter service. The US Air Force’s F-22 stealth fighter, made by the real Skunk Works of Lockheed, was an equally revolutionary aircraft in a different technological era. It required government development funds of $1,400 million in today’s terms, plus matching funds from Lockheed Martin and Boeing, just to produce the prototype. The plane took a quarter of a century to enter service.
The proliferation of iPhone and Android apps has hidden the uncomfortable truth, which is that innovation has become slower, harder and costlier, and in most areas we have fallen far short of the hopes of our predecessors. Flip through a report penned in 1967 by the influential futurist Herman Kahn, and you will discover that by the year 2000 we were expected to be flying around on personal platforms, curing hangovers with impunity and enjoying electricity that was too cheap to meter, beamed down from artificial moons. Kahn was no idle fantasist. He was accurate in his ideas about progress in communications and computing. He predicted handheld communicators, colour photocopying and the digitisation of financial transactions, and he was right. But this is exactly the sector of the economy where pluralism is alive and well.
Another sector of the economy that must have seemed set for never-ending improvement at the time Kahn was writing is long-haul air travel. Who would have expected in the late 1960s, when the Boeing 747 was designed, that the same plane would still dominate the industry over forty years later? If we had asked business travellers of the 1960s to predict what their counterparts in the 2000s would vote as ‘the travel innovation of the decade’, they would surely have thought of jet packs or flying cars. Half a century later, the real winner of the vote was ‘online check-in’.
Cars have comfier interiors, better safety systems, and louder sound systems, but fundamentally they are not much more efficient than in 1970. Nuclear fusion is three decades away, as it has been for three decades; China instead depends on the less than revolutionary technology of coal-fired electricity plants, while clean energy from the sun or the wind is expensive and sporadic. As for the pharmaceutical industry, the number of highly successful ‘blockbuster’ drugs has stopped rising over the past decade and fell for the first time ever in 2007; the number of new drugs approved each year in the US has also fallen sharply.
Over the past few decades, the number of people employed in research and development in the world’s leading economies has been rising dramatically, but productivity growth has been flat. Yes, there are more patents filed – but the number of patents produced per researcher, or per research dollar, has been falling. We may have booming universities and armies of knowledge workers, but when it comes to producing new ideas, we are running to stand still.
This is particularly worrying because we are hoping that new technology will solve so many of our problems. Consider climate change: Bjorn Lomborg, famous as ‘the sceptical environmentalist’ who thinks we worry too much about climate change and not enough about clean water or malaria, argues that we should be spending fifty times more on research and development into clean energy and geoengineering. If that’s the demand from someone who thinks climate change is over-hyped, we are entering a world in which we expect much, much more from new technology.
The obvious place to turn for solutions is to the market, where countless companies compete to bring new ideas into profitable shape, from start-ups to giant innovation factories such as Intel, General Electric and GlaxoSmithKline. As we’ve seen, the market is tremendously innovative – as long as the basic setting is fierce competition to develop super-cheap ideas, such as new software.
But when it comes to the more substantial, expensive innovations – the kind of innovations which are becoming ever more important – the market tends to rely on a long-established piece of government support: the patent. And it’s far from clear that patents will encourage the innovations we really need.
The basic concept is sound: patents entice inventors by awarding them a monopoly on the use of their idea, in the hope that the cost of this monopoly is offset by the benefits of encouraging innovation in the first place. Whether patents actually get this balance right is an open question. They have been discredited by the appearance of absurdities such as US patent 6,004,596, for a ‘sealed crustless sandwich’, or patent 6,368,227, ‘a method of swinging on a swing’, which was granted to a five-year-old boy from Minnesota. These frivolous patents do little harm in their own right, but they exemplify a system where patents are awarded for ideas that are either not novel, or require little or no research effort.
Consider IBM’s patent for a ‘smooth-finish auction’, where the auction is halted at an unpredictable moment – unlike an eBay auction, which is vulnerable to opportunistic last-second bids. The patent office’s decision to grant the patent is puzzling, because the idea is not new. In fact, it is very old indeed: the auction expert Paul Klemperer points out that Samuel Pepys, London’s most famous diarist, recorded the use of such auctions in the seventeenth century. (A pin was pushed into a melting candle. When it dropped, the auction ended.) Such mistakes happen, but there is no simple way to correct them: to do so requires going into direct competition with IBM, hiring an army of lawyers, and taking your chances. A cheaper way of fixing errors is essential.
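The logic of a smooth-finish auction is easy to check with a toy simulation. The sketch below uses invented bidders and timings, not IBM’s actual mechanism or Pepys’s procedure: one bidder bids early, a ‘sniper’ waits until the last advertised second. With a fixed deadline the sniper always wins; with an unpredictable close, the late bid almost always arrives after the candle’s pin has dropped.

```python
import random

def run_auction(close_time, bids):
    """Return the winning (bidder, amount) among bids placed before close_time.

    `bids` is a list of (bidder, time, amount) tuples; the highest valid bid wins.
    """
    valid = [(amount, bidder) for bidder, time, amount in bids if time <= close_time]
    if not valid:
        return None
    amount, bidder = max(valid)
    return bidder, amount

def simulate(trials=10_000, seed=0):
    rng = random.Random(seed)
    sniper_wins_fixed = sniper_wins_random = 0
    for _ in range(trials):
        # A steady bidder offers 100 early; a sniper plans to offer 101 at t=0.999,
        # just before the advertised close at t=1.0.
        bids = [("steady", 0.1, 100), ("sniper", 0.999, 101)]
        # Fixed deadline: the last-second bid always lands.
        if run_auction(1.0, bids)[0] == "sniper":
            sniper_wins_fixed += 1
        # Smooth finish: the auction halts at an unpredictable moment.
        if run_auction(rng.uniform(0.5, 1.0), bids)[0] == "sniper":
            sniper_wins_random += 1
    print(f"Sniper win rate with a fixed deadline: {sniper_wins_fixed / trials:.0%}")
    print(f"Sniper win rate with a random close:   {sniper_wins_random / trials:.0%}")

if __name__ == "__main__":
    simulate()
```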
Or take the idea of using a smart phone to scan bar codes in stores, and immediately reading reviews and checking whether the product is available more cheaply nearby. The concept of the scanner-phone popped into the head of a young Canadian economist called Alex Tabarrok while he was taking a shower one morning, at the height of the dot-com boom. Alas for Tabarrok, it had popped into other people’s heads too, and he soon discovered that patent 6,134,548 had been awarded for the same proposal just a few months earlier. That might seem like a misfortune for Tabarrok alone, but in fact we all suffer: a patent given out as a reward for random moments of inspiration delivers all the costs of intellectual monopoly without any of the benefits.
Worse, patents also fail to encourage some of the really important innovations. Too strong in the case of the scanner phone and the smooth-finish auction, they are too weak to inspire a vaccine for HIV, or important breakthroughs in clean energy. Part of the problem is the timescale: many important patents in, say, solar power, are likely to have expired by the time solar energy becomes competitive with fossil fuels, which have been accumulating a head start since the industrial revolution began.
A second, ironic, problem is that companies fear that if they produce a truly vital technology, governments will lean on them to relinquish their patent rights or slash prices. This was the fate of Bayer, the manufacturer of the anthrax treatment Cipro, when an unknown terrorist began mailing anthrax spores in late 2001, killing five people. Four years later, as anxiety grew about an epidemic of bird flu in humans, the owner of the patent on Tamiflu, Roche, agreed to license production of the drug after very similar pressure from governments across the world. It is quite obvious why governments have scant respect for patents in true emergencies. Still, if everybody knows that governments will ignore patents when innovations are most vital, it is not clear why anyone expects the patent system to encourage vital innovations.
The cheese sandwich patent problem could be fixed with some simple administrative improvements, but questions remain about whether any reform to the patent system can encourage companies to focus on truly large-scale, long-term projects. The innovation slowdown is likely to continue.
If patents alone can’t encourage the market to unleash the scale of innovations we need, the obvious alternative is governments. Governments, after all, are supposed to have long time horizons and an interest in solving our collective problems. But so far, government grants have failed to deliver their full potential. A clue as to why emerges from one of the most remarkable lives of the twentieth century.
Mario Capecchi’s earliest memory is of German officers knocking on the door of his mother’s chalet in the Italian Alps, and arresting her. They sent her to a concentration camp, probably Dachau. Mario, who had been taught to speak both Italian and German, understood exactly what was being said by the SS officers. He was three and a half.
Mario’s mother Lucy was a poet and an antifascist campaigner who had refused to marry his abusive father, Luciano, an officer in Mussolini’s air force. One can only imagine the scandal in prewar, Catholic, fascist Italy. Expecting trouble, Lucy had made preparations by selling many of her possessions and entrusting the proceeds to a local peasant family. When she disappeared, the family took Mario in. For a time he lived like an Italian farmer’s son, learning rural life at an apron hem.
After a year, his mother’s money appears to have run out. Mario left the village. He remembers a brief time living with his father, and deciding he would rather live on the streets: ‘Amidst all of the horrors of war, perhaps the most difficult for me to accept as a child was having a father who was brutal to me.’ Luciano was killed shortly afterwards in aerial combat.
And so Mario Capecchi became a street urchin at the age of four and a half. Most of us are content if, at the age of four and a half, our children are capable of eating lunch without spilling it or confident enough to be dropped off at nursery without tears. Mario survived on scraps, joined gangs, and drifted in and out of orphanages. At the age of eight he spent a year in hospital, probably suffering from typhoid, passing in and out of feverish oblivion each day. Conditions were grim: no blankets, no sheets, beds jammed together, nothing to eat but a crust of bread and some chicory coffee. Many Italian orphans died in such hospitals.
Mario survived. On his ninth birthday, a strange-looking woman arrived at the hospital asking to see him. It was his mother, unrecognisable after five years in a concentration camp. She had spent the last eighteen months searching for him. She bought him a suit of traditional Tyrolean clothes – he still has the cap and its decorative feather – and took him with her to America.
Two decades later Mario was at Harvard University, determined to study molecular biology under the great James Watson, co-discoverer of DNA. Not a man to hand out compliments easily, Watson once said Capecchi ‘accomplished more as a graduate student than most scientists accomplish in a lifetime’. He had also advised the young Capecchi that he would be ‘fucking crazy’ to pursue his studies anywhere other than in the cutting-edge intellectual atmosphere of Harvard.
Still, after a few years, Capecchi had decided that Harvard was not for him. Despite great resources, inspiring colleagues and a supportive mentor in Watson, he found the Harvard environment demanded results in too much of a hurry. That was fine, if you wanted to take predictable steps along well-signposted pathways. But Capecchi felt that if you wanted to do great work, to change the world, you had to give yourself space to breathe. Harvard, he thought, had become ‘a bastion of short-term gratification’. Off he went instead to the University of Utah, where a brand-new department was being set up. He had spotted, in Utah, a Galapagan island on which to develop his ideas.
In 1980, Mario Capecchi applied for a grant from the US National Institutes of Health, which use government money to fund potentially life-saving research. The sums are huge: the NIH are twenty times bigger than the American Cancer Society. Capecchi described three separate projects. Two of them were solid stuff with a clear track record and a step-by-step account of the project deliverables. Success was almost assured.
The third project was wildly speculative. Capecchi was trying to show that it was possible to make a specific, targeted change to a gene in a mouse’s DNA. It is hard to overstate how ambitious this was, especially back in 1980: a mouse’s DNA contains as much information as seventy or eighty large encyclopedia volumes. Capecchi wanted to perform the equivalent of finding and changing a single sentence in one of those volumes – but using a procedure performed on a molecular scale. His idea was to produce a sort of doppelganger gene, one similar to the one he wanted to change. He would inject the doppelganger into a mouse’s cell, and somehow get the gene to find its partner, kick it out of the DNA strand and replace it. Success was not only uncertain but highly improbable.
The NIH decided that Capecchi’s plans sounded like science fiction. They downgraded his application and strongly advised him to drop the speculative third project. However, they did agree to fund his application on the basis of the other two solid, results-oriented projects. (Things could have been worse: at about the same time, over in the UK, the Medical Research Council flatly rejected an application from Martin Evans to attempt a similar trick. Two research agencies are better than one, however messy that might seem, precisely because they will fund a greater variety of projects.)
What did Capecchi do? He took the NIH’s money, and ignoring their admonitions he poured almost all of it into his risky gene-targeting project. It was, he recalls, a big gamble. If he hadn’t been able to show strong enough initial results in the three to five-year timescale demanded by the NIH, they would have cut off his funding. Without their seal of approval, he might have found it hard to get funding from elsewhere. His career would have been severely set back, his research assistants looking for other work. His laboratory might not have survived.
In 2007, Mario Capecchi was awarded the Nobel Prize for Medicine for this work on mouse genes. As the NIH’s expert panel had earlier admitted, when agreeing to renew his funding: ‘We are glad you didn’t follow our advice.’
The moral of Capecchi’s story is not that we should admire stubborn geniuses, although we should. It is that we shouldn’t require stubbornness as a quality in our geniuses. How many vital scientific or technological advances have foundered, not because their developers lacked insight, but because they simply didn’t have Mario Capecchi’s extraordinarily defiant character?
But before lambasting the NIH for their lack of imagination, suppose for a moment that you and I sat down with a blank sheet of paper and tried to design a system for doling out huge amounts of public money – taxpayers’ money – to scientific researchers. That’s quite a responsibility. We would want to see a clear project description, of course. We’d want some expert opinion to check that each project was scientifically sound, that it wasn’t a wild goose chase. We’d want to know that either the applicant or another respected researcher had taken the first steps along this particular investigative journey and obtained some preliminary results. And we would want to check in on progress every few years.
We would have just designed the sensible, rational system that tried to stop Mario Capecchi working on mouse genes.
The NIH’s expert-led, results-based, rational evaluation of projects is a sensible way to produce a steady stream of high-quality, can’t-go-wrong scientific research. But it is exactly the wrong way to fund lottery-ticket projects that offer a small probability of a revolutionary breakthrough. It is a funding system designed to avoid risks – one that puts more emphasis on forestalling failure than achieving success. Such an attitude to funding is understandable in any organisation, especially one funded by taxpayers. But it takes too few risks. It isn’t right to expect a Mario Capecchi to risk his career on a life-saving idea because the rest of us don’t want to take a chance.
Fortunately, the NIH model isn’t the only approach to funding medical research. The Howard Hughes Medical Institute, a large charitable medical research organisation set up by the eccentric billionaire, has an ‘investigator’ programme which explicitly urges ‘researchers to take risks, to explore unproven avenues, to embrace the unknown – even if it means uncertainty or the chance of failure’. Indeed, one of the main difficulties in attracting HHMI funding is convincing the institute that the research is sufficiently uncertain.
The HHMI also backs people rather than specific projects, figuring that this allows scientists the flexibility to adapt as new information becomes available, and pursue whatever avenues of research open up, without having to justify themselves to a panel of experts. (General H.R. McMaster would surely recognise the need to adapt to changing conditions on the ground.) It does not demand a detailed research project – it prefers to see the sketch of the idea, alongside an example of the applicant’s best recent research. Investigators are sometimes astonished that the funding appears to be handed out with so few strings attached.
The HHMI does ask for results, eventually, but allows much more flexibility about what ‘results’ actually are – after all, there was no specific project in the first place. If the HHMI sees convincing signs of effort, funding is automatically reviewed for another five years; it is only after ten years without results that HHMI funding is withdrawn – and even then, gradually rather than abruptly, allowing researchers to seek out alternatives rather than sacking their staff or closing down their laboratories.
This sounds like a great approach when Mario Capecchi is at the forefront of our minds. But is the HHMI system really superior? Maybe it leads to too many costly failures. Maybe it allows researchers to relax too much, safe in the knowledge that funding is all but assured.
Maybe. But three economists, Pierre Azoulay, Gustavo Manso and Joshua Graff Zivin, have picked apart the data from the NIH and HHMI programmes to provide a rigorous evaluation of how much important science emerges from the two contrasting approaches. They carefully matched HHMI investigators with the very best NIH-funded scientists: those who had received rare scholarships, and those who had received NIH ‘MERIT’ awards, which, like other NIH grants, fund specific projects, but which are more generous and are aimed only at the most outstanding researchers. They also used a statistical technique to select high-calibre NIH researchers with a near-identical track record to HHMI investigators.
Whichever way they sliced the data, Azoulay, Manso and Zivin found evidence that the more open-ended, risky HHMI grants were funding the most important, unusual and influential research. HHMI researchers, apparently no better qualified than their NIH-funded peers, were far more influential, producing twice as many highly-cited research articles. They were more likely to win awards, and more likely to train students who themselves won awards. They were also more original, producing research that introduced new ‘keywords’ into the lexicon of their research field, changing research topics more often, and attracting more citations from outside their narrow field of expertise.
The HHMI researchers also produced more failures; a higher proportion of their research papers were cited by nobody at all. No wonder: the NIH programme was designed to avoid failure, while the HHMI programme embraced it. And in the quest for truly original research, some failure is inevitable.
Here’s the thing about failure in innovation: it’s a price worth paying. We don’t expect every lottery ticket to pay a prize, but if we want any chance of winning that prize then we buy a ticket. In the statistical jargon, the pattern of innovative returns is heavily skewed to the upside; that means a lot of small failures and a few gigantic successes. The NIH’s more risk-averse approach misses out on many ideas that matter.
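A back-of-the-envelope simulation illustrates the skew. The failure rate and payoff below are invented purely for the sake of the example, not drawn from the NIH or HHMI data: even if ninety-five long shots in a hundred return nothing, the handful of gigantic successes can make the long-shot portfolio worth several times the portfolio of sure things.

```python
import random

def compare_portfolios(projects=100, trials=10_000, seed=1):
    """Compare two hypothetical research portfolios of equal size.

    'Safe' projects always return 1.2 times their cost.
    'Long shots' fail completely 95% of the time, but pay 100x otherwise:
    lots of small failures, a few gigantic successes.
    """
    rng = random.Random(seed)
    safe_value = projects * 1.2  # deterministic payoff
    long_shot_total = 0.0
    for _ in range(trials):
        long_shot_total += sum(100.0 for _ in range(projects) if rng.random() < 0.05)
    print(f"Value of {projects} safe projects:              {safe_value:6.1f}")
    print(f"Average value of {projects} long-shot projects: {long_shot_total / trials:6.1f}")

if __name__ == "__main__":
    compare_portfolios()
```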
It isn’t hard to see why a bureaucracy, entrusted with spending billions of taxpayer dollars, is more concerned with minimising losses than maximising gains. And the NIH approach does have its place. Recall the work by the Santa Fe complexity theorists Stuart Kauffman and John Holland, showing that the ideal way to discover paths through a shifting landscape of possibilities is to combine baby steps and speculative leaps. The NIH is funding the baby steps. Who is funding the speculative leaps? The Howard Hughes Medical Institute invests huge sums each year, but only about one twentieth of 1 per cent of the world’s R&D budget. There are a few organisations like the HHMI, but most R&D is either highly commercially focused research – the opposite of blue-sky thinking – or target-driven grants typified by the NIH. The baby steps are there; the experimental leaps are missing.
We need bureaucrats to model themselves on the chief of Britain’s air staff in the 1930s: ‘firms are reluctant to risk their money on highly speculative ventures of novel design. If we are to get serious attempts at novel types … we shall have to provide the incentive.’ That is the sort of attitude that produces new ideas that matter.
Unfortunately, such bureaucrats are rare. So far, we have discovered two vital principles for promoting new technology. First, create as many separate experiments as possible, even if they appear to embody contradictory views about what might work, on the principle that most will fail. Second, encourage some long-shot experiments, even though failure is likely, because the rewards to success are so great. The great weakness of most government-funded research is that both these goals are the antithesis of government planning. Bureaucracies like a grand plan, and they like to feel reassured that they know exactly how that plan is to be achieved. Exceptions, such as the Spitfire, are rare.
Traditional government funding has an important part to play in encouraging ideas that matter, especially if more money can be awarded following the failure-tolerant model of the Howard Hughes Medical Institute. The market also clearly plays a critical role in developing new ideas and bringing ideas out of government-funded labs and into practical products we enjoy in everyday life.
Yet the problem of encouraging expensive, world-changing innovations remains daunting. Government officials will always tend to avoid risks when spending large sums of public money, while the patent system will rarely inspire costly, long-term research efforts from private firms. Neither approach is likely to combine the two elements essential to encourage significant innovation in a complex world: a true openness to risky new ideas, and a willingness to put millions or even billions of dollars at risk. These two elements are fundamental to twenty-first-century innovation, yet they seem mutually incompatible. They are not. In fact the way to combine them has been around, if often forgotten, for more than three centuries.
The year 1675 marked the foundation of one of the first and most famous government agencies for research and development. The Royal Observatory was founded with the aim of improving navigation at sea, and in particular of solving the ‘longitude’ problem of figuring out how far east or west a ship at sea was. (The latitude problem was far more easily solved, by measuring the length of the day, or the elevation of the sun or stars.) For a great naval power such as Great Britain, with trade routes stretching across the world, the significance of a ship’s captain being unable to figure out his location could hardly be overstated. And the Royal Observatory today gladly associates itself with the sensational breakthrough that solved the conundrum. Its original site in Greenwich, south-east London, is bisected by what the Observatory still proudly describes as ‘the Prime Meridian of the World’ – Longitude 0° 0’ 0”.
There is an inconvenient tale behind this happy association, however. The Royal Observatory’s own astronomers failed hopelessly to solve the problem for almost a century, while ruthlessly undermining the man who did.
Dissatisfaction with the Royal Observatory’s performance had come to a head in 1707, with its experts still apparently clueless after more than three decades of research. One foggy night Admiral Sir Clowdisley Shovell, wrongly believing that his fleet was further west of the English mainland than it actually was, wrecked four ships on the Isles of Scilly. Sir Clowdisley’s miscalculation led to more deaths than the sinking of the Titanic. The British parliament turned to Sir Isaac Newton and the comet expert Edmond Halley for advice, and in 1714 passed the Longitude Act, promising a prize of £20,000 for a solution to the problem. Compared with the typical wage of the day, this was over £30 million in today’s terms.
The prize transformed the way that the problem of longitude was attacked. No longer were the astronomers of the Royal Observatory the sole official searchers – the answer could come from anyone. And it did. In 1737, a village carpenter named John Harrison stunned the scientific establishment when he presented his solution to the Board of Longitude: a clock capable of keeping superb time at sea despite the rolling and pitching of the ship and extreme changes in temperature and humidity. While it was well known that knowing the correct time back in London could enable a navigator to calculate longitude using the sun, the technical obstacles to producing a sufficiently accurate clock were widely thought to be beyond human ingenuity. Harrison, spurred on by a fabulous prize, proved everyone wrong.
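The arithmetic behind the clock method is straightforward: the Earth turns through 360 degrees of longitude every 24 hours, or 15 degrees per hour, so the gap between local solar noon and noon on a clock still keeping London time tells a navigator how far east or west the ship has sailed. A minimal sketch, using an invented clock reading:

```python
DEGREES_PER_HOUR = 360 / 24  # the Earth turns 15 degrees of longitude per hour

def longitude_from_clock(local_noon_on_london_clock: float) -> float:
    """Estimate longitude from the London time (in hours) at which the sun
    reaches its highest point locally. Positive = degrees west of London,
    negative = degrees east."""
    hours_behind_london = local_noon_on_london_clock - 12.0
    return hours_behind_london * DEGREES_PER_HOUR

# Hypothetical reading: the sun peaks overhead when the ship's clock shows
# 15:30 London time, so local noon lags London by 3.5 hours, putting the
# ship 52.5 degrees west of London.
print(longitude_from_clock(15.5))  # 52.5
```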
It should have been a salutary lesson that prizes could inspire socially beneficial ideas from unexpected sources. Unfortunately, the Royal Observatory’s experts took it as a lesson that prizes could embarrass the likes of them. The Astronomer Royal, James Bradley, and his protégé Nevil Maskelyne went to extraordinary lengths to deny Harrison his prize while they struggled to make progress with an alternative, astronomical method of determining longitude. Bradley used his authority first to delay sea trials of Harrison’s latest clock, and then to send the clock – along with Harrison’s son William – into a war zone. When the clock passed this test with flying colours, losing a mere five seconds in an eighty-one-day journey to Jamaica, they insisted on more tests. After Maskelyne himself became Astronomer Royal in 1765, he impounded Harrison’s clocks for ‘observation and testing’, transporting them on a rickety wagon over London’s cobblestones to Greenwich. Oddly, they didn’t work so well after that.
It is true that Harrison did himself few favours – he was not so much a stubborn genius as an irascible one – but it is hard to avoid the conclusion that he was unfairly rebuffed and perhaps even cheated. Harrison’s clocks did eventually become the standard way to find longitude, but only after his death.
Still, the longitude prize had inspired a solution, and the prize methodology was widely imitated. In 1810 Nicolas Appert, a chef and confectioner also credited with the invention of the bouillon cube, was presented with a 12,000-franc prize by Napoleon for inventing a method of preserving food that is still used in canning factories today. Unfortunately, the prickly reaction of the Observatory’s scientific establishment was widely imitated, too. In 1820 a French aristocrat, Baron de Montyon, bequeathed his fortune to the Académie des sciences with instructions that it be used to fund two annual prizes, one for ‘making some industrial process less unhealthy’, and one for ‘improving medical science or surgery’. The Académie was less than impressed with these irksome stipulations. If prizes were to be given out, they reasoned at first, surely some of de Montyon’s money should be spent on administrative support for those prizes, not to mention printing costs? In years when no prize was handed out, they started to use the money to buy library books and experimental equipment – all of which ‘might be necessary in the judging of competitions’.
A decade after de Montyon’s death, the Académie was scarcely even pretending to respect his will, looting his legacy to fund whatever projects it pleased. Ultimately the Académie began to turn down bequests for prizes, insisting on its right to make grants to favoured projects or people instead.
France was not alone. Across Europe and the United States, scientific societies shifted from chiefly awarding prizes to mostly handing out grants, or even employing researchers directly. What prizes remained tended to be handed out retrospectively and on a subjective basis – the most famous being the Nobel prizes – rather than, as with the Longitude prize and the Food Preservation prize, pre-announced with the aim of encouraging some future solution. Despite their early successes, innovation prizes were firmly supplanted by direct grants. Grants are a powerful tool of patronage; prizes, in contrast, are open to anyone who produces results. That makes them intrinsically threatening to the establishment.
Finally, after almost two centuries out of fashion, prizes are now enjoying a renaissance – thanks to a new generation of entrepreneurs and philanthropists who care more about getting solutions than about where they come from.
Netflix is a mail-order film rental company which recommends films to its customers based on what they’ve previously rented or reviewed on the company website. The better the recommendations, the happier the customer, so in March 2006 the founder and chief executive of Netflix, Reed Hastings, met some colleagues to discuss how they might improve the software that made the recommendations. Hastings had been inspired by the story of John Harrison, and suggested offering a prize of $1m to anyone who could do better than Netflix’s in-house algorithm, Cinematch.
The Netflix prize, announced in October 2006, struck a chord with the Web 2.0 generation. Within days of the prize announcement, some of the best minds in the relevant fields of computer science were on the case. Within a year, the leading entries had reduced Cinematch’s recommendation errors by more than 8 per cent – close to the million-dollar hurdle of 10 per cent. Over 2,500 teams from 161 countries, comprising 27,000 competitors, entered the contest. The prize was eventually awarded in September 2009 to a team of researchers from AT&T.
The use of prizes is catching on again, and quickly. Another company, Innocentive, has for the last decade provided an exchange where ‘seekers’ can offer cash to ‘solvers’. Both sides are anonymous. The problems are like the small ads on the world’s least romantic lonely-hearts website: ‘A technology is desired that produces a pleasant scent upon stretching of an elastomer film’ ($50,000); ‘Surface chemistry for optical biosensor with high binding capacity and specificity is required’ ($60,000).
Then there are more glamorous prizes, such as those under the aegis of the non-profit X Prize Foundation. The Archon X Prize for genomics will be awarded to the team that can sequence 100 human genomes within ten days at a cost of $10,000 per genome. That is unimaginably quicker and cheaper than the first private genomic sequencing in 2000, which took nine months and cost $100m for a single human genome. (Craig Venter, the director of that effort, is one of the backers of the new prize.) But it is the kind of leap forward that would be necessary to usher in an era of personalised medicine, in which doctors could prescribe drugs and give advice in full knowledge of each patient’s genetic susceptibilities. Another prize will be awarded to the manufacturer of a popular mass-production car that has a fuel efficiency of 100 miles per gallon.
The prize-giving model is the same each time. The X Prize Foundation identifies a goal and finds sponsors; it announces a prize and whips up the maximum possible enthusiasm, with the aim of generating far more investment than the prize itself; once the goal is achieved, it hands out the award with great fanfare and moves on to set other challenges. The prize winner is left with its intellectual property intact, and is free to capitalise on whatever commercial value that property may have.
‘One of the goals of the prize is to transform the way people think,’ says Bob Weiss, vice-chairman of the X Prize Foundation. ‘We were trying to create a sea-change.’
They have certainly made an impact. And others have trodden a similar path. There is, for example, an ‘Mprize’ for creating long-lived mice, with the hope, eventually, of lengthening human life too. And the Clay Mathematics Institute, a non-profit body set up in 1998 by a Boston businessman, is offering million-dollar prizes for the solution of seven ‘Millennium’ problems in mathematics. (Not everybody responds to such incentives. The first such prize was awarded to the reclusive Russian genius Grigory Perelman. He ignored it.)
But all these prizes are dwarfed by an ambitious scheme that promises to unleash the true potential of innovation prizes. Five national governments and the Bill and Melinda Gates Foundation have put $1.5 billion into a prize called an ‘advance market commitment’ to reward the developers and suppliers of a more effective vaccine against pneumococcal diseases such as pneumonia, meningitis and bronchitis. A prize is needed because, even with a patent, no pharmaceutical company could expect to reap much reward from a product that will largely benefit the very poor. Pneumococcal infections kill nearly a million young children a year, almost all of them in poor countries.
As John Harrison could have attested, the problem with an innovation prize is determining when the innovator has done enough to claim his reward. This is especially the case when the prize is not for some arbitrary achievement, such as being the fastest plane on a given day – remember the Schneider Trophy, which inspired the development of the Spitfire – but for a practical accomplishment such as finding longitude or creating immunity to pneumococcal meningitis. Harrison was caught up in an argument between proponents of the clock method and the astronomical method. Similar arguments could emerge today. One pneumococcal vaccine might be cheap and fastest to market; another might be more reliable and have fewer side-effects. Who is to decide which deserves the prize – or whether both have won, or neither?
For this reason the vaccine prize takes the form of an agreement to subsidise heavily the first big orders of a successful vaccine. The developers do not reap their rewards unless they can persuade governments or citizens of poor countries to buy the vaccine – albeit at a bargain price – and they will receive their money slowly or quickly, in part or in full, depending on how the market responds. The prize also partly replaces the pricing-power that comes with any patent, because if the drug company wants to collect the prize it has to agree to offer the drug cheaply.
Given that only the very largest pharmaceutical companies spend more than $5 billion per year on research and development, a $1.5 billion prize should be taken seriously on hard-nosed commercial grounds alone. And it has worked: at the end of 2010, children in Nicaragua received the first prize-funded vaccines for pneumococcal disease.
There is more to come. The next target is a vaccine for malaria, which might require a prize of $5 billion to generate commercial interest. Prize enthusiasts think that even an HIV vaccine may be possible, and speculate about a fund of $10 billion to $20 billion, three times the total annual research spending of the largest drugs companies. This is serious money. But the wonderful thing about prizes is that they don’t cost a penny until success is achieved. This allows the ultimate combination: a completely open field, where failures are tolerated and the boldest, riskiest idea could succeed, alongside huge sums of money that are spent only when the problem is solved.
On 21 June 2004 – seven decades after Reginald Mitchell was overturning the conventional wisdom about what flying machines could do – an outlandish-looking aeroplane with a single, impossibly long thin wing and the name ‘White Knight One’ taxied down a runway in the Mojave Desert. White Knight One had been developed by the brilliant aircraft designer Burt Rutan, a genius in the mould of Mitchell, in the Galapagan isolation of a tiny desert town with a scattering of fast-food joints and gas stations and a vast parking lot for disused commercial airliners. (Says Rutan, ‘Innovation is what we do because there’s nothing else to do in Mojave.’) Slung under that eggshell-wing, between White Knight’s catamaran-style twin hulls, was a stubby little appendage, SpaceShipOne. Inside it sat a 63-year-old man named Mike Melvill. The age of private space flight – and with it the potential for space tourism – was about to dawn.
On the face of it, innovation prizes deserve credit for this epochal event. White Knight was one of two dozen competitors trying to win the Ansari X Prize, created by a non-profit foundation. (Some were unlikely challengers: one team was proudly sponsored by ‘the Forks Coffee Shop in downtown Forks’.) A few months later, after SpaceShipOne had flown two qualifying missions in quick succession, Rutan’s team secured the $10 million prize.
But that’s far from the whole story. We can also credit philanthropy: Paul Allen, the co-founder of Microsoft and one of the world’s richest men, bankrolled Rutan’s work for reasons reminiscent of the HHMI: he liked the idea and believed in the experimenter’s talent. Or we could equally thank hard-nosed commercialism: Rutan teamed up with Sir Richard Branson’s Virgin Group, which is determined to turn space tourism into a profitable business. Virgin Galactic has since commissioned a larger ship, SpaceShipTwo, with bigger windows and room to float around.
Take a longer view, and it’s government that deserves a pat on the back for the dawn of private space flight. Back in the 1950s, NACA – the predecessor of NASA – funded the X-15 rocket plane, which went on to fly at a height of 106 km, to the very edge of space, after hitching a lift on a B-52 bomber. This method of getting things into space fell into disuse, however, after President Kennedy focused attention on the goal of getting to the Moon, a task for which multi-stage, ground-launched rockets were the obvious choice. The price we paid was a loss of pluralism: air-launching, a promising avenue for reliable, low-cost satellite launches, was largely abandoned until the combination of profit, prizes and philanthropy came along to revive the technology and turn it into something with real-world value.
In short, the whole unlikely project of putting a man into space with private money succeeded on the back of an untidy jumble of intellectual influences and a tangled web of funding sources. It’s a jumble we should embrace, because it has delivered many other good things. The internet resulted from a project funded by Pentagon pen-pushers, but it took dorm-room innovators to unleash its potential; satellites and GPS, the Global Positioning System, were devised with government backing, but it’s unlikely that any bureaucrat would ever have brought in-car navigation systems to market.
The lesson is that pluralism encourages pluralism. If you want to stimulate many innovations, combine many strategies. Prizes could, in theory, replace the patent system – governments could scrap patent protection but offer prizes for desirable inventions. But to explain that idea is to see its limitations. How could the government know enough about the costs, benefits and even the very possibility of an innovation to write the rules and set the prize money for a competition? We know we need an HIV vaccine, but nobody knew we needed the internet until we had it. We couldn’t have established a prize for inventing the World Wide Web.
Prizes go a long way towards plugging the inevitable gaps left by bureaucrats less wise than Henry Cave-Brown-Cave and scientists less brave than Mario Capecchi, but they should add to rather than replace other methods of funding and encouraging innovation. The Millennium prizes are likely to be awarded to mathematicians who are already receiving public funding. The Schneider Trophy didn’t fund the development of the Spitfire, but it proved Reginald Mitchell’s quality and inspired Lady Houston’s contribution at just the right moment. The pneumococcal vaccine funding may impose pricing conditions on pharmaceutical firms, but it does not invalidate their patents, which can still earn money in other markets or royalties from subsequent technologies. Trial and error can be messy, and so, too, can the tangle of institutions needed to encourage it.
However we hand out the credit for Mike Melvill’s flight, it must have been a journey to remember. White Knight took off at 6.47 a.m. and over the next hour climbed to a height of almost nine miles, higher than any commercial airliner could reach. White Knight then released Melvill and his craft, which glided for a moment before Melvill fired its rocket engine. SpaceShipOne curved sharply upwards until it was travelling nearly vertically. It accelerated past the speed of sound within ten seconds; after seventy-six seconds, the engine shut down automatically. The ship, already over 30 miles or 50 kilometres up, continued to hurtle through the ever sparser atmosphere at over 2000 miles an hour until it reached, just barely, the 100-kilometre mark that is accepted to be the point at which space begins. As he reached the brink of space, weightless for a few moments at the top of his craft’s arc above the desert, Melvill fumbled past his oxygen tubes to pull a handful of M&Ms out of his left breast pocket. He released them and they drifted and bounced in all directions, floating around his head, breaking the silence as they clicked against the portholes of the ship.
*Supporters of the Hurricane grumble to this day that the Spitfire grabbed too large a share of the glory. The cheap, easy-to-build and effective Hurricanes did indeed outnumber Spitfires in the early months of the war, but it was the Spitfire’s design that won the plaudits.
*The Board of Longitude never gave Harrison his prize, but it did give him some development money. The British parliament, after Harrison petitioned the King himself, also awarded the inventor a substantial purse in lieu of the prize that never came. The sad story is superbly told by Dava Sobel in her book Longitude, although Sobel perhaps gives Harrison too much credit in one respect: it is arguable that by producing a seaworthy clock, albeit a masterpiece, he did not solve the longitude problem for the Royal Navy or society as a whole. To do that, he needed to produce a blueprint that a skilled craftsman could use to produce copies of the clock.