Arthur C. Clarke once posited, “Any sufficiently advanced technology is indistinguishable from magic.” I think I know what he meant. As a nonscientist, a completely untechnical person, I have always regarded semiconductor devices—microchips if you will—as magical. After all, they are just tiny pieces of…stuff; all solid-state, no moving parts. Yet pump an electric current through them, and they…do things. Like amplify sound, or memorize information. And, for as long as I can remember, semiconductor devices have kept getting better. Or, in the case of light emitting diodes, brighter.
With LEDs, I have similar feelings, only more so. Hook a light emitting diode up to a battery and you can actually see what it does. I was delighted to discover that LED researchers themselves are not immune to this sense of wonder. For example, John Kaeding, one of Shuji Nakamura's postgrads, is showing me around Shuji's basement lab at UC Santa Barbara. Under a microscope, Kaeding touches two probes to a tiny dot on a sliver of wafer, producing a bright green glow. “It's a magical moment,” he muses. “We know how these things work but still, there's something about it that seems magic.”
“The best thing about LEDs, you can see the result, and that's the magic,” Warren Weeks, an ace crystal grower who formerly worked for the US blue LED specialist Cree, tells me as we sit sipping drinks on the roof of a hotel in his hometown of Charleston, South Carolina. “I've also worked on radio-frequency devices and they're not nearly so much fun—you've got to hook them up to an oscilloscope to find out what they're doing. With an LED, you apply the voltage and you can really see the brightness.”
In the past, in order to produce colored lights like the red-amber-green ones in traffic signals, you had to stick a filter in front of a white incandescent lightbulb. Filtering is cheap but inefficient, losing up to 80 percent of the light in the process. LEDs by contrast produce light that is colored to begin with, spiking at a specific wavelength. Unlike the often insipid-looking colors produced by filtered incandescents, especially old ones, LEDs shine with a pure, “saturated” hue (i.e., one not diluted with white).
On my desk in front of me as I write is a bright blue LED that Shuji Nakamura gave me on my second visit to Nichia in 1995. I have it connected, via a resistor, to a 9-volt battery. When I examine the device closely, most of what I see is package, not diode. The package is a solid transparent bell-shaped epoxy sheath about the size of the eraser on the end of a pencil. It serves to encapsulate the device and, through its dome lens, to disperse the light. Inside the plastic casing, about halfway down, are two vertical legs sculpted from silvery metal. These are terminals that connect the LED to its power supply. One—known as the anvil, due to its appearance—contains a recess into which the LED chip is glued. The recess is cup-shaped so as to throw the light upward. The other terminal is known as the post. It is just possible to perceive the two hair-thin wires that connect the chip to its terminals. The chip itself is almost impossible to see. When lit, it is too bright; when not lit, too small. Most LED chips are less than a millimeter square, about the size of a grain of sand.
When Asif Khan was showing me around his impressive lab at the University of South Carolina, I asked to see what gallium nitride—the basic material from which bright blue, green, white, and ultraviolet LEDs are fabricated—looks like. Flipping open a little plastic box, he used tweezers to pick up a round wafer. It was about the diameter of an Oreo cookie, but much thinner and with one edge lopped off. The wafer was made of sapphire. We associate sapphire with blue, but that color comes from naturally occurring impurities in the crystal. Man-made sapphire wafers such as the one Asif showed me contain no impurities, and hence are completely transparent. When he tilted the wafer to catch the light, iridescent patterns of pink and green became faintly visible. These indicated the presence of thin films of gallium nitride and related compounds. Invisible devices grown on a transparent substrate? Magic, to be sure.
The whole thing seems implausibly delicate, most unlike the schematics that I am familiar with from leafing through the technical literature. These typically depict LEDs as layer upon layer of materials, each of a slightly different chemical composition, the whole somewhat resembling a club sandwich. It turns out that many of the layers are only a few atoms thick, hence their apparent insubstantiality. Only when the wafer is etched so that metal electrodes—most commonly thin films of gold—can be sputtered (sprayed) on does it lose this aura of invisibility. On two-inch wafers, the current industry standard, you can fabricate approximately twenty thousand LED chips; on three-inch wafers, to which the industry is now moving, about forty-five thousand. There is still considerable variance in device performance across the wafer, hence there is plenty of room for LED manufacturers to make cost-reducing improvements to the production process for many years to come.
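For the numerically curious, the jump in chips per wafer is just geometry: usable chips scale roughly with wafer area, and hence with the square of the diameter. A few lines of Python (using the approximate figures quoted above, not a real yield model) make the arithmetic explicit:

```python
def chips_per_wafer(diameter_inch, chips_on_2_inch=20_000):
    # Usable chips scale roughly with wafer area, i.e. with diameter squared.
    # Real counts depend on die size, edge exclusion, and yield.
    return round(chips_on_2_inch * (diameter_inch / 2) ** 2)

print(chips_per_wafer(2))  # 20000
print(chips_per_wafer(3))  # 45000, since (3/2)^2 = 2.25
```

A three-inch wafer has 2.25 times the area of a two-inch one, which is where the roughly forty-five thousand figure comes from.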
Via the electrodes, every chip is tested using a wafer probe. An X-Y grid map is generated; then the wafer goes to a breaker, or a laser separator, to be diced into individual chips. The map is fed to a chip sorter, which sorts the chips at high speed into different bins by brightness, wavelength, or voltage. This process is in itself mind-boggling—how do you sort grains of sand?—but it need not concern us here. Suffice it to say that the best devices go to high-end applications like automotive headlamps or computer backlights, while the off-spec stuff ends up in cheapo-cheapo applications like decorative jewelry and Christmas tree lights. LEDs are packaged in various ways by specialist firms, many of them located in China.
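To make the sorting step concrete, here is a toy sketch in Python of the binning logic a chip sorter might apply. The wavelength range and brightness threshold are invented for illustration; real sorters use many more bins and test parameters:

```python
def bin_chip(wavelength_nm, brightness_mcd):
    # Hypothetical bins: reject chips outside a target blue wavelength range,
    # then split the rest by brightness (in millicandela).
    if not (440 <= wavelength_nm <= 470):
        return "off-spec"   # destined for the cheapo-cheapo applications
    return "high-end" if brightness_mcd >= 1000 else "standard"

chips = [(452, 1500), (461, 300), (480, 900)]
print([bin_chip(w, b) for w, b in chips])  # ['high-end', 'standard', 'off-spec']
```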
What exactly are LEDs, these tiny specks of magic material, and how do they work?
A modern light emitting diode is simultaneously one of the simplest and the most sophisticated of electronic devices: simple in the sense that it is a diode, a device that by definition conducts electricity in one direction only. This property makes diodes useful for converting alternating current to direct current, a process known as rectification. Unlike a transistor, which has three terminals, a diode has but two.
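Rectification is easy to picture in code. In this sketch an ideal diode passes current only when the applied voltage is positive; this is a deliberate simplification, since real diodes also impose a forward voltage drop (around 3 volts for a blue LED):

```python
import math

def rectify(voltages):
    # Ideal diode: conducts in one direction only, so the negative
    # half-cycles of an AC waveform are simply blocked.
    return [v if v > 0 else 0.0 for v in voltages]

ac = [math.sin(2 * math.pi * t / 10) for t in range(10)]
print(rectify(ac))  # the negative half-cycle comes out as zeros
```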
In its most basic form, a light emitting diode consists of two layers made of the same material. One is negatively charged, that is, doped with a tiny dose of impurities to give it an excess of electrons; the other, positively charged, doped to give it an excess of what electrical engineers are pleased to call “holes.”
Holes are represented schematically as particles, but in reality they are not particles at all: a hole is merely the absence of an electron. Gallium nitride pioneer Herb Maruska suggests a helpful analogy. Think of an atom as an egg carton, with some eggs missing. The eggs are electrons; the sites where there are no eggs, holes. A single egg in a carton with eleven empty spaces can shuffle around freely, whereas moving a single hole through a carton holding eleven eggs means shifting egg after egg out of the way. This is why electrons are always much more mobile than holes.
On the application of a voltage, excess electrons flow across the junction between the layers from the negative side of the LED. Once they reach the positive side, some of the electrons combine with some of the holes. They annihilate one another, giving up the ghost in the form of a photon, i.e., light.
The sophistication of an LED lies in the fact that, in order to get the devices to emit bright light, you have to optimize them. Instead of a single positive-negative junction, you have to build what is known in the jargon as a “double heterojunction” (aka “heterostructure”). This ponderous locution simply means a sandwich consisting of two layers of a slightly different material that are used to confine the active, or light emitting, layer. This layer, in its thinnest form, is known as a quantum well.
Early LEDs could only produce one photon for every thousand electrons. The point of quantum wells is to improve the chances of electrons bumping into holes. The wells are extremely thin, just a few atoms deep. These tiny, almost two-dimensional trenches serve to trap the mobile charge carriers. Unable to escape from the trench, particles and absences of particles are forced to combine. If you like, you can imagine electrons as drops of liquid falling down the well and holes as bubbles rising up it.
(What makes a well quantum? As with anything to do with quantum mechanics, if like me you are untechnical, this is a question that it is probably better not to ask. In essence, “quantum” means something that is very small indeed.)
A light emitting (or laser) diode may contain any number of quantum wells, but the optimum number for mopping up all the mobile charge carriers in an LED seems to be three. This means that modern devices consist of at least ten layers: quite a sandwich. To grow quantum wells of atomic-level thinness and with sufficiently abrupt transitions between one layer and the next requires, as we shall see in the next chapter, extremely sophisticated computer-controlled equipment.
It helps, from the crystal grower's point of view, if you are good at visualization. “To be a good materials scientist, you have to be able to see the atom, you have to kind of be the atom,” explains Warren Weeks. “You have to say, Hey—what's this temperature going to do to a little atom on the surface of the wafer? And you get a feel for how chemical bonds break apart, for temperatures and gas flows, you can kind of visualize the atoms in the gas phase, because the reaction is going from gas to solid, so it's kind of strange.”
Light emitting (and laser) diodes are made from materials called semiconductors. The name comes from the fact that semiconductors sit midway in the spectrum of materials between conductors, like metals, and insulators, like glass. The best-known and most common semiconductor is silicon. It has four bonding electrons in its outermost shell, known as the valence band: a half-full complement, in contrast to conductors, whose few outermost electrons are free to roam, and insulators, whose full shells of eight hold their electrons firmly in place.
In addition to elemental semiconductors like silicon, it is also possible to synthesize compound semiconductors out of two elements, one having three bonding electrons, the other five, like gallium arsenide, gallium phosphide, or gallium nitride. The one with five (arsenic, phosphorus, nitrogen) shares its extra electron with the one with three (gallium), so that, on average, each atom has four, just like the elemental semiconductors. In addition to these so-called three-five compounds, you can also synthesize two-six compounds like zinc selenide.
To give an excess of electrons or of positively charged holes, to make the material negative- or positive-type, trace amounts of impurities known as dopants are added to the mix. In gallium nitride devices, silane—a derivative of silicon, which has four bonding electrons—is typically used to make n-type material (a semiconductor with extra electrons), while magnesium, which has two bonding electrons, supplies the holes.
All light emitting (and laser) diodes are made from compound semiconductors. Why go to the trouble of using a complicated material when a simpler, much cheaper one like silicon is available? The answer is that silicon does not normally emit light. It suffers from what is known as an indirect bandgap.
The bandgap is the amount of energy it takes to jolt an electron up from the valence band (where it is bound) to the conduction band (where it is free to move, and hence combine with a hole). The bandgap also determines the energy of the photon produced by the combination, hence the wavelength of the light emitted.
The higher the energy, the shorter the wavelength. Gallium arsenide, which emits infrared light at around 885 nanometers, is a narrow-bandgap material; gallium nitride, which on its own emits ultraviolet light at around 365 nanometers, is a wide-bandgap material. Nanometer wavelengths, produced by quantum wells just a few atoms thick? That explains why it is sometimes asserted that LEDs are the world's first ubiquitous nanotechnology. The degree of purity the materials have to exhibit is staggeringly high. Impurities on the order of two or three parts per million are enough to stop the show, as are nanoscale cracks in the crystal lattice.
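The relation between bandgap energy and wavelength is simple enough to check on a calculator: the wavelength in nanometers is roughly 1,240 divided by the bandgap energy in electron-volts. Here is a sketch in Python, using textbook bandgap values (about 1.4 eV for gallium arsenide, 3.4 eV for gallium nitride; these figures are my own assumptions, not the chapter's):

```python
def wavelength_nm(bandgap_ev):
    # lambda = h*c / E; with E in electron-volts, h*c ~= 1239.84 eV*nm.
    return 1239.84 / bandgap_ev

print(round(wavelength_nm(1.4)))  # 886 -- infrared, close to the ~885 nm of GaAs
print(round(wavelength_nm(3.4)))  # 365 -- ultraviolet, matching GaN
```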
In direct-bandgap semiconductors like gallium arsenide and gallium nitride, electrons zip across the gap between valence and conduction bands and combine with holes unobstructed. In an indirect-bandgap semiconductor like silicon carbide, the interaction is much more complicated. So much so that it requires the crystal lattice to shudder. Combinations still take place, but most do not produce light. Indeed, it is hundreds of times easier for light emitting combinations to occur in a direct-bandgap semiconductor than in an indirect one. Hence, as we shall see in the next chapter, Shuji Nakamura's decision to eschew silicon carbide in favor of gallium nitride. Though a blue light emitter, silicon carbide has an indirect bandgap and thus can never produce bright light efficiently.
So much for colored light emitters. But how do you get white light, which by definition is colorless, out of colored LEDs? The answer: in two ways, both of which add together lights of different colors. The first way involves mixing light from red, green, and blue LEDs (amber is sometimes also added). This delicate balancing act is relatively expensive. The second way is simpler and much cheaper. You take complementary colors: a blue light emitting diode and a yellow (yttrium aluminum garnet, or YAG) phosphor coated on the inside of the LED's plastic casing. The phosphor converts part of the blue light into yellow; the resulting blend is perceived as a cool white. Adding a second phosphor that emits in the red region produces a warmer white. Most of today's white LEDs are made this way. However, the efficiency of warm white devices is often much lower than that of cool white ones, so much research today goes into developing phosphors with better light conversion efficiency.
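The additive mixing can be caricatured in a few lines of Python. The RGB values below are illustrative only, not real colorimetry; the point is simply that blue plus the yellow the phosphor produces sums to something the eye reads as white:

```python
def mix(*colors):
    # Additive mixing: sum each RGB channel, capped at the usual 255 maximum.
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

blue_led = (0, 0, 200)       # blue light that escapes unconverted
yag_yellow = (200, 200, 0)   # blue light down-converted by the phosphor
print(mix(blue_led, yag_yellow))  # (200, 200, 200): perceived as cool white
```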
Man's fascination with luminescence dates back to ancient times. Aristotle and Pliny the Elder recorded light emitted by fungus and by the scales of decaying fish. Electroluminescence was first observed in 1907 by Henry Round, a Marconi engineer then working in New York. He noticed a “curious phenomenon,” namely, that by applying a voltage to crystals of sandpaper grit—carborundum, aka silicon carbide—he could produce a yellowish light. The first report of blue electroluminescence, also in silicon carbide, dates back to 1923. Such naturally occurring light emitters were the direct ancestors of the dim blue silicon carbide LEDs made by Nichia's American archrival, Cree.
In the mid-1950s, at the dawn of the semiconductor era, RCA's Rubin Braunstein observed infrared light emission from the then brand-new compound semiconductor, gallium arsenide. But the first person to fabricate a visible-spectrum LED and, moreover, to recognize the significance of what he had achieved in terms of its momentous potential applications, was a feisty thirty-four-year-old electrical engineer from the coalfields of southern Illinois named Nick Holonyak.
Holonyak made this first visible red LED in the fall of 1962, with funding from the US Air Force. He was then working at General Electric's Solid-State Device Research Laboratory in Syracuse, New York. In the February 1963 issue of Reader's Digest, Holonyak told an interviewer, “We believe there is a strong possibility of developing [the LED] as a practical white source.” The article went on to predict, “If these plans work out, the lamp of the future may be a piece of metal the size of a pencil point, which will be practically indestructible, will never burn out, and will convert at least ten times as much current into light as does today's bulb.” Prescient words.
Ironically, Holonyak would shortly leave GE, the company Edison founded, after his boss more or less showed him the door. Holonyak had been fooling around in domains that were of no interest to GE management. He returned to the University of Illinois, where he remains, still active to this day. Thus, at a time when GE was way ahead of everybody else, the company turned its back on LEDs, the technology that would eventually displace the lightbulb, GE's original raison d’être. “You can look at that and say, Are those people crazy that they didn't see that the semiconductor had a future in light emission?” an exasperated Holonyak would ask me thirty years after leaving GE. “Why didn't GE pursue it? I mean, they gotta be nuts, when you think about it.” But building businesses from scratch is evidently not GE's forte.
The invention of the light emitting diode was a by-product of a race to see whether, in the wake of the recent invention of the ruby laser, it would be possible to produce a semiconductor equivalent. Four groups participated in the race, including one from IBM and Holonyak's. It was ultimately won by Holonyak's colleague and friend at GE, Bob Hall. Having won, however, Hall promptly lost interest in semiconductor lasers, and went off to do something else.
The race to build the semiconductor laser was not motivated by any idea of what such a device would be useful for. The first laser diodes were mere laboratory curiosities, capable of operating only under highly restricted conditions. These included being dunked in liquid nitrogen and zapped with short pulses of current. Indeed, for many years the laser would be dismissed as “a solution seeking a problem.” Not until the 1980s, with the appearance of CD players and fiber-optic communications, would the tiny devices finally come into their own.
Early laser research was curiosity driven, in an era when big corporations like AT&T, GE, IBM, RCA, and Westinghouse were able to recruit the cream of the scientific crop. They lured scientists and engineers, not with stock options—an incentive that in 1962 had yet to be thought up—but with hefty salaries and freedom to pursue their instincts, wherever they led, unconstrained by commercial pressures. For researchers in the 1960s, the central labs of corporations were much like the universities where they would normally have worked. Corporate scientists published papers, attended conferences, even took sabbatical years just like professors. Today, when research horizons in the corporate sector are mostly measured in months, and blue-sky research is once again the exclusive domain of academe, such freedom seems very old-fashioned. Back then, however, it was the norm.
Holonyak was more down-to-earth than many of his peers. Unlike Bob Hall at GE, he would never drop research on light emitting devices but would continue to work on them throughout his long career. Holonyak had grown up poor, the son of immigrants from Eastern Europe, from what today is western Ukraine. His father was a coal miner. When little Nick was five, his dad gave him a pocketknife. With it the youngster learned how to make things for himself—slingshots, scooters, whatever: “You didn't go ask for things, you made them.”
In 1953, when transistor pioneer John Bardeen invited Holonyak to become his first postdoc at the University of Illinois in the then brand-new field of semiconductor research, Nick jumped at the chance. “We started in a bare room. We had to build everything: benches, all of our equipment…. And [we] learned everything from scratch.” This ability to build anything would stand Holonyak in good stead when it came to fabricating devices. Much as, almost forty years later, Shuji Nakamura's skill with his hands would help him to scoop the world with his bright blue breakthrough.
The first LEDs were made of gallium arsenide, which as we have seen emits infrared light. Infrared was good enough for Holonyak's peers. But Holonyak insisted that, as he put it, “Our concentration should be on the visible spectrum, because that's where the human eye sees.” Accordingly, in order to widen the material's bandgap, he added a dollop of phosphorus to the mix. He chose an unconventional growth method known as vapor phase epitaxy. “People told me that, had I been a chemist instead of an electrical engineer, I would have known that growing a crystal this way was impossible,” he would recall many years later. “No one in their right mind would have tried it.” Nonetheless, Holonyak's unorthodox method worked. The result was gallium arsenide phosphide, or GaAsP, the world's first-ever semiconductor alloy. It emitted a dull red light.
GE offered these first visible diodes for sale, via the Allied Radio component catalog, at $260 each. Lasers cost $2,600, an indication of the perceived difference in value between the two types of diode. For many years, LEDs were seen by researchers as the poor relation. Lasers, with their more complex structures (needed to amplify the light) and well-defined beams, seemed much sexier. Today, however, in terms of their applications and production volumes, LEDs are by far the more important device.
Holonyak's main contribution to the field would not be commercial products, but outstanding PhD graduates. At last count, he had trained over sixty doctoral students. First among them was George Craford, a self-effacing country boy from the corn state of Iowa who found physics more to his liking than the arduous family business of farming. He encountered Holonyak during a tour of Holonyak's lab at Illinois. “I saw him stick this little LED, a red speck of light that you could hardly see, into a Dewar flask of liquid nitrogen and, suddenly, there was a bright light that made the whole Dewar glow.” Beguiled by this magical demonstration, Craford signed up with Holonyak as his thesis adviser.
Over the years, professor and PhD would remain in close contact, speculating about the future of LEDs and how their devices would eventually replace Edison's lightbulb. “Nick and I have been talking about this stuff for twenty-five, thirty years,” Craford told me in 1994. “We've dreamed about it happening; the whole time we've been wondering—if you can replace tubes with transistors, why not tungsten lamps with LEDs? That thought has been there more or less since the beginning, although it seemed fairly preposterous early on.”
The first firm to commercialize LEDs, using technology transferred from Holonyak's lab, was the chemical company Monsanto. “They were in the fertilizer business; they basically had a phosphorus mine,” Craford recalled. “And they thought, Gee, there must be a use for phosphorus in all this emerging electronics stuff. That's how they got into making compound semiconductors, which is a pretty bizarre chain of events.”
Craford joined Monsanto in 1967. Two years later, he achieved a breakthrough, doping a device with nitrogen to produce the world's first yellow LED. “The red LEDs we were selling at that time, you basically wanted to be in a dark room to get the full effect. But this thing, boy, it really lit up in that dark room. It was a yellow device and it was vastly brighter than anything we had seen before.” The fact that the human eye is more sensitive to yellow than to red light helped. But Monsanto's marketing people were not impressed. “They came back with the feedback, Gee, our customers are using red, they like red, and they don't much care if it gets brighter, unless it's cheaper. If you can make it cheaper, that's good, but any other stuff is pie-in-the-sky.”
The first applications for red LEDs were as on/off indicators and seven-segment alphanumeric displays in calculators and digital watches. As a leading maker of calculators and other scientific instruments, Hewlett-Packard took great interest in the new technology, hiring Holonyak as a consultant and initiating a collaboration with Monsanto.
The third major US manufacturer of first-generation LEDs was AT&T. The phone company needed miniature, long-lasting, low-power indicator lamps for its multi-line telephones. On such phones, the lamps show which lines are being called and which are busy. A second application was to light up the keypad, so that phones could be used in dark places such as bedrooms.
Bell Laboratories developed its own technology, based on the indirect-bandgap compound, gallium phosphide. In addition to red, GaP LEDs could also be made, with the addition of a small amount of nitrogen, to emit a yellowish-green light. Some forty years on from their commercial debut, green gallium phosphide and red gallium arsenide phosphide LEDs are still manufactured and sold in huge quantities, for example, as the digital displays on clock radios.
For battery-powered calculators and watches, however, LEDs were simply too dim to see in daylight and used too much power. By the end of the 1970s, they had been replaced by liquid crystal displays. In telephone keypads, however, LEDs continue to be used, the difference being that these days, they are mostly blue or white, not green or red, and made from gallium nitride.
A new market like LEDs attracts hungry firms. Big Japanese manufacturers including Matsushita, Sharp, and Toshiba soon piled in. A vicious price war ensued. When the smoke cleared, the Americans—notably Monsanto and Texas Instruments—had quit the field. The last remaining US LED standard bearer was Hewlett-Packard, which George Craford joined in 1979. HP begat Agilent, which in 1999 formed LumiLEDs Lighting, a 50/50 joint venture with Philips of the Netherlands. In 2005 Philips bought out its American partner, and LumiLEDs became a wholly owned subsidiary of the Dutch firm. In the interim, however, a feisty new US challenger had sprung forth, in the shape of a start-up called Cree Research, based in North Carolina. Its specialty was blue LEDs. This firm has an important role to play in the saga of Shuji Nakamura and the solid-state lighting revolution. We shall learn more about Cree in chapter 6.
Among Japanese LED manufacturers there was also an unfamiliar name. Stanley Electric was a specialist maker of lamps for car companies like Toyota. Stanley's director of R&D, Toru Teshima, had long been haunted by the notion that a new and better form of lamp than the incandescent lightbulb might come along and destroy the company's core business. The appearance of the LED confirmed Teshima's worst fears. At his urging, the company threw itself into developing the new technology. By 1982 Stanley was producing the brightest red LEDs that anyone—including Craford, who collected a couple of them from Teshima on a visit to Tokyo—had ever seen. They were so bright it hurt your eyes to look at them, the first high-brightness LEDs.
Stanley's primary target was the car market. This had long been the goal of LED makers. Monsanto had been so optimistic about the future of LEDs in automotive applications that in 1973 the company placed an advertisement in the Wall Street Journal showing a car with LED headlights. In fact, it would be twenty years before LEDs came to be used in the exterior lights of a car. And even then, being red, they would be at the back of the vehicle, not the front, in the form of high-mount center brake lights. The first production car equipped with these LEDs—made by Stanley—was the 1994 Nissan Fairlady.
High-brightness LED brake lights were not only smaller, lighter, longer-lasting, and more robust than the incandescents they replaced; they were also safer. Out on the highway at sixty miles an hour, they turned on instantaneously, giving the driver of the vehicle behind a precious extra car-length's worth of reaction time. At the time of this writing, high-brightness white LEDs are being used as daylight running lights in upscale European cars such as the Audi A8. When I visited Craford in mid-2005 at LumiLEDs’ headquarters on the edge of Silicon Valley near San Jose Airport, he was confident that this was just the beginning. “Headlights will follow,” he predicted confidently. “It's a question of, Is it two years, or three years? But certainly within this decade, you're going to have LED headlights; the whole car is going to be LEDs.”
The reason for the improvement in LED brightness was the move to a more complex device, the double heterostructure. This, as we have seen, consists of two layers of slightly different material confining electrons and holes in a very thin layer, which in its ultimate form is known as a quantum well (a term that Holonyak claims to have coined).
The idea for the heterostructure came from Herbert Kroemer, an exact contemporary of Holonyak, who left his home in East Germany in 1948, at the time of the Berlin Blockade. In 1963, in the immediate aftermath of the great race to build a semiconductor laser, Kroemer was working at Varian Associates in Palo Alto, California. There, he attended a talk on the early lasers given by a colleague. Recall that these worked only when dunked in liquid nitrogen and zapped with short pulses of current. To be useful, lasers would have to work continuously, at room temperature. This, the speaker reported, had been investigated by experts and ruled out as fundamentally impossible. “That's a pile of crap,” Kroemer growled, with characteristic bluntness.
It was immediately obvious to him that all you had to do was trap the electrons by having a material with a wider bandgap confine a material with a narrower bandgap. This flash of insight would contribute to Kroemer winning the Nobel Prize for Physics thirty-seven years later. He wrote a paper on heterostructures and submitted it to Applied Physics Letters. The paper was rejected. (It was subsequently published, in another, less-read journal.) Then came the final irony. Varian refused Kroemer permission to work on his proposed new laser. What was the point? There would never be any applications for such a device. “There go compact discs and fiber optics,” Kroemer snorted, decades later. And bright blue LEDs he might have added, since they, too, are based on double heterostructures. Today, in a nice coincidence, Kroemer works at the University of California at Santa Barbara's Engineering Sciences Building, just a few doors down the corridor from Shuji Nakamura.
It was left to others to pursue semiconductor laser technology: notably, Zhores Alferov, co-winner of the Nobel Prize with Kroemer, and Izuo Hayashi and Morton Panish, who built the first room-temperature, continuous wave laser at Bell Labs in 1970. Seven years later, Holonyak and his students used liquid phase epitaxy to grow the first quantum wells. Shortly afterward, Russell Dupuis, another of Holonyak's graduates, grew the first quantum well lasers using a new and improved growth method called metal organic chemical vapor deposition. MOCVD, as we shall see in the next chapter, was the method that Nakamura would also adopt to make his bright blue LEDs.
In November 2003, in a ceremony at the White House, President George W. Bush honored Holonyak, Craford, and Dupuis with the National Medal of Technology, the highest honor the United States can bestow on its inventors. Despite such high-profile recognition, however, the Bush administration was not prepared to put its money where its mouth was.
In 1999 a proposal to launch a large-scale US national R&D initiative on solid-state lighting had been floated. The idea was to accelerate the transition from conventional lighting to energy-efficient LEDs. In addition to companies, the initiative would involve universities and government institutes like Sandia National Laboratories. The program would run for ten years with $50 million in annual funding. Senator Jeff Bingaman, a Democrat from New Mexico, sponsored a bill in the Senate that would create such an initiative, dubbed the Next Generation Lighting Initiative. This was subsequently lumped into the Bush administration's huge, bloated, controversial, oil industry-friendly Energy Policy Act of 2005. Funding for LEDs had been slashed to a measly $5 million a year.1 “That's not really enough to make an impact,” Craford told me. “The Bush government is more into trying to find more oil than trying to save energy.”
While the government may have missed the boat on funding research and development of core technologies, it can still do plenty to accelerate the market for solid-state lighting: promoting the adoption of energy-saving lights through advocacy, incentives, and rebate programs.
More on government LED initiatives in part 4. Meantime, let us return to the story of Shuji Nakamura.
1. It has since been increased, but not by much.