CHAPTER 4

Channeling Chance

What is hidden and unknown and cannot be discovered by scientific research will most likely be discovered by accident, if at all, by the one who is most observing of everything related thereto.

—CHARLES GOODYEAR

 

 

AN ACCIDENTAL INVENTION is a paradox of sorts. On one level, almost everything associated with the invention of something new seems to occur by happenstance. On another level, however, it seems that very little is accidental about accidental inventions. These mistakes tend to happen to the right people in the right place at the right time. We all know how important serendipity is for the success of almost everything, but we rarely explore this dual nature of chance.

Even the three princes of Serendip weren’t merely lucky. According to the sixteenth-century Persian fairy tale that gave birth to the word serendipity, three princes hailing from the land of Serendip (an old Persian name for Sri Lanka) always found themselves riding their horses past things that they weren’t seeking but that they ended up turning into brilliant insights. If you read between the lines, you’ll notice that the princes were always traveling to interesting places and that they were always on the lookout for chance wisdom. “One sometimes finds what one is not looking for,” said Alexander Fleming, who supposedly stumbled across some funny-looking bacteria-killing mold in his laboratory one day in 1928. It turned out to be penicillin, an antibiotic that changed the course of medicine and won Fleming a Nobel Prize. Disagreeing with such expressions of modesty was Louis Pasteur. “Did you ever observe to whom these accidents happen?” he said. Then he famously added, “Chance favors the prepared mind.”1

DROPPING THE CHIP

The story of Bernie Meyerson is a case in point. Meyerson is a big bear of a man with a bulldog face, and he speaks in his New York accent about how he almost dropped out of college to become a cabinetmaker. Instead of showing up for his junior year, he went into business with his brother and some associates and opened The Cabinet Shop, which operated a factory just outside New York City in the mid-1970s. “We’d start with a tree and run it through an evil machine we called Oliver, which would eat the tree and turn it into boards,” Meyerson recalls.2 In contrast to goods for sale at Ikea and Home Depot, all the kitchen cabinets and bedroom furniture made at The Cabinet Shop were custom built and finished in exquisite detail. “We did some fine cabinetry, let me tell you,” he says.

Then one day, Meyerson was walking down the street in upper Manhattan on his way to visit his girlfriend when he was lucky enough to run into his former City College physics professor. Bill Miller was a brilliant scientist who had once worked alongside Robert Oppenheimer, of Manhattan Project fame.

“What are you doing now?” Miller asked.

“I’m making cabinets.”

“You’re in the wrong line of work,” said the professor.

Miller knew that Meyerson had been one of his brightest students, someone who seemed destined to become an engineer or a scientist. In fact, Meyerson had grown up surrounded by Radio Shack parts. As a child, he played pranks with radios and conducted electromagnetic experiments that he entered in science fairs, and he graduated from the Bronx High School of Science, the city’s prestigious magnet school. Now Meyerson was supposed to be taking a year off from college, trying to figure out what he wanted to do with his life. But Miller could tell that cabinetmaking was luring the student away. The professor convinced him to return to college the next fall. Meyerson again immersed himself in his studies, and he continued at City College for his graduate work in solid-state physics.

One day in 1979, while doing a lab experiment, Meyerson had what seemed to be a minor mishap. He was handling some one-inch-square silicon wafers. Following normal procedure, Meyerson put a wafer into a furnace to purify the material. Afterward, to clean it, he dipped it in hydrofluoric acid. The chip was then supposed to react with the oxygen in the air to form a microscopic layer of silicon dioxide, also known as silica or window glass. The thin glass surface would serve to protect the chip.

But as Meyerson was removing the wafer from the solution using a pair of tweezers, he accidentally dropped the chip onto a dirty metal hood, which had some “grunge” on its surface, he recalls.

When he rinsed the grungy chip with water, he noticed something odd. The chip didn’t get wet. “It should have behaved like windshield glass,” he says. Yet this substance didn’t retain even a bead of moisture. Confused, he put the chip into a beaker of water for ten minutes, left the room, and then came back to remove it. To his surprise, it was still dry. He thought, “That’s impossible.” Needing to get back to his work, he “filed it away under scientific inconsistency,” he says.

Perhaps his work as a cabinetmaker gave Meyerson an eye for fine detail that others might have missed. In any case, he knew that what he had just seen didn’t make sense. He knew that it contradicted what he had read in physics textbooks. Nevertheless, Meyerson didn’t immediately see what use his accidental discovery would be. “End of episode,” he says.

Three years later, now working as a research scientist at IBM’s Thomas J. Watson laboratories in Yorktown Heights, New York, Meyerson saw the happy accident come back into play. Like everyone else in the semiconductor industry, he was looking for ways to make faster and more powerful integrated circuits to drive the next generation of computing and communications devices. As always, that meant packing more circuits into less space on a fingernail-sized chip. And as always, accomplishing that would require some sort of breakthrough in the process of manipulating microscopic materials. The fact that the silicon chip had stayed dry in water remained stuck in Meyerson’s mind, and he set out to discover why that was so. He began thinking about how the answer might change the future of the technology.

Meyerson’s chance encounter with his professor, along with his mishap in the laboratory, led to an opportunity. This accidental confluence of events would lead to a new kind of semiconductor that would result, twenty-five years later, in nearly $30 billion in annual revenue for IBM and other chipmakers.

“I can trace it all to dropping the silicon wafer as a grad student,” Meyerson says. “It enabled this cascade of things to happen.” But before we get into what the actual breakthrough idea was, we need to understand a few things about chance and how inventors throughout history have gone about channeling it.

EXPLOITING SERENDIPITY

It isn’t difficult to find examples to illustrate how serendipity can be exploited by inventors for commercial gain. History is replete with mishaps that led to great products, industries, and fortunes, especially when it comes to the accidental creation of new materials.

One day in 1876, for example, Swedish chemist Alfred Nobel badly cut his finger on a piece of glass in his laboratory. He applied collodion, a soothing ointment, to the wound, but the pain kept him awake that night. He began wondering about the properties of this gel and whether it might fill a special need in the lab. When he tested it, the results were impressive. It turned out that the combination of collodion and the volatile substance nitroglycerine formed the basis for his invention of gelatinous dynamite, one of several commercially successful explosives that made Nobel a millionaire. Nobel, of course, later donated much of his resulting fortune to establish an annual series of prizes in his name.

In another case, a DuPont research chemist named Roy J. Plunkett was experimenting with tetrafluoroethylene, a gas used in research on Freon refrigerants, one day in 1938. One of the pressurized cylinders of the substance seemed to malfunction. The gas failed to release even though a colleague had opened the valve. The two men set it aside to examine later. When Plunkett sawed open the cylinder, however, he found that the gas had somehow solidified into a mysterious white powder. Upon testing it, he found it to be more slippery than any other known polymer. DuPont later named it Teflon. First used to coat gaskets inside the first atomic bombs and then to line some of the first NASA spacesuits, the Space Age material was later applied to billions of dollars’ worth of pots, pans, and muffin trays.3

In an even more famous case of accidental discovery, an early radar pioneer named Percy L. Spencer was walking through one of the laboratories at the Raytheon Company one day in 1946. He paused next to a magnetron tube, the heart of a radar system. According to legend, he suddenly noticed that a candy bar in his pocket was melting. Instead of throwing it away, washing his hands, and forgetting about it, Spencer took note. Pretty soon, he was aiming the tube’s microwave radiation at a bag of unpopped corn kernels. Sure enough, according to his account, he quickly had fresh popcorn in his hands. The first industrial-sized microwave ovens, sold under the name Radarange, were on the market by 1953. They weighed hundreds of pounds, didn’t cook food very well, and were widely ridiculed. But slimmed-down models eventually became standard equipment in homes and restaurants.

One day in the early 1950s, Swiss engineer George de Mestral came back from a walk in the woods and noticed that his jacket was covered in cockleburs: sticky round clusters of seedpods. Instead of being annoyed and throwing them away, he took a closer look. He noticed that the spikes on the burrs ended in tiny hooks. Eight years later, de Mestral succeeded in developing an artificial fastener crafted from nylon that mimicked the effect. Combining the words velvet and crochet, he came up with the name Velcro and formed a company to market his invention.

Dr. James Schlatter was a chemist developing a new anti-ulcer treatment at the drug company Searle. One day in 1965, as he was heating a mixture of chemicals, he accidentally knocked over the glass flask. Some of the powdery substance spilled onto the outside of the flask and stuck to his fingers. This mishap probably wouldn’t have been of any consequence had Schlatter not licked his fingers a few minutes later before picking up a piece of paper. It was a lucky move. When he noticed the extremely sweet taste, he went back to sample more of the substance in the flask. Tests confirmed that this powder, known as aspartame, was two hundred times as sweet as sugar, with none of the bitter aftertaste of saccharin and other artificial sweeteners. The resulting product, NutraSweet, made billions of dollars for the corporation before the patent ran out in 1998.

All five of these inventions seem to have come about by chance. But what is seldom noticed is that these kinds of happy accidents probably happen every day to an unknown number of people all over the world. In the vast majority of cases, these events would seem insignificant or would even elude observation. The inventors, however, did not let these situations slip by them. Could they have been waiting for such accidents to happen? In a sense, they were. Since history is filled with many accidental discoveries, inventors who know this tend to keep their eyes open for the unexpected.

This means that luck may come your way only if you are ready to welcome it when it does. “The first rule of discovery is to have brains and good luck,” suggests mathematician George Polya. “The second rule of discovery is to sit tight and wait till you get a bright idea.”4 Harold “Doc” Edgerton, the MIT-based inventor of underwater cameras, stroboscopes, and sonar improvements, is more specific in his formulation of how to anticipate chance: “By combining hard work, careful awareness, perseverance, and unconventional thinking, a scientist could find himself or herself in the right place at the right time to experience serendipity.”5

Successful inventors seem to make their own luck. They simply look much harder for clues. They stand ready to embrace the odd occurrences that may lead to something big. In other words, they learn to channel chance. They are on the lookout for random inputs that can generate a surprising new output. Harvard creativity scholar David Perkins puts it this way: “Chance is an engine of insight.” Not always, of course. Most of the time, an accident or a random occurrence will turn out to be only that: something meaningless that offers no lessons.6

But when you realize that chance can be channeled, you can see that the examples we’ve just discussed weren’t pure luck; rather, they were extraordinary opportunities in disguise. Alfred Nobel, in fact, had already invented dry dynamite sticks five years before he cut his finger. His brother and five other men had previously died in a nitroglycerine explosion—an accident of a more serious kind—and Nobel had since been on a constant search for new materials to combine with the unstable substance. His invention of gelatinous dynamite, which later evolved into what we know as plastic explosives, was the result of a minor accident that played into his existing search pattern. Incidentally, Nobel had noble intentions. Gelatinous dynamite was designed for civil engineering projects such as creating railroad tunnels in mountains, and not for use in weaponry.

The other accidental inventors deployed the same kind of deliberate reaction to random events. Roy Plunkett was working in a lab where the mission was to develop useful chemicals and materials. True, he could have ignored the mishap that led to Teflon, but he was observant enough to realize that when a known material behaves in a strange way, finding out why can lead to something new. Similarly, Percy Spencer was not a narrowly focused defense industry employee. Rather, he was a prolific inventor, with one hundred twenty patents to his name. He jumped on this new microwave application immediately. The morning after his serendipitous discovery, he was back in the lab using microwaves to blow up eggs. In the same way, George de Mestral didn’t simply glance casually at the cockleburs on his coat; he immediately put them under a microscope to study their tiny hooks and their behavior in detail. Finally, James Schlatter’s inadvertent success with aspartame wasn’t a fluke. As it turns out, other artificial sweeteners, including saccharin about one hundred years earlier, were the result of remarkably similar industrial accidents.

STUMBLING INTO SUCCESS

Some inventors have no right to be successful. An inventor sometimes approaches the problem at hand with little or no theoretical knowledge and yet somehow hits upon a breakthrough anyway, confounding experts and expectations. But is it really sheer serendipity?

Consider the strange story of Charles Goodyear and his world-famous invention. Goodyear was born in 1800 in New Haven, Connecticut, and on his twenty-first birthday was named a partner in the family hardware store, A. Goodyear & Sons. By then, factories all around the world were producing rubber products, but their applications were severely limited because the material became too soft to use in the summer and too hard to use in the winter. Discovered in 1735 by French explorers who pillaged trees in Peru, rubber was given its name by famed scientist Joseph Priestley, who noticed that it could be used to “rub out” written errors. It was a substance born of plunder and blunder, and it seemed destined to be made popular by someone who made more than his fair share of mistakes.

Goodyear, who at a young age fancied himself an inventor, came across the rubber problem by chance. One day, he walked into a shop near his family’s store to purchase a rubber life preserver. After taking a look at the air-intake valve on the tube, he told the shopkeeper he would be able to improve it. When he returned a few days later with an idea for how to do so, the shopkeeper told him he’d probably find it more lucrative to do something to improve the rubber itself, rather than the valve. Goodyear took the suggestion seriously.

In the 1830s, speculators were buying India rubber (so named despite its South American origins) and promoting the “miracle product” as never before—for use in covering wagon wheels, for water-protective boots and trousers, for flotation devices, and for dozens of other things being dreamed up by American and European entrepreneurs. The excitement over this special India rubber set off an investment frenzy, with new rubber companies and retailers cropping up everywhere. But the India rubber boom came to an abrupt end in the hot summer of 1835. The industry melted down when everything made from rubber suddenly began turning into a gooey mush. The odor was so foul that the material had to be buried.

Some speculators thought that the heat of that summer caused a fluke occurrence, but the same thing happened the next year. So many investors lost their pants in the rubber bust that dozens of banks crashed along with the fledgling industry, and that in turn caused unrelated companies to go belly-up. The Goodyear family’s hardware store was one of the thousands of businesses to go bankrupt around that time.7

In the heat of the frenzy, Charles Goodyear had dedicated himself to finding a way to “cure” rubber: to protect it from temperature swings and make it into the durable, versatile material he believed it was destined to be. “No one knew any more about rubber or the chemistry of rubber than he did, and he knew nothing,” wrote author Wilson Mitchell.8 Goodyear later suggested that he wouldn’t have taken up the problem if he had known how difficult it would be: “I was blessed with ignorance of the obstacles I had subsequently to encounter,” he wrote. If Goodyear was to find success in this field, he would have to stumble across it, because he truly didn’t know what he was doing.

Goodyear first began to cure rubber by using things he found around the house. He tried mixing it with salt and pepper, and when that didn’t work, he tried chicken soup. He experimented with witch hazel, cream cheese, and ink. When he tried magnesia, the results were so much better than anything else that he “laughed with joy,” according to his journal. He made book covers, piano covers, shoes, and slacks using the mixture. He was briefly praised as the man who saved the rubber industry.

With his first profits, Goodyear bought a new house in the industrial town of Woburn, Massachusetts, and set up a laboratory there. But when the next summer came around, the new material began to melt on schedule. People wearing Goodyear’s rubber shoes were sticking to the ground, and those wearing his rubber trousers had to carefully peel off their pants.

With the failure of his product, Goodyear lost all his money. Neighbors thought he was a pleasant but hapless madman. Perhaps he should have given up at this point. Instead, he tried to add quicklime to the magnesia-based formula but discovered that “the weakest acid, even apple juice, would destroy the product.” He discarded the magnesia and used pure quicklime but found that it dissolved the rubber gum completely. He thought he had hit upon a happy accident when he buried in his backyard a sticky, smelly rubber shoe cured with the mysteriously named “aqua fortis.” Goodyear didn’t even know exactly what was in this chemical mixture, but when he later dug up the shoe, he found that the smell and stickiness had gone away. But he was unable to repeat the experiment with any success.

He then tried a product cured with nitric acid and sulfuric acid that he thought was good enough to sell in a newly opened store on Broadway in New York. The U.S. government was impressed enough with the “acid-cured” product to place an order for one hundred fifty rubber mailbags for postal workers. After collecting $5,000 for the sale, Goodyear left on a summer vacation. Once again, the product began melting, and again he lost everything he had. He sold his house, pawned his possessions, moved his family in with his brother, and rarely had enough money to buy food, but he continued to wear a rubber outfit whenever he was in public. If anyone was in need of some serendipity, it was Charles Goodyear. He had been at work on the problem around the clock for nearly a decade and had gotten nowhere.

Then one day, while working in his brother’s kitchen, Goodyear inadvertently left a slab of rubber on top of the wood-burning stove. Because he knew well that heat melts rubber, he had never thought of doing this deliberately. But when the stove singed the outside of the substance, it produced a remarkable protective layer that prevented the rubber from melting further. It was a completely counterintuitive solution to the problem. “I was surprised,” Goodyear wrote, “to find that a specimen, being carelessly brought into contact with a hot stove, charred like leather.”9

Yet this was the accident that led to his patented vulcanization process. In his 1844 patent application, Goodyear gave the following description: “My principal improvement consists in the combining of sulfur and white lead with the india-rubber, and in the submitting of the compound thus formed to the action of heat at a regulated temperature, by which combination and exposure to heat it will be so far altered in its qualities as not to become softened by the action of the solar ray or of artificial heat. . . . [N]or will it be injuriously affected by exposure to cold.”10

Goodyear acknowledged that his breakthrough wasn’t exactly the result of theoretical knowledge. “I admit that this was not the result of scientific investigation,” he wrote. But he was certainly practiced enough in observing variations in his favorite substance that he was able to recognize the breakthrough when it happened. The stove-singed layer that protected the rubber was only a few millimeters thick, but he saw how consequential that layer was. He may have known almost nothing about chemistry, but his powers of observation were excellent. He was keenly anticipating his lucky break, and when it arrived, he didn’t let it pass him by.

Goodyear was flooded with orders for his new vulcanized rubber. He paid off all his debts and opened factories that eventually employed tens of thousands of workers. But he priced his product too low and spent lavishly on advertising and promotion, a business formula that put him back in debt. The fame, and the fortune attached to his name, came only after his death in 1860 at the age of fifty-nine. The world-famous Goodyear Tire & Rubber Company was actually founded decades later, in 1898, by entrepreneurs who adopted the late inventor’s name in tribute; they eventually grew wealthy selling rubber bicycle tires, insulation for electrical wires, and later, automobile tires.

Was Goodyear lucky? Eventually he was. But he was also persistent in pursuing his good fortune and skilled enough to spot the breakthrough.

ENCOUNTERING THE ELEPHANT

Now let’s return to the story of Bernie Meyerson. Like Goodyear, Meyerson was faced with a problem with his materials. Silicon, which is refined from ordinary beach sand, was found to be an ideal semiconductive material. Another element, germanium, was also found to have this core property. Semiconductors were so named because they were somewhat good and somewhat bad at conducting electricity. They are semigood, which was good enough to serve as the foundation for the first transistors when they were invented at Bell Telephone Laboratories in 1947 and 1948.

Silicon can conduct electrons one second and then block them in the next. This is critical because when you’re integrating dozens, hundreds, thousands, or millions of electric circuits on a silicon chip, what you need is the ability to control each circuit independently.11 If the electricity jumps between circuits across the chip—a defect known as tunneling—the entire invention is useless. The finer your control, the more powerful you can make the chips.

But silicon is also an excellent “garbage collector,” in that the surface tends to react with things you don’t want it to. To kill any junk on the surface, engineers would bake the silicon wafers in high-temperature ovens at about 1,000 degrees centigrade. Such temperatures prevented the use of more sensitive alloys; they might improve the performance of the chip but couldn’t stand such heat. As it turns out, the main junk that engineers wanted to bake away was the native oxide layer, the same substance that Meyerson suspected did not exist. If it really was there, the chip he had dropped and rinsed three years earlier would have gotten wet. But if the oxide layer wasn’t present, he thought, perhaps such high temperatures wouldn’t be needed, and that might enable the use of more versatile, low-temperature alloys.

What Meyerson had in mind was combining, or “doping,” silicon with another element into a material that would give much higher performance. Like Goodyear, Meyerson was focused on an ultrathin layer on the outside of the substance. But in contrast to Goodyear, Meyerson had the opportunity to research the underlying science. “You can either stand on the shoulders of giants,” he says, using Newton’s famous phrase, “or you can try to be a giant on your own.”12 (In all fairness to Goodyear, there was then little published science in his chosen field.) Choosing the stand-on-shoulders approach, Meyerson studied how his predecessors had dealt with the problem. He read that physicists had already tried to grow an alloy of silicon combined with germanium.

One of them was Herbert Kroemer, a German-born physicist who was working at RCA Laboratories and later at Varian Semiconductor. In a 1954 paper, Kroemer proposed a silicon germanium alloy, calling it an example of a “heterojunction bipolar transistor.” But he was never able to get these things to operate. “They didn’t work at all,” says Meyerson. “His physics was dead correct, but the materials were dreadful. I had to figure out why it didn’t work.”13

First, Meyerson needed to verify that the accident he had observed as a grad student was no accident. Not wanting to bias the experiment, he had someone else re-create it. He sent an IBM chemist named Reed McFeeley to Brookhaven National Laboratory with pure silicon wafers and a potion of what he told McFeeley was a “magic elixir.” The elixir wasn’t magic at all. It was ordinary water mixed with 10 percent hydrofluoric acid, the same stuff Meyerson had used in his accidental grad school experiment.

As per Meyerson’s instructions, McFeeley baked the chip in an oven, dunked it in the elixir for ten seconds, put it in a vacuum chamber, and then went to lunch. When he returned, he dropped the chip into a beaker of water and called Meyerson. When he pulled the chip out, it was indeed bone dry. “He thought I had done something magic to it,” Meyerson says. But Meyerson swore that wasn’t the case. Instead, he broke the news to McFeeley for the first time: “All the chemistry you’ve ever read in this field is wrong.”

If there was no oxide on the surface, then what was on the surface of the chip? Meyerson suspected that it was a layer of hydrogen, produced by the reaction with the hydrofluoric acid. McFeeley tested it and confirmed that this was true. “This was not a small effect,” says Meyerson. “This is like opening your closet and discovering there has been an elephant standing there for quite some time that you simply haven’t noticed.”

Of course, this could be an example of a corporate inventor tooting his own horn. But the results Meyerson was able to produce afterward serve as evidence that this was indeed a true breakthrough.

LUCKING INTO A PAYOFF

Attempting to keep his lucky streak alive, Meyerson took his fortunate finding and ran with it. He recruited a skunk works team of Watson Labs researchers that eventually grew to about two dozen people. They were able to produce silicon germanium chips baked at only 500 degrees centigrade, less than half the temperature previously thought possible. The low-temperature process for creating these new chips involved depositing the circuits on the silicon germanium using chemical vapors in an ultrahigh vacuum. By 1985, IBM had filed for the first patents on the breakthrough that put these devices at least ten years ahead of any known competition. In March 1990, the new technology received its first wide attention in a story published in the New York Times.14 A couple of years later, the company was able to produce 120-gigahertz microprocessors that yielded more than triple the theoretical maximum performance of any semiconductor technology then on the market.

Meyerson was especially lucky that organizational politics didn’t kill his project. All along the way, he never had the full support of IBM’s top brass. His team wasn’t fully funded or even officially acknowledged, and most team members were juggling other projects deemed by management to be of higher priority. Even after the breakthrough was published, patented, documented, and announced, there was no plan at IBM to commercialize the technology. That’s when Meyerson had the idea to take the technology outside the company, proposing it to other companies that might see a need to use it in their own products. This was something that IBM had not done in the past. But Meyerson was able to leap over layers of management and make his case directly to Lou Gerstner, IBM’s new CEO. In carefully prepared demonstrations, Meyerson showed Gerstner how these new chips performed compared with pure silicon or other alloys such as gallium arsenide. “Data wins,” Meyerson concludes from the experience.

As luck would have it, there was a ready market for these new chips in the exploding wireless telecommunications industry. Within a few years, IBM was making the new silicon germanium chips for Analog Devices, Alcatel, Tektronix, Hughes Electronics, National Semiconductor, Northern Telecom, and Harris Corporation. Everything from digital cell phones to network routers to cellular base stations was suddenly based on this new generation of chips made from silicon germanium, also known as siggy, after the chemical symbol SiGe. By 2000, siggy sales topped more than $1 billion per year in revenue for IBM, not including the millions in fees for the licensing of IBM’s patents to other manufacturers and the billions more that IBM’s partners were generating.

But for Meyerson, the crowning chance encounter came in 1999 when he was presenting a paper on the breakthrough at a conference in Japan. In his talk, he gave credit for the original idea of the heterojunction chip to Herbert Kroemer, a man whom he had read about and occasionally corresponded with but had never met in person. Meyerson didn’t realize that Kroemer himself happened to be in the audience. Overjoyed to hear his work referenced in this way, Kroemer came forward to introduce himself afterward, and the two went out to lunch together. As often happens in the annals of science and invention, a technology that is first proposed by one individual is actually created by another person or group years later. Only in retrospect can one see clearly how the entire process of science and invention leads to revolutionary products. As Danish philosopher Søren Kierkegaard once said, “Life can only be understood backwards; but it must be lived forwards.”

Meyerson’s work is exactly the sort of thing that the Nobel Prize committee looks for. Alfred Nobel launched the scientific prizes with the mission of rewarding those whose discoveries prove to have broad practical value. In a real sense, Meyerson demonstrated that one of Kroemer’s theories did have that kind of impact. As fate would have it, Kroemer found himself in Stockholm a year later, in December 2000, accepting the Nobel Prize in physics for his discovery of the heterojunction bipolar transistor. He shared the prize with Russian physicist Zhores Alferov as well as Jack Kilby, formerly of Texas Instruments, one of the inventors of the integrated circuit. Silicon Valley legend Robert Noyce, whose work at Fairchild Semiconductor coincided with Kilby’s breakthrough, certainly would have also joined them on the stage had he not died in 1990. In his official Nobel interview, Kroemer acknowledged that he was forced by his corporate bosses at the time to abandon his work and leave the application of his theory to someone else. “I wasn’t allowed to work on heterostructures,” he said, “because I was told that it would never have any applications.”15

In other words, Kroemer was leaving the completion of his effort to chance and fate, not knowing who would pick up on it or even whether someone would. “That’s why I refuse to make predictions about the future,” Kroemer said, “because I think the outcomes of science and technology are opportunistic rather than deterministic.”

In retrospect, it seems a good bet that Kroemer’s work would one day pay off. But who could have predicted that the payoff would begin with a former cabinetmaker trying to recover from a minor mishap in a grad school lab? If not for that fortunate mistake, coupled with Meyerson’s persistence in channeling chance, this invention might never have come to be. Or, at the very least, it probably would have been completed by someone else at a different time and place.

CHANNELING THE FUTURE

If serendipity is bound to show its face sooner or later, how long should you wait for it? Should you go about your business as usual? Should you constantly be on the lookout for luck, always searching for the unanticipated occurrence that will yield a breakthrough insight? What if this behavior takes years, or the better part of a career, to pay off (as in the case of Charles Goodyear or Bernie Meyerson)?

It’s clear that serendipity and chance, good luck and bad luck, accidents and coincidences are always going to come along at one time or another. The trick is to observe and leverage the more meaningful of these arbitrary occurrences into opportunities. That’s how chance is channeled.

Meyerson could have spent his life crafting furniture rather than inventing things that generate billions of dollars of revenue for IBM. It’s not that there is anything wrong with woodworking. But Meyerson’s career change goes to show that chance encounters can change a life and that happy accidents can change an industry and the world.

These days, as the Watson laboratories’ chief technologist responsible for worldwide telecommunications circuit design, Meyerson leaves plenty to chance. Soon, he says, human engineering of computers will reach a limit, and humans will have to create computers that design other computers with little or no human intervention. Circuits will have to know when to spontaneously generate new circuitry in a way similar to the way the human body grows new cells.

It’s called intelligent design, and no one knows exactly how we’ll get there. But the future inventors of this technology may be wandering the halls of Watson or another lab right now. Some of these researchers might specialize in cognitive science, others in nanotechnology, still others in micromechanics, and others in biotech. Perhaps one of these people will make a mistake and bump into someone at lunch who recognizes it as something more than just an accident.