9

Diamonds and Rust

Memory as Free

Anyone in the diminishing ranks of people who remember daily life with audio tape (reel-to-reel, 8-track, cassette), videotape (videocassette), or computer memory tape knows well the shared limitations of those technologies.

Eight-track tapes had superb sound reproduction—better than today’s MP3 files. Videocassettes did, too, and their fast-forward and reverse operations were more precise—and often easier—than today’s DVDs. And computer memory tape reels held massive amounts of data—more than their successors could match for years.

But tape, in any form, had one gigantic, infuriating, and ultimately fatal limitation: It was linear. That is, all of the memory stored on magnetic tape was sequential: The next item to be stored was encoded right after the last one. And what this meant in practice was that the average time to find anything on a particular tape was the time it took to wind through half of it. That is, if you were lucky, the memory segment you were looking for was directly adjacent to where you were on the tape. At worst—usually when you were trying to set the mood on a date—it was at the other end of the tape. And heaven forbid if you initiated your search in the wrong direction.…
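
The arithmetic behind that complaint is easy to check. Below is a minimal simulation sketch in Python (the tape lengths are made up for illustration) showing that if a tape starts rewound at one end and the item you want is equally likely to be anywhere along it, the expected distance you must wind through is half the tape's length.

```python
import random

def expected_seek(tape_length_m: float, trials: int = 100_000) -> float:
    """Average distance wound through to reach a target placed uniformly
    at random along a tape, starting from the rewound end."""
    total = 0.0
    for _ in range(trials):
        target = random.uniform(0.0, tape_length_m)  # where the item happens to sit
        total += target                              # distance from the start of the tape
    return total / trials

if __name__ == "__main__":
    for length in (90.0, 180.0, 360.0):  # hypothetical tape lengths, in meters
        print(f"{length:>5.0f} m tape -> average seek of about {expected_seek(length):.1f} m")
```

Starting from a random point in the middle rather than a rewound end does a little better, about a third of the tape on average, but the linear penalty never goes away.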

What made all of this particularly frustrating was that the world was still happily using a technology nearly a century old—the phonograph—on which it was a simple matter to locate a memory by merely picking up the needle and dropping it elsewhere on the surface of the record, accomplishing in seconds what might take minutes of cycling through a tape. It was the story of papyrus scrolls all over again.

Tape makers tried to overcome this inherent weakness by placing multiple tracks running in parallel on the tape, so that the operator could save time by jumping from one track to the next, but that also meant the information content (i.e., the width) of each of those tracks was now reduced.

Still, with some modifications in iron-oxide density, faster head speeds, better spindle motors, and denser tracks, the magnetic tape industry might have managed to keep up with the larger world of audio entertainment, television, and computers—had everything else remained intact. But those industries did nothing of the sort. On the contrary, these were some of the fastest-growing industries in business history.

The situation became especially acute in the case of computers (indeed, audio and video tape might have otherwise gone on for another generation as the medium of choice for consumers). Because the evolution of computers was taking place so quickly, and the amount of data that needed to be stored in memory was growing so exponentially, memory storage seemed about to be overwhelmed.

To understand why this was the case, we need to take a quick look at the history of computers to this point.

COLLECTING BITS

Early computers like ENIAC didn’t need much memory. They were primarily performing discrete operations, such as large-scale computation. Their operators essentially entered the raw data using switches—or, when more throughput was required, they used the nineteenth-century technologies of paper tape, punched cards, or typewriters rewired to send signals. Output was largely the same thing, with the typewriters now converted to printers.

In the late 1940s, as computers became more sophisticated and began to assume more tasks in statistics, finance, and testing and measurement—that is, as the quantities of data going both in and out of computers grew—the architecture of these machines began to change as well.

Computer architecture has three basic components: input/output, which brings data into the computer and brings out results; logic, which is the central, computational part of the computer; and memory. In the early days of computing, with input and output comparatively simple and memory mostly taking the traditional form of “printouts” on paper or tape, much of the industry’s concentration was focused on logic and the central processor. Because this operation was designed to be very fast, it was largely done with great banks of vacuum tubes. Tubes, essentially variations of De Forest’s triode, were very hot and short-lived (the technicians for the ENIAC ran around in bathing suits inside the huge computer, changing burned-out tubes every few seconds), but they had no equal for speed. One of the first assignments given to ENIAC, for example, was to solve a problem related to nuclear physics. The problem, estimated to take one hundred scientists a year to answer, was solved by ENIAC in two hours.1

Computer memory, meanwhile, was secondary, and when tube memory proved too expensive and unreliable, computer companies just stuck with paper printouts to deal with what were becoming huge “batch” operations. That’s why the invention of magnetic tape memory was welcomed by the industry. With tape, these giant new processing events—such as payroll—could be output onto big reels and saved for later printing as needed.

However, computer memory was growing not just in importance but also in scope. By the 1960s, the industry had consolidated into what was known in the United States as “IBM and the Seven Dwarfs” (Burroughs, Univac, Control Data, NCR, Honeywell, General Electric, and RCA), a nod to their comparative sizes, as well as Olivetti and Siemens in Europe, and Hitachi, Fujitsu, and NEC in Japan, among others. This was the era of “Big Iron,” as exemplified by the world-dominating IBM 360 Series. By now there were—if one looked at the data-processing world’s entire data pathway—six different kinds of memory relating to computing.

First, there was all of the memory, in the form of raw data coming off test and measurement instruments in laboratories; payroll, tax, financial, and personnel records being created throughout organizations; and statistics, such as from the latest census, that had to be stored on forms, paperwork, and field notes before being entered into the computer.

Second, there was internal memory—called “random access” memory, or RAM, because each item was given an address inside the computer from which it could be retrieved directly, in any order.

Third was “cache” memory. The central processor of computers worked at a very fast pace defined by the frequency of its internal signal—called its “clock speed.” Because there was no way that internal memory could keep up with this pace, the computer would look ahead at operations to come and import the needed data into the cache—a kind of data waiting room—so it would be there when needed.
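
To make the “waiting room” idea concrete, here is a toy sketch in Python; the access pattern, cache size, and eviction rule are invented for illustration and are not meant to model any particular machine. The point is simply that data fetched ahead of time from slow memory is already waiting when the processor asks for it.

```python
# Toy illustration of a look-ahead cache: upcoming addresses are copied from
# slow main memory into a small, fast cache before they are actually requested.

SLOW_MEMORY = {addr: f"data@{addr}" for addr in range(100)}  # stand-in for main memory

class LookAheadCache:
    def __init__(self, capacity: int = 8):
        self.capacity = capacity
        self.lines = {}          # address -> cached data
        self.hits = 0
        self.misses = 0

    def prefetch(self, upcoming):
        """Pull soon-to-be-needed words into the cache ahead of time."""
        for addr in upcoming[: self.capacity]:
            if addr not in self.lines:
                if len(self.lines) >= self.capacity:
                    self.lines.pop(next(iter(self.lines)))    # evict the oldest entry
                self.lines[addr] = SLOW_MEMORY[addr]          # the slow fetch, done early

    def read(self, addr):
        if addr in self.lines:
            self.hits += 1                # fast path: the data was already waiting
            return self.lines[addr]
        self.misses += 1                  # slow path: stall and go to main memory
        return SLOW_MEMORY[addr]

cache = LookAheadCache()
program = [1, 2, 3, 4, 1, 2, 5, 6, 3, 4]   # hypothetical sequence of memory accesses
cache.prefetch(program)                     # the "look ahead" step
for addr in program:
    cache.read(addr)
print(f"hits={cache.hits} misses={cache.misses}")   # every read is a hit here
```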

Fourth was “read-only” memory, or ROM. Computer users discovered early on that there were certain operations that were constantly in use, and that reloading them into the computer was a waste of time. The result was the creation of a region of memory designed to permanently hold these programs in a secure place in the computer where they could be easily accessed but not easily replaced or modified. Early computer programmers had also learned pretty quickly that, rather than writing all of the steps of a program in the most fundamental computer assembly code, they could develop “languages” (among the most famous being COBOL, created by future rear admiral Grace Hopper of the U.S. Navy) and tools to simplify their work and to create a sort of memory library of programs to build upon.

Fifth was archival memory. In a typical operation, the results from the computation process went back to RAM waiting to be downloaded as output. However, in the early days of computing there wasn’t enough memory to go around—hence the appeal of tape memory as a place to store information peripherally to the operation of the computer itself. Today the last regular use of magnetic tape with computers is as long-term information backup and archive for personal-computer memory security.

Sixth—and often not counted in the process—was output memory. That was what happened to all of that processed output when it was eventually put to use in offices, laboratories, and classrooms. Despite all of the talk of the impending “paperless office,” the real result of this explosion in computer output was mountains of paper printouts that were difficult to access and use.

A NEW SPIN ON MEMORY

That was a lot of memory—and not a lot of good solutions for managing it. By the 1950s, the need for reliable, fast, and electronic memory had become desperate. There were several candidates, and all were put to use.

One of these was magnetic core memory. This technology, invented separately by IBM’s Frederick Viehe and Harvard’s Way-Dong Woo and An Wang (the last of whom later became a computer industry tycoon), consisted of tiny magnetic metal rings—the “core”—woven together on a grid of metal wires. Each of these rings, addressed through the warp and weft wires of the grid, could then be turned on and off (1 or 0) as a single bit of memory.
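
A core plane can be pictured as a two-dimensional grid of one-bit cells, each selected by energizing one row wire and one column wire. The following sketch in Python models a hypothetical 8-by-8 plane; real core memory also had to cope with destructive reads and sense wires, which this toy version ignores.

```python
class CorePlane:
    """Toy model of a magnetic-core plane: a grid of rings, each holding one
    bit, addressed by the intersection of a row wire and a column wire."""

    def __init__(self, rows: int = 8, cols: int = 8):
        self.bits = [[0] * cols for _ in range(rows)]

    def write(self, row: int, col: int, value: int) -> None:
        # Energizing one row wire and one column wire flips only the ring at
        # their intersection; every other ring sees too little current to flip.
        self.bits[row][col] = 1 if value else 0

    def read(self, row: int, col: int) -> int:
        return self.bits[row][col]

plane = CorePlane()
plane.write(3, 5, 1)
print(plane.read(3, 5), plane.read(0, 0))   # -> 1 0
```

The scaling problem noted in the next paragraph is visible even in the toy: every additional bit is literally another ring on the grid.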

Core memory was not easy to manufacture but became cheaper after computer companies found low-paid workers around the world to do the tedious stringing—most famously, Scandinavian seamstresses who had been left unemployed by the automation of the local textile industry. Core memory was fast and had the unique advantage of retaining its data even after the power was turned off. But it was very difficult to test for failures, and most important—a weakness that eventually led to its demise—it didn’t scale well. Every added bit meant an added ring—an unsustainable model when memory reached millions of bits.

The second solution had an unpromising start but proved to be both enduring and remarkably adaptable. Drum memory was invented in 1932 by the Austrian punched card–maker Gustav Tauschek, who likely saw it initially as an audio recording device and shrewdly went back to the iconic drum-cylinder format of the early phonographs and kinetoscopes. When IBM bought his company, the patents went with it and became a long-term initiative in Big Blue’s labs.

By the 1950s, the drum-memory design that emerged involved “painting” the outside wall of a spinning drum with ferromagnetic material—typically iron oxide (rust)—then stacking an array of read-write heads up against that wall. The drum was then spun rapidly and the heads created multiple tracks. This, of course, was very similar to magnetic tape; you still had to wait for the rotation of the surface to get to the right spot, but the sheer number of heads and tracks cut the search process considerably.

But at its best, drum memory barely matched the capacity of magnetic tape, and its access times were much slower than core memory’s. Still, it pointed the way to a far better solution. That project, too, began at IBM, but the work was taking place in San Jose, California. That location proved important because the creation of this new technology—disk memory—would require all of the creativity, improvisation, and contempt for rules for which Silicon Valley was already becoming known.

Rey Johnson hadn’t planned to be one of the century’s great contributors to artificial memory. He had, in fact, started out as a high school science teacher in Michigan. But he was a born problem solver, and when he designed an electromechanical device that could automatically “read” pencil marks on standardized multiple-choice test forms, he thought the maker of those forms, IBM, might be interested. But IBM wasn’t interested. Johnson shrugged and went back to teaching.

That was 1932. Two years later, as standardized testing took off and began to swamp the small army of human graders, Big Blue took a second look. This time IBM offered Johnson a job as an engineer in its Columbia University and Endicott, New York, laboratories. There, Johnson spent twenty years becoming an acknowledged expert in punched card–memory printers and sorters.2

The year 1952 was a landmark year in the story of computers. As legend has it, a survey had estimated that the entire U.S. market for mainframe computers was seventeen machines. IBM’s new CEO, Tom Watson Jr., son of the founder, took an enormous risk and ordered his company to build nineteen—the first to be delivered in 1952. By then, IBM knew it had made one of the smartest decisions in business history.

But now the company had a whole new set of problems. IBM alone was producing 16 billion punched cards per year—and customers were complaining not just about the storage problems posed by all of that thick stock paper, but also that many of these stacks of cards had to be reloaded every day for the same purpose, when in theory the same standard programs could simply be kept in the computer and called up automatically as needed. Concerned about this dawdling pace of computer memory development, IBM sent Rey Johnson to San Jose to open a new company research laboratory. Johnson recalled:

I was told that my flair for innovative engineering was a major consideration in my selection to manage the new laboratory. During eighteen years with the IBM Endicott laboratory, I had had responsibility for numerous IBM products—test scoring, mark sensing, time-clock products, key punches, matrix and nonimpact printers, and random card file devices. By 1952, I held over fifty patents, some of them fairly good. To be given freedom to choose our projects and our staff made the San Jose laboratory an exciting opportunity, especially since funding was guaranteed—at least for a few years.3

Santa Clara Valley was mostly orchards in those days, and San Jose an agricultural town, but the company was shrewd enough to recognize that something important was taking place just below the area’s bucolic surface. Stanford University was there. So was Hewlett-Packard. And the beginnings of a NASA facility at Moffett Naval Air Base. Ex-soldiers, who had seen the region on their way to the Pacific, were now moving west, armed with their GI Bills. And not least, the Lockheed brothers, having made their fortune in Burbank with airplanes, were now making plans to move back to their childhood home to build rockets and missiles. IBM’s new San Jose laboratory, just outside of downtown in a warehouse near a popular barbecue joint, was but one of many new start-ups being set up along the new Bayshore Freeway south from Palo Alto.

Johnson’s assignment was to gather a team and then investigate all likely new forms of high-volume, high-speed computer memory.

Naturally enough, he and his team began with drum memory, pursuing an alternative design of a rotating drum coated on the inside, in which a single read-write head mounted on an armature raced up and down across the tracks. But the results were unsatisfactory. So the team moved on to magnetic tape loops, magnetic plates, magnetic tape-strip bins, magnetic rods, and even revisited magnetic wire. In the end, the team settled on magnetic disks because of their high surface area, easy rotation, and multiple points of access.

Fortuitously, at almost that very moment a request for a bid came in from the U.S. Air Force Supply Depot in Ohio. The depot had been using a mainframe computer to manage its massive inventory, and despite the obvious advantages in using such a powerful machine to keep track of hundreds of thousands of items, the operators had become increasingly disappointed by the experience. The problem once again was lag time: Because of the “batch” nature of contemporary computing, huge lists of arriving and departing items piled up before the computer’s records could be updated. This meant in practice that the depot’s records, however detailed, were at any given moment already out of date. What the depot wanted was a way to add and subtract inventory on its computer in “real time.”

Johnson and his team, which was looking at similar applications at grocery stores in the Bay Area, believed they had the answer—and set about trying to build a prototype of this “disk” memory. Wrote pioneering tech journalist George Rostky, “When management back East got wind of the project, it sent stern warnings that [the disk project] be dropped because of budget difficulties. But the brass never quite caught up with the cowboys in San Jose.”4

DUCK AND COVER

Even if the theory was good, the actual physical engineering of such a new technology was extremely complicated. The disks, which were aluminum and two feet in diameter, were heavy and they had to be perfectly centered on a tiny spindle. In the initial test the designers were prepared to duck if the spindle snapped and shot a homicidal metal Frisbee around the lab. But it held. Better yet, the entire test construct of 120 disks on a single shaft separated by quarter-inch spacers stayed intact even when rotated at 3,600 rpm.

Then there was the matter of getting the iron-oxide paint—the same as used on the Golden Gate Bridge—to smoothly and evenly coat the surfaces of the disks. One solution—to pour the paint in the center of each disk and let centrifugal force spread it out over the disk’s surface, just like the paint wheels in carnivals—was tried and abandoned. In the end, another engineer on the team found the solution. He showed up at the lab one morning with one of his wife’s silk stockings. Filling a paper cup with the precise amount of paint, he sprayed it evenly through the stocking … and achieved just the right thickness. A variation of that technique became the standard for the industry for years to come.

The final challenge was finding a way to make the read-write heads “float” over the surface of the disks close enough to read the magnetic record below without actually dragging across it and damaging the iron-oxide coating. It was a young UC Berkeley grad student, Al Hoagland, who came up with the solution of pumping air through nozzles in the read-write head to create an air cushion between the two surfaces.

On February 10, 1954, Rey Johnson and his team hooked a keypunch machine to the prototype and entered data onto the disk drive; then they reversed the process and had the same data printed out on punch cards. With characteristic plainspokenness, Johnson wrote into his lab notes: “This has been a day of solid achievement.”

In early 1955, IBM ordered fourteen machines.

The resulting commercial product, the IBM 305 RAMAC, introduced in September 1956, was a milestone in the story of computing, helping to make possible the first real-time mainframe computers. In turn, in the decades to come, they would lead first to minicomputers and workstations, then personal computers and, in the form of servers, the Internet.

Compared to the modern hard-disk drive, which may stuff a trillion bits on a sliver of a disk the diameter of a silver dollar, in a case the size of a matchbook and weighing less than an ounce, the RAMAC was not just primitive but positively gargantuan. The size of a small closet, it contained fifty 24-inch-diameter disks, literally weighed a ton, had to be moved with a forklift, and was delivered via cargo plane. It held a total of 40 million bits. And it was puny compared to the 1961 Bryant Computer disk drive, which held twenty 39-inch platters producing so much centrifugal force that the system had to be bolted to the cement floor to keep it from “walking” across the room.

It also cost $150,000 in 1956 dollars. But for customers—from the first (Chrysler’s MOPAR division) to nearly the last (the 1960 Winter Olympics)—RAMAC was worth every penny, thanks to the cost savings that came from processing turnaround times improved by an order of magnitude.

Johnson went on to lead his team to further improvements of the RAMAC design. In the late 1960s, he entered the story of memory one more time: As a consultant for Sony Corporation he was asked to find a way to make video more available for schools and kids. Concluding that the problem was that Sony’s one-inch videotape was too heavy and unwieldy for small hands, he took a spool of the tape, cut it in half lengthwise to a half-inch width, and then encased the tape in a plastic holder that made it accessible. And thus Rey Johnson, the inventor of the disk drive, also became the inventor of the videocassette.

MYSTERY DISK

By the early 1960s, IBM and competitors such as NCR were racing to create ever-smaller, faster, and denser disk systems. By now they were the size of dishwashers and even featured removable disk packs. And the influence of these new systems was profound. IBM’s latest generations of disk drives were crucial to the operation of its new 360 Series mainframes, which, along with its successor the 370 Series, was arguably the most influential and dominant computer line in history.

But as in the early 1950s, IBM also knew that it was headed for a new development wall if it couldn’t discover yet another quantum leap in memory design. Once again, the answer came from its San Jose laboratory, now under the direction of Ken Haughton. The plan for this new drive, ultimately called the Model 3340, was to create a configuration of two removable thirty-megabyte modules. History has often assumed that IBM gave the development of this drive the code name “Winchester” because of the nearby Winchester Mystery House, located on Winchester Boulevard in San Jose. But in fact it was Haughton who saw the planned configuration and said, referring to the famous rifle, “If it’s a 30-30, then it must be a Winchester.”5 When the 3340 proved to be a technological breakthrough of historical importance, the code name stuck—and even today is used to describe hard-disk drives with similar technology.

The key to Winchester technology was the use of a very light read-write head that could be flicked back and forth across the tracks on a disk almost faster than the eye could see. And the key to doing this was the team’s realization that if the head was sufficiently light and slightly curved like a wing there would be no need for the injected air; instead it would produce enough lift to “fly” 18 millionths of an inch above the surface of the disk (and land in specially designated landing zones on the disk). Thanks to Winchester technology, a user of the Model 3340, introduced in March 1973 for the 370 Series, could find any record on the surface of the disk in no more than 25 milliseconds.6

Two years later, IBM advanced this technology still further by introducing “thin film” heads for these drives. This photolithographic technique put the entire wiring of the read-write head on a tiny sheet of film, which allowed the flying head to be even lighter and smaller. The combination of Winchester technology with thin-film heads was first introduced in the IBM 3380 of 1980.

By then, a whole new disk technology had emerged, most of it led by a “dirty dozen” (as they were called by IBMers) of memory specialists who walked out of the bureaucratic and conformist Big Blue to start their own companies.

The most important of these entrepreneurs was Alan Shugart Jr. Shugart had a long and successful career at IBM and had then jumped for a few years, from 1969 to 1973, to one of Ampex’s biggest competitors, Memorex. Now, having gathered a team and found venture capital money, Shugart started his own company, which he called Shugart Associates—and set out to compete with IBM. That might have seemed the ultimate in career insanity a decade earlier, but IBM, now buried in antitrust lawsuits, was intentionally staying out of new industries and leaving alone new competitors it once might have crushed. IBM 360 architect Gene Amdahl had shown it could be done back in 1970, when he started his own eponymous mainframe computer company.

Three years later, Shugart wanted to pursue the same path—that is, to build a complete computer system, including a central processor, memory, and printer—but unlike Amdahl, he wanted to target the small-business market with a low-cost machine. The timing was perfect: IBM, HP, Wang, DEC, and Data General were all pursuing similar “mini” computers and workstations for both business and scientific applications. Shugart had some clever ideas for how to compete in this market, but as he soon learned, he didn’t have nearly enough capital to compete with these giants. And by 1977, he was out of money, without a finished product to take to market. He would later say that Al’s Law of Business Number One was: “Cash is more important than your mother.”7

Needless to say, this situation resulted in a confrontation between the company’s founder and his investors. Shugart wanted to keep going; the venture capitalists wanted out. As is always the case in such fights, the money won. Shugart always claimed he walked out in frustration; the investors always claimed he was fired. He would later say, “Actually, I don’t know if I got fired or if I quit. A friend told me later that, for a person in my position, the difference between firing and quitting is about five microseconds.”8

The result was the same, and Al Shugart always said that the most painful experience of his life was having to drive every day past the company headquarters bearing his name, knowing he was barred from ever setting foot inside again.

In short order, the investors sold the company to Xerox, which changed the subsidiary’s name to Shugart Corporation. Xerox was notorious in those days for never failing to miss a good opportunity; just up the road in Palo Alto, at its research center, Xerox was about to invent not only the personal computer but also the mouse and the windows-type operating system … only to fail to follow up on any of them. But with Shugart Corporation it actually spotted a prize and pursued it.

A FLEXIBLE SOLUTION

About the time Shugart Associates was being founded, IBM had embarked on a new program to come up with a cheaper, removable version of its now hugely successful Winchester disk drives. The company shrewdly turned to the other major memory medium—magnetic tape—for inspiration. With the disk-drive paradigm before them, the solution to tape’s problem of sequential access now seemed obvious: Just cut the coated film (now Mylar) into a disk instead of a ribbon. Put this disk into a more rigid holder to keep it from flexing, rotate it like a 45 rpm record, and you’d have a low-cost memory medium that could be read with a Winchester-type head, was light and removable, and cost just a few bucks.

IBM introduced the first 8-inch-diameter version of this flexible disk—soon nicknamed the “floppy” for obvious reasons—and its accompanying drive in 1970, and it was an immediate hit. The disks could be filled with data, then removed and filed like folders—at a fraction of the cost of a hard-disk drive. Many small-business customers eventually stuck with the floppies and eschewed hard disks altogether.

By the time Al Shugart and his team were designing the Shugart computer, 8-inch floppies were ubiquitous in the computing world. But they were also showing their limitations. In particular, the very first microcomputers—the immediate precursors of personal computers—were beginning to appear for scientific and electronic-design applications. And it was hard to be “micro” when the peripheral floppy disk drive was the size of a large telephone book.

It was to meet this as-yet-unmet demand for a smaller floppy drive that in 1976 Shugart president Don Massaro and sales director Jim Adkinsson met with a major client to ascertain what he wanted in the next generation of floppies. In particular, they asked, what size do you want the disk to be? Because this was a sales call, and the three were meeting in a bar, the client pointed at a cocktail napkin and said, “That big.” Massaro and Adkinsson took the napkin, measured it, and, finding a pair of scissors, cut out a matching square of cardboard on the way back to the office. They specifically tested it to make sure it was slightly too large for a shirt pocket—they were apparently concerned that the disks would get bent if carried around that way—and then presented this new 5.25-inch size to Al Shugart … who approved it immediately.

*   *   *

Within a year, Al Shugart was gone, but the new stripped-down, Xerox-owned Shugart forged ahead as the first 5.25-inch 360KB flexible-disk-drive company—and made a fortune. Existing computer users loved the new smaller size because it made it easy to carry programs from one machine to another or to quickly download work from memory for filing. But even more important, the personal computer revolution was now under way. Apple introduced the landmark Apple II in 1977—and within months had scores of competitors. For this first wave of personal computers, hard drives were out of the question: Not only were they as big as the computers themselves, minus the displays, but they typically also cost as much as the rest of the computer—all for about ten megabytes of memory storage. In the new 5.25-inch floppies, the PC makers found the perfect storage medium.

By 1978, Shugart Corporation had more than ten competitors, all building comparable drives. The race was now on to see how much data could be stuffed onto each disk; and that would be a function of the recording density of the disk’s surface and the speed of the read-write head.

ODD MAN OUT

But the most influential figure in the disk-drive industry was out of the game. Unemployed, other than a few pickup consulting jobs, Al Shugart moved over the hill to Santa Cruz and the life of a prosperous beach bum: “I bought a house on a cliff overlooking the ocean—wonderful place, pool and everything.”9

With some partners, Shugart bought a bar in Santa Cruz and ended up spending part of his time slinging drinks or cleaning up at closing time.

“I had a good time. I bought a fishing boat and was fishing for salmon and albacore, and selling it.… My day started overlooking the ocean, hearing the water and so forth. I didn’t have to be at work at eight o’clock in the morning, so I therefore could miss the traffic. I would go at ten or I could go at five.…”10

Al fished commercially out of Santa Cruz, and in time moved up to San Francisco Bay, where he would often deliver his daily catch to Fisherman’s Wharf. The tourists, seeing the stocky man with the shock of graying hair hauling a load of fish over his shoulder, had no idea that they were looking at the man who had already changed their work and would soon transform their lives.

“I always thought that I enjoyed life more than everybody else. So it doesn’t bother me if somebody drives by in a Mercedes and I’m in an old fishing boat. I’m sure that I was enjoying life more in my old fishing boat.… I never felt sorry for myself. I think about the bar and the fishing boat sometimes.”11

It all sounded good, and the sunrises over the Bay Bridge were beautiful … but for a born entrepreneur like Al Shugart, it was privately excruciating not to be back in the game, especially as he began to elaborate a vision of where computer memory needed to go next. Finally, in 1979, he teamed up with a group of industry veterans, including his old partner at Shugart Associates, Finis Conner, and founded a new disk-drive company. They called it Shugart Technology, only to hear from Xerox that the new start-up could not be named after its own founder. So they changed it to Seagate Technology and set up shop in the Santa Cruz Mountains, halfway between Al’s beloved Santa Cruz and his despised Silicon Valley.

Shugart and Conner shared a common vision of where they thought computer memory needed to go in the personal-computer era—and it was back to hard disks, with their immensely greater storage capacity. The trick, they realized, was to figure out how to put a Winchester hard-disk drive into the now-established shallow form factor of the current 5.25-inch floppy drive.

In 1980, Seagate introduced the ST-506, the first hard disk to fit into the standard PC disk-drive bay. It held five megabytes—ten times the capacity of the standard floppy of the era—and was soon followed by a version that held ten megabytes. When IBM chose the Seagate drive for its IBM PC XT, the first personal computer from the company to use a hard disk, Seagate’s fortunes were made. By 1993, the company shipped its 50 millionth drive; by 2008, it was one billion drives—and the 56,000-employee company had annual revenues of more than $10 billion.

But Seagate wasn’t the only competitor chasing the fortunes of supplying hard-disk memory for the hottest consumer electronics product of the age. By the time this new disk-memory race was raging, Silicon Valley—and more important, Silicon Valley’s venture capital industry—had matured into the most efficient incubator of new entrepreneurial start-ups the world had ever seen. A smart team with a good product idea could almost always find not just the capital but also the personnel, the manufacturing, and the marketing it needed to ramp up fast.

It all came together at the beginning of the 1980s, setting off the biggest new-company land rush high-tech had yet seen. Within twenty-four months of Seagate’s announcement, an estimated 250 new 5.25-inch hard-disk-drive companies had been founded—all of them pursuing a dominant share of the market. Of course, that was impossible, and by 1997, an estimated 210 of those companies were already out of the business, most of them shuttered for good.12 By the new century, the number of competitors was down to fewer than a dozen, with Seagate still standing as the world’s largest independent disk-drive company.

DATO

By then, though, Al Shugart was gone again. In July 1998, he officially resigned all of his positions at Seagate—“I was fired,” he said a few years later. “The board told me it was time for change. That was the only reason I was given.”13

The only man ever to be fired from two billion-dollar companies that at some point bore his name, he now founded Al Shugart International, a boutique angel-investment/executive-consulting firm of fewer than a dozen employees that was largely a platform for Shugart to pursue anything that interested him. In a valley of characters, he was one of the most famous, and this last phase of his career allowed him to indulge his opinions and eccentricities without worrying about disapproving boards and unhappy shareholders.

“I have always been an independent cuss. That’s part of being an entrepreneur. The only two companies I’ve ever been fired from were the two companies I started.”14

Shugart took to wearing Hawaiian shirts, as did his staff—mostly pretty women. He reveled in being named a dato, a Malaysian honorific, and the respect it earned him in that country. A natural libertarian (“I object to politics generally”), he nevertheless ran his dog, Ernest, for Congress from the Monterey area in 1996 in order to shake up local voters from what he thought was widespread apathy. It drew national attention. So did Shugart’s more serious initiative to officially add “None of the Above” to all California state ballots. He was also a pioneering supporter of simplified tax forms and campaign finance reform … all in an effort, he said, “to get more people to get more active in politics.”

“I think I’m doing some good. If I didn’t think I was doing some good, then I wouldn’t like it. If the politicians don’t like it, then I know I’m on the right track.”15

Looking back in 2001, he mused, “I really enjoyed success, and not just my success. I like to do things well. Doing things well in the disk-drive business was very challenging, but we did things well. But I like other peoples’ success too, and so when I see all these kids starting companies and becoming billionaires, I’m happy for them. That’s success.”16

Al Shugart, the most indelible figure of the computer disk-memory industry, died in December 2006 of complications from heart surgery. One of his last public images was a Christmas card showing the dato himself, grinning and surrounded by his staff, all wearing Hawaiian shirts.

The disk-drive industry continued, of course, and to even greater glory, but henceforth, robbed of its only celebrity, it would be all but anonymous.

The drives themselves grew ever faster, of greater capacity, and cheaper. In the race to keep up with the tireless demands of the personal-computer industry for more performance, some companies (such as Maxtor) tried the old trick, dating back to RAMAC, of stacking multiple disks inside a single drive. In just eleven years, from 1980 to 1991, disk-memory technology advanced at a staggering pace: Al Shugart’s Seagate ST-506, with its 5.25-inch disks and five megabytes of memory, had cost $1,500; less than a dozen years later, multiple companies were building 2.5-inch disk drives containing one hundred megabytes of memory for half that price. A year after that, Hewlett-Packard raised the ante with a 1.3-inch disk drive—the size of a quarter—in a case not much bigger than a large postage stamp.

These breakthroughs had two important effects. First, they effectively killed the floppy disk as a standard memory format. Floppy makers, struggling to keep up all through the 1980s, introduced a 3.5-inch version in a more rigid plastic case that ultimately reached a 1.44-megabyte capacity. But though there were later attempts to create a higher-capacity version, the game was up by the mid-1990s, when Apple Computer, which had been the last holdout against an internal hard-disk drive, finally made the move. The second important effect of the newer, smaller, and higher-capacity hard disks was that they (along with flat-panel displays) made possible the revolution in laptop computing, smart phones, and other consumer products.

FREEDOM COMES FREE

Memory, which during the era of mainframes and magnetic core had been one of the most expensive and largest parts of the computer, had now become one of the cheapest and smallest. RAMAC’s memory had cost users $150 per megabyte to rent per month; by 2000 that had been reduced to a purchase price of just $0.02 per megabyte. A decade later, it had fallen—on a two-terabyte, 3.5-inch, five-platter disk drive—to less than one-thousandth of a cent per megabyte. In 2011 it was possible to purchase an internal 2.5-inch Seagate hard drive with a capacity of 750 gigabytes for less than $100—a device so small it could fit easily into a handheld game player.

Meanwhile, it was slowly dawning on big Internet service companies, such as Google, that their giant server facilities around the world contained trillions of bytes of disk-memory storage that sat essentially unused most of the time. So they began to devise new services built on this amorphous cloud of unused storage that their users could access essentially without cost. The first, and most famous, of these cloud applications were Google Mail—Gmail—in 2004, and Google Maps a year later. Soon, new Internet cloud services were available from numerous companies, ranging from low-cost versions to more secure, but more expensive, ones.

This staggering drop in the price of disk storage made artificial memory essentially free for the first time since the Renaissance memory artists—and this time, everybody could take advantage of the opportunity. That in turn created a paradigm shift in the relationship between human beings and stored knowledge that has only begun.

For one thing, “free” memory only accelerated the complexity arms race in software and applications that had been going on almost from the introduction of the first commercial computers. Even as computers had grown smaller and more personal, their growing processing power and memory size enabled them to take on ever more tasks—computation, bookkeeping, networking, word processing, spreadsheets, desktop publishing, personal communications, games, multiplayer gaming, streaming video, 3-D graphics, and on and on—that only whetted the desire of consumers for even more.

PC owners in 1980 dreamed of owning a five-megabyte hard disk but wondered if they’d ever need that much storage; their children took the terabyte disks—capable of holding every word ever written in Imperial Rome—that came with their laptops and worried that they might run out of memory for all of their games, videos, and photos. Free memory meant free rein for the imagination in creating computer experiences.

At the other end of the scale, all of this free memory, combined with ever-faster processing speeds, liberated scientific researchers to imagine and then tackle tasks that were once almost beyond human imagination, such as modeling every air molecule in a storm or the neutrons in an atomic explosion, creating virtual realities that were indistinguishable to the human eye from the natural world, practicing medicine on fully functional “virtual” patients, mapping the entire human genome—or modeling the operation of memory in the human brain.

MISSING LINKS

But it was in between these two extremes of Big Science and small consumers—within the everyday operation of companies, agencies, and universities—that the biggest and most important effect of free memory took place.

Almost from the moment there were two computers in the world, their operators have wanted them to talk to each other. There are, after all, a number of reasons for having computers “network”: It cuts out the expensive and time-wasting middlemen of card and paper printers and readers; it allows computers to “talk” at the rocket speeds at which they operate; and it makes possible the sharing of tasks to cut down overall operating time. All of these advantages were best captured by Robert Metcalfe—himself the legendary coinventor (at Xerox in 1973) of the landmark networking protocol Ethernet—when he noted that the value of a computer network seems to increase as the square of the number of connections on that network. That is, the bigger the network, the very much bigger its usefulness.
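
Metcalfe's observation comes down to a line of arithmetic: among n machines there are n(n - 1)/2 possible pairwise links, a quantity that grows roughly as the square of n. A short sketch in Python (the network sizes are arbitrary) makes the point.

```python
def possible_links(n: int) -> int:
    """Number of distinct pairwise connections among n machines."""
    return n * (n - 1) // 2

for n in (2, 10, 100, 1_000):   # arbitrary network sizes
    print(f"{n:>5} machines -> {possible_links(n):>7,} possible links")

# Doubling the number of machines roughly quadruples the number of possible links.
```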

It was an implicit understanding of this principle that had led researchers as early as the late 1930s to experiment with remote accessing of computers. The pioneer of this field was George Stibitz, a Bell Labs researcher who had already played a key role in applying Boolean algebra to computer circuits. On September 11, 1940, at a meeting of the American Mathematical Society at Dartmouth College, Stibitz used a teletype machine to direct some computational work on his self-made Complex Number Calculator computer back at his office in New York.

By the late 1950s, the U.S. military was using primitive networking to share data from its many radar-control systems, while a pair of mainframe computers, owned by American Airlines, were linked together to create SABRE, the forerunner of the modern airline reservation system.

This was followed by a series of breakthroughs in the mid-1960s that would make global networking possible at last. The first of these, in 1964, came out of Dartmouth, where a team of researchers created “time-sharing”—the ability of multiple remote users (typically armed with an acoustic coupler modem for their telephone and a teletype machine) to take turns accessing a remote computer. Time-sharing would prove to be the inspirational first experience that many of the pioneers of the personal-computer industry—notably Steve Wozniak—would have with “home” computing. It was in a quest to duplicate this childhood experience that many fabricated their first computers.

At about the same time, Joseph Carl Robnett “Lick” Licklider, a brilliant multidisciplinary scientist, presented a paper, which he entitled “The Intergalactic Computer Network,” to the employees of the U.S. Department of Defense Advanced Research Projects Agency (ARPA), which he would soon join as a director. The title of the paper was meant to be a joke, but its core message wasn’t. And among the young scientists it inspired was program director Lawrence Roberts, who turned his team to the task of creating such a global network. Meanwhile, working in parallel on a way to cluster data into “packets,” transmit them over the shortest available network pathways, and reassemble them at the target location, were Paul Baran at the RAND Corporation and Donald Davies at the UK’s National Physical Laboratory.
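
The core idea the two men converged on can be sketched in a few lines: chop a message into numbered packets, let each packet travel independently (a shuffle stands in for packets taking different routes), and reassemble them by sequence number at the destination. This is an illustrative toy in Python, not a rendering of either man's actual design.

```python
import random

def to_packets(message: str, size: int = 8):
    """Split a message into numbered, fixed-size packets."""
    count = (len(message) + size - 1) // size
    return [(i, message[i * size:(i + 1) * size]) for i in range(count)]

def reassemble(packets):
    """Put packets back in order by sequence number and rejoin them."""
    return "".join(chunk for _, chunk in sorted(packets))

original = "Packets may arrive by different routes and out of order."
packets = to_packets(original)
random.shuffle(packets)                 # stand-in for independent, unpredictable routing
assert reassemble(packets) == original
print(f"{len(packets)} packets delivered and reassembled correctly")
```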

It was Davies who coined the term “packet switching.” And when, in 1969—the annus mirabilis of the digital age—ARPA (now DARPA) set out to tie together the computers at government agencies, research laboratories, and universities into a common network called Arpanet, it was Licklider’s vision, Roberts’s networking architecture, and Baran’s and Davies’s packet switching that made it all work. Their inventions would enable that network to grow, in the 1980s, into the Internet—with the help of a final critical invention by Robert Kahn and Vinton Cerf: the Internet Protocol Suite, TCP/IP.

It would also be Baran’s and Davies’s packet switching that would make possible the global cellular telephony industry. Baran would go on to become a Silicon Valley legend, founding four companies that were each valued at more than $1 billion. He was working on his newest company the day he died of cancer, at age eighty-four, in 2011.*

DISK TO DISK

For the billions who today use the Internet on a daily basis, the Net seemed to spring fully formed on their computers in the mid-1990s. But the forgotten decade preceding that global rollout was vitally important. And it depended heavily on the ongoing race to build ever more powerful disk drives.

In operation, the Internet requires a hierarchy of computers—both small and large—to function. The small computers, mostly PCs and smart devices, act as the access points to the Net; the large computers sit at the crossroads of the data flowing around the network and manage the traffic. That’s the simple version. A more accurate description is that the disk memory at the individual nodes (managed by special software in the PCs) communicates with big disk drives (enterprise hard disks) managed by specialized computers (servers) organized by the score in big warehouses (server farms), which in turn communicate with other specialized computers designed to manage the flow of data rather than process it (routers).

Thus, another way of looking at the rise of the Internet in the 1980s is that it is the story of getting sufficient processing and memory power into home and office computers via small, inexpensive disk drives; building full-size disk drives powerful enough (high-capacity, very fast access speeds, and 24/7 reliability) to manage huge data flows; inventing the new routers and other hardware—the best-known manufacturer being Cisco—needed to manage this infrastructure; and developing the standards, software, and applications required to make all of this work smoothly and, for end users, intuitively.

For the end-user experience, there were several key players who built upon the work of their predecessors. The first was Tim Berners-Lee, a scientist at the European CERN nuclear research center in Geneva, Switzerland, who, in 1991, first proposed the simplified Internet addressing architecture that became the World Wide Web—and made the Internet at last accessible to consumers. A year later, a team at the National Center for Supercomputing Applications at the University of Illinois at Urbana–Champaign embarked on a project to create a graphical “browser” to simplify access to the growing number of websites on the Internet. The result, Mosaic, was introduced in 1993. Almost immediately, a team of code writers from the Mosaic project, led by Marc Andreessen, teamed with workstation tycoon Jim Clark to start Netscape.

Netscape’s Navigator browser proved so popular that the company became a business superstar like Apple, Intel, and Ampex before it—which was enough to capture the attention of the biggest software company in the personal computing world: Microsoft. Bill Gates and his team in Redmond, Washington, who by now utterly dominated PC operating systems, had been caught flat-footed by the Web and Netscape Navigator. So they set out, by any means necessary (including bundling their own new browser into the Microsoft Windows operating system), to crush Netscape.

Microsoft succeeded, at the cost of a federal investigation. But it would fail to do the same with the next great Web application. This was the “search engine,” which appeared in many forms in the 1990s in response to the need to manage the growing list of thousands of new websites cropping up every month.

Nearly all of these early search engines had a fatal flaw, however. Many weighted their search results in favor of their advertisers. Others, such as the otherwise hugely successful Yahoo!, gave priority to their own select lists of sites.

In the end, it was two Stanford students, Sergey Brin and Larry Page, who came up with a search engine that ranked sites only by their relevance to the search query and by the number and importance of the other sites linking to them. This proved to be the magic recipe for search, and Brin and Page’s company, Google, was founded in 1998. Their single most important executive decision was to hire Eric Schmidt, one of Silicon Valley’s most brilliant technologists, to run the company. Schmidt, who had been beaten in the marketplace twice by Microsoft (at Sun Microsystems and Novell), came up with a strategy to hold off Gates and company—and managed to pull it off. A decade after its founding, Google still owned more than a 90 percent share of the world’s Internet searches. Google became the defining firm of the era, one of the most valuable in U.S. industry, and the template for the dot-com boom that began at the end of the 1990s and firmly established the Web as an inextricable part of the lives of most people on the planet.
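
The ranking idea at the heart of their engine, later named PageRank, can be approximated in a few lines of power iteration over a link graph. The tiny graph, damping factor, and iteration count below are made up for illustration; they are not Google's data or parameters.

```python
# A minimal power-iteration sketch of link-based ranking in the spirit of PageRank.

links = {            # page -> pages it links to (a hypothetical miniature web)
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping: float = 0.85, iterations: int = 50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)   # a page spreads its rank evenly
            for target in outgoing:              # across the pages it links to
                new_rank[target] += damping * share
        rank = new_rank
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
# "C" comes out on top: it is the page most other pages point to.
```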

THE LAST TRACK

Jon Rubinstein, Apple Computer’s chief of hardware engineering, faced what seemed an impossible challenge. Apple’s cofounder, the brilliant and mercurial Steve Jobs, had returned to the company after a twelve-year hiatus just a few years before and had quickly revitalized Apple with a series of astonishing new computer designs. But now, at the turn of the millennium, Jobs wanted to turn the company’s attention toward other emerging opportunities in the consumer hardware business.

The public often thinks of Steve Jobs as a genius inventor like the young Tom Edison—something neither Jobs nor Apple did anything to correct. He never was one, in fact, and usually depended upon others of greater technical facility—Steve Wozniak, Jef Raskin, and others—to do the creating. Instead, Jobs was more like the older Edison: an impresario of invention—perhaps the greatest ever—setting out a vague idea of what a new product should be; creating an environment that supported risk-taking, attention to style, and the user’s experience; and then using his own reputation and charisma to give Apple unmatched marketing power.

One of the secondary effects of the disk-memory race and the rise of the Internet in the 1990s was that it became possible for the first time for users to easily swap very large chunks of memory—games, images, and, most important for college students, music files. Swapping music files violated copyright laws, but students, driven by the technical imperative that “any good new technology will find its users,” flouted the law by the millions, especially when new websites—most notably Napster—emerged to simplify the process.

Soon the music industry was suing Napster while the FBI was arresting some of the more egregious music file pirates. But as with Prohibition seventy years before, the craze only went underground … and grew. Steve Jobs watched this trend and, while other big companies kept their distance, he saw a gigantic opportunity. Jobs was no stranger to illegal activities—he and Steve Wozniak had begun their tech careers as sellers of illegal telephone hacking equipment—so he had a unique perspective on how the piracy mess would resolve itself, and he planned to put Apple right in the middle of that solution.

Jobs knew that to do so would require a two-part strategic play. On the one hand, he had to co-opt the increasingly paranoid and litigious music industry, which was watching its once-hugely profitable business being crippled by a new technological paradigm, the MP3 music file, and an entire generation of young bootleggers circumventing the rules of copyright and the marketplace. His solution would be to use his leverage as chairman of the motion picture company Pixar and as the most famous figure of the consumer digital revolution to propose a compromise: the creation of a legal online shop for downloadable music files, which he would call “iTunes,” that would charge a fee for songs that, while low by music-industry standards, would also be cheap enough to convince millions of young people to abandon their criminality and turn to a legitimate source.

But that was only half of the strategy. Even as Apple positioned itself as the key content supplier for digital music files, it also wanted the hardware business as well—a brand-new industry whose clunky, oversized products were perfect targets for the Apple style.

And that’s where Jon Rubinstein entered the story. As Jobs explained it, he wanted a device small enough to fit into a shirt or jeans pocket, with an elegant touch control and small but crisp screen, a headphone jack, a nonremovable battery that could be recharged on a tiny dock, and a standard Apple FireWire interface to enable the device to download from an Apple computer (and also recharge from it). All of that would be tough, but doable, thought Rubinstein. But then the clincher: Jobs wanted this device to cost only a couple hundred dollars, while still able to put—as Jobs would eventually say—“A thousand songs in your pocket.”17

Rubinstein gulped on that last bit. He had followed the memory industry long enough to know that there wasn’t a single micro hard-disk drive in the world capable of fitting into a case that size while still having the gigabytes of memory needed to hold that much information. The only good news was that Jobs, who had been notorious in the past for backing the wrong type of memory (for example, the experimental laser-based “magneto-optical” memory drive in the NeXT computer), had left the choice up to Rubinstein.

So as he started the program, one of the first tasks Rubinstein set for himself was to find a manufacturer that would be willing to build a drive to these unique specs: a disk less than two inches in diameter, in a drive no more than two inches wide with proprietary connectors, capable of holding one gigabyte of memory. Rubinstein knew that this was asking the almost impossible, but he assumed the Apple name would at least spark some interest among the two dozen disk-drive companies in the world.

Boy, was he mistaken. He was dismissed, laughed at, met with stunned silence, and even on one occasion hung up on by a major manufacturer who thought it was a prank call. After having no luck with the first tier of manufacturers, a desperate Rubinstein started calling the also-rans. In the end, the only company to show interest was the big, diversified Toshiba of Japan. Big as it was, Toshiba was the least likely of suppliers: It was only in the disk-drive business (mostly through a relationship with Fujitsu) in support of its personal computers and servers, and it had zero reputation in disk memory for innovation.

Rubinstein realized, however, that Toshiba was the only game in town, and the Japanese giant got the contract. Happily for Rubinstein, Toshiba delivered the new little minidisks on time.

The resulting product—born of Steve Jobs’s vision and Jon Rubinstein’s pragmatism—was, of course, the Apple iPod: the first great product of the new age of consumer electronics, setting the stage for the iPhone and iPad to follow. It was the start of perhaps the greatest run of landmark new consumer products since Edison himself. Introduced in late October 2001, the iPod got off to an almost invisible start thanks to slow delivery to retailers, an economic recession, and the distraction of the 9/11 terrorist attacks in New York City. But by the end of the decade, the iPod was a phenomenon of historic proportions: As of October 2011, 320 million iPods in various models and configurations had been sold by Apple Computer.

But what had been seen as the ultimate triumph, the zenith, of the hard-disk drive would prove in time to be the beginning of the end of its era. The early generations of the iPod “classic” design would continue to use the Toshiba 1.8-inch drive. And when Apple decided to downsize the device with the iPod “mini,” it too would contain a hard disk—this one just an inch in diameter—from Hitachi and Seagate.

But that was it. In September 2005, Apple introduced the iPod Nano—a tiny MP3 player that was half the size of the original, yet still contained up to four gigabytes of memory. But there would never again be a disk drive in the iPod or any of its successors. Now the memory of choice would be “flash” memory chips. After fifty years of chasing magnetic memory, semiconductor memory had (for most applications) caught up at last.

IN THE CHIPS

The history of the semiconductor industry is the best known in tech, probably because it is the most venerable, because the technology is fundamental to everything else in electronics, and, most of all, because it contains the most remarkable characters.

But within that larger tale lie a number of other, less well-known narratives, not least that of semiconductor memory.

A quick history: The semiconductor revolution, the defining technology driver of the twentieth century and beyond, began with a lecture in 1940, given at Bell Laboratories in New Jersey. There, the speaker, researcher Russell S. Ohl, began by showing a small slab of silicon with a wire attached at each end. Ohl then shone a flashlight onto the middle of the silicon … and to the amazement of the assembled scientists, electrical current suddenly passed through the slab, normally a poor conductor. A conducting path had opened, Ohl explained, because the silicon wasn’t pure, but rather “doped” with impurities like boron and phosphorus from the third and fifth columns of the periodic table.

Ohl went on to explain that when the energy from the flashlight beam had hit the center of the slab, these dopants had given off electrons in such a way that the silicon had become a conductor—a kind of “gate” that closed again when the light was switched off. Because of these attributes, Ohl called this doped silicon a “semiconductor.”

Two of the scientists in the audience, John Bardeen and Walter Brattain, walked out of Ohl’s demonstration convinced that they’d seen a possible answer to the biggest practical electronics challenge of the day: creating a replacement for the vacuum tube, which was becoming too delicate, too slow, too hot, and too energy-hungry for the growing number of tasks and environments in which it was being used. Bardeen and Brattain agreed that if they could create a functional, solid-state on-off switch using this new semiconductor technology, they would revolutionize electronics.

But before they could get started, World War II broke out and both men were assigned to other, more immediate concerns. Ironically, the war demonstrated more than ever—on the battlefield, in warships, in airplanes, in jungles, in deserts, and in snow—that the world desperately needed a replacement for the vacuum tube.

With the war’s end, Bardeen and Brattain finally got back to their project, and over the next two years they labored to create a workable semiconductor circuit. Ultimately, facing some recalcitrant problems with the physics of the device, they turned to one of their compatriots for help. That scientist, William Shockley, was considered not only the most brilliant scientific mind at Bell Labs but, some said, one of the greatest since Newton. It was with Shockley’s help that Bardeen and Brattain finally built a working circuit on December 23, 1947. It looked like a tiny arrowhead of quartz embedded into a slightly larger sliver of germanium, with both components trailing wires. Electricity flowing into the quartz arrowhead acted as a valve on electricity passing through the germanium sliver, turning it on and off in the ones and zeros of computing’s Boolean algebra.18

This was the transistor, often hailed as the most important invention of the twentieth century, and when it began to appear in commercial applications in the early 1950s, the little germanium junction was hidden under a metal cap (“can”) atop a tripod of “leads” that were extensions of those three controlling wires. Bardeen, Brattain, and Shockley were rightly awarded the Nobel Prize.

In the story of memory, the invention of the transistor can be compared to that of the book, or even printing, in terms of its influence. And what made it astounding was that it was so simple. As Gordon Moore, one of the most famous figures in the history of electronics, would point out, part of the miracle of the semiconductor device was that it was literally so elemental. Made of one of the most common substances on the planet, silicon sand, it was forged out of fire, rusted with oxygen, and purified with water—hearkening back to the pre-Socratic Greek philosophers and their belief that the universe was composed of fire, earth, water, and air.19

That is to say, the heart of the transistor was as tough and enduring as the rock it was made from, which meant that, unlike De Forest’s tube, it could endure great heat and cold, the pressure at the bottom of the ocean, and the vacuum of outer space—even, sometimes, the radiation of an atomic bomb. Left alone, it was almost immortal, vulnerable only after centuries to the effect of cosmic rays. Almost as important, it required little power to operate and gave off comparatively little heat.

The transistor—replacing tubes in everything from mainframe computers to test and measurement instruments to portable radios—transformed the world of electronics by making possible smaller, more durable, and more efficient devices than ever before. It also set off a gold rush of new and old companies chasing the potentially unlimited wealth to be made as a transistor manufacturer.

One of these new transistor companies was founded in 1956 by Bill Shockley himself. He had left Bell Labs and come home to Palo Alto to be close to his ailing mother and to start his own company, Shockley Semiconductor Laboratory. There Shockley planned to improve upon the Motorola Company’s newly discovered use of silicon as a replacement for the more expensive germanium.

Such was Shockley’s reputation that, when he put out word that he was looking for top talent for his new company, he was deluged with applications. In the end, he selected the eight most talented young physicists, chemists, and electronics engineers from around the United States to join him and build the world’s finest transistors.

But while Shockley may have been a great scientist, he was a terrible boss, and he only grew crazier (with his racist IQ theories and “genius sperm bank”) as the years went on. It wasn’t long before his eight young scientists had had enough of his paranoia and belittlement and conspired to resign en masse and start their own company.

Eventually, in a search process that would create the modern venture capital industry, the “Traitorous Eight” (as Shockley called them) would find an investor in the defense contractor Fairchild Camera and Instrument, after the leader of the Eight, Robert Noyce, made an impressive and impassioned speech to Sherman Fairchild about the potential for silicon chips. It would be Noyce who would not only lead the new Fairchild Semiconductor but devise the technology that would soon make the little Mountain View, California, operation into the most important company of the postwar world.20

THE PLANAR TRUTH

By then, 1957, change was in the air in the semiconductor world. Five years earlier, British scientist G. W. A. Dummer had predicted:

It seems now possible to envisage electronic equipment in a solid block with no connecting wires. The block may consist of layers of insulating, conducting, rectifying and amplifying materials, the electrical junctions connected directly by cutting out areas of the various layers.21

Almost from the day of Fairchild’s founding, the Traitorous Eight—especially Bob Noyce—were already pondering how to make Dummer’s vision real. But Fairchild Semiconductor wasn’t the only enterprise pursuing advanced transistor technology. Another was Texas Instruments. There, in the sweltering summer of 1958, when veteran employees were allowed to leave the office in the heat, a new employee named Jack Kilby was required to stay. Bored, he decided to write down in his journal some notes on his own solution to Dummer’s ideas. By the end of that summer, he had built a prototype of this germanium circuit and earned a patent for his design.22

At almost the same time, Noyce and his team at Fairchild were pursuing their own vision of this circuit, this time in silicon and based on a very different Noyce design. Like Kilby, Noyce understood that if this circuit could be made as a flat “sandwich” of silicon or germanium and metal conductors, it might ultimately be possible to put more than one such transistor on a single chip and link them together—that is, into an integrated circuit. Noyce divided his small staff into two teams, one under Jean Hoerni, the other under Gordon Moore, and set them to work finding a viable way to fabricate this design.

It was Hoerni who came up with the solution, which he called the “planar process,” and it would define semiconductor fabrication for the next half-century. The breakthrough of the planar process was not just that it achieved the flat structure needed for the integrated circuit but that it did so with a manufacturing process that was most akin to printing. A thin wafer of silicon was coated with a photoreactive material much like that used by Talbot in photography more than a century before, and then exposed to ultraviolet light passing through a stencil-like mask containing the image of one layer of circuitry. The unexposed photoresist was then washed away, the uncovered material etched with acid, and the remaining image baked into place in an oven. This process was then repeated with the next layer of circuitry—and so on, up to twenty or more such layers. Then a layer of metal conductor was plated onto the surface of these many layers, reaching down through holes to the lower layers, to create the equivalent of interconnecting wires.23

What made the planar process so important was that this same photolithography technique could be used to put not just one but multiple interconnected circuits on the surface of a silicon wafer.

The first Fairchild planar transistor—successor to the company’s original “mesa” design—looked like a tiny bull’s-eye. IBM bought 150 of them, as much to study as anything else. But within two years, Fairchild had managed to stuff four transistors on the surface of a chip … and the number was soon doubling at a breakneck pace.

History would recognize Noyce and Kilby (the latter winning a Nobel Prize after Noyce’s early death) as the coinventors of the integrated circuit—the transistor’s descendant and rival for the title of Invention of the Century. But it was Hoerni’s breakthrough with the planar process that made the subsequent “computer chip” revolution possible.

By the early 1960s, Fairchild, armed with its IC technology, was the hottest young company tech had seen since Ampex a decade before. It was also perhaps the greatest collection of young entrepreneurial talent ever assembled. They were brilliant, talented, young, and wild—and the hard-drinking, skirt-chasing, rule-breaking Fairchild soon gained a reputation, one that still stands, as the wildest company in Silicon Valley history. An eternal “what-if” in high tech is to ask what would have happened if that original Fairchild crew had managed to stay together, given that its employees would go on to create a trillion-dollar semiconductor industry as well as play key roles in other industries of almost equal size, including computer games, cellular telephones, displays, and personal computers.

But Fairchild was just too volatile to remain intact for long … and when, in 1967, the parent company refused to grant stock options and allow its California employees to share in the riches they’d created—and wasted much of that division’s profits on failed ventures—Fairchild Semiconductor shattered, scattering talent all over the Valley that eventually coalesced into an estimated one hundred new chip companies, including Intel, National Semiconductor, Advanced Micro Devices (AMD), and Zilog. With this explosion of new chip companies, the modern Silicon Valley was born. Visiting reporter Don Hoefler, noticing all of these new chip companies, gave the place its name.

CHANGE AS LAW

By the time of Fairchild’s great hemorrhage of talent, the semiconductor industry had already begun to divide into separate market sectors, largely congruent with the separation of operations within computer architecture. Thus, one part of the industry pursued the logic chips used in computer processors (this was Fairchild’s specialty); another, the memory chips used to provide on-board storage for regularly used information that didn’t go out to the disk drive; a third, the input-output (I/O) chips, often containing both digital and analog circuits, that managed the flow of data in from terminals and other sources and out to printers and other networked computers; and a fourth, the linear, or analog, chips (such as amplifiers and voltage regulators) that handled the flow of electricity around those other chips.

Of these four types, only memory chips seemed to progress at a steady rate. They were more monolithic in their design, lending themselves to greater miniaturization, and they didn’t require the complex fabrication of I/O chips or the burst of individual design genius typically found in the linear world. And they were more universal in application than logic chips, whose primary market at the time was mainframes and minicomputers.

So systematic was the onward march of memory chips that it caught the attention of Gordon Moore at Fairchild. In 1965, having been asked to write an article for Electronics magazine, Moore sat down with a piece of graph paper and began to plot the performance of chips versus the date of their introduction. He chose memory chips because of their success, and quickly switched to logarithmic paper when he realized just how fast the progress had been. Moore had just a few data points—the integrated circuit was only seven years old at this point, and the most powerful memory chips of the day held only about 64 transistors—but a trend was already clear. Even then, Moore was stunned to see that the points on his graph were arrayed in a straight line. Integrated circuits, it seemed, were doubling in performance (capacity, miniaturization, price) roughly every year. If this trend continued—and there was no reason it wouldn’t, Moore wrote—this endless doubling (like the grains of rice on the chessboard in the old tale of the clever man requesting payment from the emperor—1, 2, 4, 8 grains, and so on) would result in unbelievable gains in the years ahead. Moore predicted that by 1975 a single memory chip might hold 64,000 transistors.24
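The arithmetic behind that projection is easy to check. Here is a minimal sketch, purely illustrative: the starting count of about 64 components and the roughly annual doubling are taken from the description above; everything else is an assumption.

```python
# Illustrative check of the doubling arithmetic described above.
# Assumption: roughly one doubling per year, starting from about
# 64 components on the most advanced chip of 1965.

components = 64
for year in range(1965, 1976):
    print(year, components)
    components *= 2

# The loop prints 64 for 1965 and 65,536 for 1975 -- ten doublings,
# which is the "64,000 transistors" figure Moore projected.
```

Ten doublings in ten years is also why the chessboard comparison is apt: the gains compound rather than merely accumulate.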

History proved Moore’s prediction to be uncannily accurate. By then, this regular doubling of chip performance (Moore would later revise the pace to roughly every two years) was being called “Moore’s Law.” In fact, it wasn’t really a scientific law—like, say, Metcalfe’s description of the power of growing networks—but rather a kind of social contract between chip makers and their customers … and eventually with humanity … to maintain this doubling as long as possible with every bit of investment, management focus, and creativity they could bring to bear. The world, in turn, tacitly agreed to buy each of these succeeding generations of chips at a premium price and use them to drive newer and more powerful generations of consumer, industrial, and military products.

This unlikely relationship, between the chip industry and everyone else, has proven to be—in terms of advancing human wealth, health, and innovation—one of the most fruitful in history. Gordon Moore had hoped that his law might last a decade. Now, a half-century later, with semiconductor companies still struggling to maintain its momentum, it can be said that the global economy has now become the very embodiment of Moore’s Law:

Today … it is increasingly apparent that Moore’s Law is the defining measure of the modern world. Every other predictive tool for understanding life in the developed world since WWII—demographics, productivity tables, literacy rates, econometrics, the cycles of history, Marxist analysis, and on and on—have failed to predict the trajectory of society over the decades … except Moore’s Law.

Alone, this oddly narrow and technical dictum—that the processing speed, miniaturization, size and cost savings of integrated circuit chips will, together, double every couple years—has done a better job than any other in determining the pace of daily life, the ups and downs of the economy, the pace of innovation and the creation of new companies, fads and lifestyles. It has been said many times that, beneath everything, Moore’s Law is ticking away as the metronome, the heartbeat, of the modern world.25

Moore’s Law did for the semiconductor industry something that had never happened before—it determined the pace of change for a generation ahead. And that in turn enabled entrepreneurs to develop new products and start new companies with the sure knowledge that the underlying technology and the not-yet-existent market would be waiting for them when they arrived. That proved to be true with the calculator and digital watch, the personal computer, the computer game, cellular telephony, the Internet, digital audio and video, medical devices, intelligent control of machines, virtual reality, and on and on. At this very moment, thousands of entrepreneurial teams around the world are devising business plans based upon the continued rule of Moore’s Law.

THE INVENTION OF INVENTIONS

The explosion of Fairchild, and the resulting diaspora of semiconductor talent (the “Fairchildren”) around the region, not only created the modern Silicon Valley but also established a field of ferocious companies whose competitiveness guaranteed that Moore’s Law would get a roaring start. The most famous of these chip companies was Intel Corporation, founded by Noyce and Moore in 1968. The pair soon grew into a troika with the addition of the celebrated executive and scientist Andrew Grove.

Intel set out to use a new semiconductor technology (MOS) to become the world’s leader in the fabrication of memory chips—and quickly reached that goal. But within a year, a major new opportunity appeared that the company, despite every effort, could not ignore. The electronic calculator boom was just peaking and was about to kill off all but the best-run competitors. One of the contenders, the Japanese company Busicom, feared it would be among the losers and decided to roll the dice on a radical new design. It approached Intel in October 1969 with the notion of putting multiple types of semiconductor circuits—logic, I/O, and memory—on a single chip. Such a thing had never before been tried, but like the IC itself a decade before, the idea was in the air.

Intel took the job, assigning its own scientist Ted Hoff to devise the overall architecture of the chip (it would ultimately be a set of four chips), which he modeled on DEC’s PDP-8 minicomputer, and pairing him with Intel software expert Stan Mazor. To this team Intel added Masatoshi Shima, Busicom’s top scientist, and, from Fairchild, the world’s most respected MOS designer, Federico Faggin.26

It was Faggin’s arrival in April 1970 as the development team leader that put the project into high gear, and by the end of that year the Intel team had created a four-chip set capable of handling all of the calculator’s operations, replacing three times as many traditional chips. Intel designated the processor chip at the heart of this set the Model 4004. It was the world’s first working microprocessor. Today, with more than 20 billion in use around the world, providing intelligence to everything from phones and computers to rockets and robots, the microprocessor is the third candidate, after its ancestors the transistor and integrated circuit, for the Invention of the Century.

*   *   *

But it didn’t start that way. The computer-on-a-chip approach was so radical that the 4004, and the later 8008, were initially met with customer skepticism. Companies around the world had converted from tubes to transistors to ICs pretty easily because they all did the same thing and they were all “discrete” (that is, single-function, stand-alone) devices. But the microprocessor was a whole different way of seeing digital intelligence, and companies were wary of taking the risk. Intel itself considered abandoning the technology—not just because it wasn’t taking off as planned but because from the beginning it had been a distraction from the company’s core memory-chip business.

But then Faggin and his team created the Model 8008, a more powerful eight-bit microprocessor … and suddenly the value of the microprocessor became clear both to established companies and to new start-ups looking to leapfrog the competition. The Model 8008 was followed by the Model 8080—the seminal device for all future microprocessors (the modern Intel and AMD chips are its direct descendants)—and when IBM picked a budget version of the subsequent 8086, the 8088, to put in its first IBM PC, the age of the microprocessor began.

Soon, Intel faced another dilemma. By the early 1980s, it had become the world’s leading manufacturer of microprocessors. But it was also still the leading memory chip company—a hugely profitable business. The company was becoming schizophrenic, with both the processor and memory businesses vying for dominance. Meanwhile, other microprocessor businesses, from new companies like Zilog to established giants like Motorola, had jumped into the game and competition was ferocious. At the same time, the Japanese electronics giants had thrown their fortunes behind building memory chips and were now producing devices of such quality and low price that they were embarrassing the U.S. semiconductor industry.

It was becoming increasingly obvious that Intel had to pick one business to pursue. It was also obvious that while most of the company wanted to pursue microprocessors (technology leadership, better profit margins, a defensible market), the two men at the top, Moore and Grove, wanted to stick with the more proven memory market. In the end, they caved (and were embarrassed ever after for having been so stubborn), and Intel staked its future on microprocessors alone … and by the late 1990s was, based upon its stock price, among the most valuable companies in the world.

MEMORY MOVES EAST

Intel’s departure from memory essentially turned that industry over to the Japanese and their soon-to-be competitors in South Korea and Taiwan, and the chip companies in those countries scrambled to stake out their turf in the many different submarkets of the increasingly fractionated memory-chip world.

Memory chips now were available in two basic types: volatile, which meant that the chips had to be continuously powered, if only at a low level, to retain their memory contents; and nonvolatile, typically of lesser capacity, which retained their contents when turned off. Within these two categories there were also numerous memory types, such as SRAM and DRAM, PROM and EPROM. There were also experiments with other, more exotic technologies, such as magnetic “bubble” memory, but they proved impractical.

Volatile memory chips, invented first, ruled the technology world. The DRAM (dynamic random access memory), invented in 1966 at IBM, was long the benchmark against which Moore’s Law was tracked, and shortages, such as those of the late 1970s, actually created a black market among companies desperate for those chips to power their products. And it was accusations that Japanese semiconductor makers were dumping DRAMs at below-market prices in the early 1980s that led to the Japanese–U.S. trade war.

The fundamental weakness of dynamic memory was that it required some kind of electrical source—typically a battery in consumer devices—to keep the chips from losing their contents when the power was shut off. As with most things in high tech, this acceptable compromise soon became unacceptable with widespread use. The race was on, then, to develop memory chips that would retain their contents when turned off.

The solution came from the other side of chip memory use—read-only memory, that small part of a computer’s architecture that held permanently recorded programs to run the system and that would never be erased. Programmable read-only memory (PROM) chips had been invented in 1956 by Wen Tsing Chow of American Bosch-ARMA Corporation for the U.S. Air Force. PROMs worked by attaching a tiny “fuse” to each memory cell; blowing or preserving that fuse permanently locked the bit in place—open or closed—whether or not the device was powered. PROMs were expensive and, because of their permanence, difficult to use, so they were limited to a narrow niche in the computing world.27
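To illustrate the one-way nature of that scheme, here is a toy sketch; the names and interface are hypothetical, not any real PROM’s programming protocol.

```python
# Toy model of one-time programmability: every bit starts as 1 (fuse intact)
# and can be blown to 0 exactly once. Nothing can restore a blown fuse, and
# no power is needed to remember its state -- hence nonvolatile.

class PromSim:
    def __init__(self, bits: int):
        self.fuses = [1] * bits  # intact fuses read back as 1

    def blow(self, index: int) -> None:
        # Programming is destructive and irreversible.
        self.fuses[index] = 0

    def read(self, index: int) -> int:
        return self.fuses[index]

prom = PromSim(8)
prom.blow(3)                # program bit 3 ...
assert prom.read(3) == 0    # ... and it stays 0, with no way back to 1
```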

But that began to change in 1971, when Dov Frohman, an Israeli scientist (and later vice president) at Intel, invented the erasable PROM. The EPROM was nonvolatile, yet it could be erased (by exposing the chip to ultraviolet light through a small window in its package) and then reprogrammed. In other words, it began to close the gap between the two worlds of memory chips. Gordon Moore would claim that the EPROM was as important to the development of the personal computer as the microprocessor was … and its success was one reason why Moore and Andy Grove were so resistant to taking Intel out of the memory business.

There was one more step: the electrically erasable PROM, or EEPROM. The EEPROM was also invented at Intel, in 1978, by George Perlegos, but was perfected elsewhere when Perlegos and other scientists left Intel to form Seeq Inc. The critical advantage of the EEPROM was that unlike the EPROM, which had to be removed from the device to be reprogrammed, EEPROM could be erased and recoded in situ, i.e., while it was still in the device, by electrical signals.28

IN A FLASH

The next step from EEPROM was a short but hugely influential one. Technologically, the EEPROM was the definitive solution to the challenge of usable memory chips, but in practice it had some serious obstacles, especially in consumer electronics: It was expensive and it was slow. In 1980, Fujio Masuoka of Toshiba, responding to his company’s growing need for a new kind of EEPROM to use in its consumer devices, redesigned the EEPROM, sacrificing some of its flexibility for improved erase and reprogram speed. In particular, instead of working with individual bytes of memory on the chip (as computer companies required), Masuoka designed his new device to erase and program in large blocks of data—resulting in markedly improved response times. One of his colleagues, seeing this new EEPROM in action, dubbed it flash memory because the way it erased whole blocks in an instant reminded him of a camera flashbulb.29
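To make the block-versus-byte distinction concrete, here is a minimal sketch with hypothetical names and an assumed block size; it is an illustration of the idea, not the command set of any real flash chip.

```python
# Toy model of flash-style storage: programming can only clear bits (1 -> 0),
# and the only way to set bits back to 1 is to erase an entire block at once.
# This coarse granularity is the trade-off Masuoka made for speed and cost.

BLOCK_SIZE = 4096  # bytes per erase block -- an assumed, typical figure

class FlashSim:
    def __init__(self, num_blocks: int):
        self.cells = bytearray([0xFF] * (num_blocks * BLOCK_SIZE))  # erased = 0xFF

    def program(self, address: int, value: int) -> None:
        # Programming pulls bits from 1 to 0 at a single address.
        self.cells[address] &= value

    def erase_block(self, block_index: int) -> None:
        # Erasure works on a whole block, never on an individual byte.
        start = block_index * BLOCK_SIZE
        self.cells[start:start + BLOCK_SIZE] = bytes([0xFF]) * BLOCK_SIZE

# To change even one byte back to 0xFF, the controller must erase (and then
# reprogram) the whole 4,096-byte block -- coarse, but far faster in bulk
# than erasing byte by byte as an EEPROM does.
flash = FlashSim(num_blocks=4)
flash.program(10, 0x5A)
flash.erase_block(0)
```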

By 2006, flash memory—now used instead of the slower and more fragile disk drives in digital cameras, smart phones, electronic tablets, and the ubiquitous memory stick/“thumb drive”—had grown into a $20 billion industry, or one-third of the world’s entire memory-chip business. It had also made the Japanese and South Korean companies that built flash chips, such as Toshiba and Samsung, into major competitors on the world semiconductor scene once again.

In 2005, Toshiba, working with U.S. memory-card maker SanDisk, announced the first one-gigabyte flash chip. Later that year, Samsung announced its own chip with twice that capacity, proving that flash was exhibiting the characteristics of Moore’s Law. A year later, Samsung introduced a four-gigabyte flash chip, its capacity the equivalent of the standard small laptop disk drive.

The news stunned the electronics world: Chip memory had always been the smaller and more expensive counterpart of disk memory. But now an important technological and cultural threshold had been crossed. Very few consumer applications of technology required more than a few score megabytes of storage. With flash chips now offering a hundred times that much storage, who needed disk memory anymore? Sure, a disk could hold a trillion bits of data, but it was slower than a chip, and because it was an electromechanical device full of spinning and moving parts, it was more likely to eventually break down.

Apple by now had long since abandoned disk memory in its iPods in favor of flash, without any apparent loss of performance in the eyes of consumers (the iPhone would use flash from the very start). Other companies were following suit. And then the turning point: In June 2006, Samsung announced the first line of PCs that substituted flash memory for a hard-disk drive. Dell announced a comparable line a year later. And while some computer makers hedged their bets by offering hybrid systems that combined a disk drive with an attached flash memory cache, it was clear that the era of magnetic memory was coming to a close.

The age of solid-state artificial memory had begun.