The race to build the Alto began one beautiful day in September when Chuck Thacker and Butler Lampson showed up at Alan Kay’s office door.
“Alan,” they said, “do you have any money?”
“Sure,” he replied. “I’ve got about $230,000 in my budget. Why?”
“How would you like us to use it to build your little machine?”
“I’d like it fine,” Kay replied. “But what’s the hurry?”
“Well, we were going to do it anyway,” Lampson replied. “But Chuck’s just made a bet that he can design a whole machine in just three months.”
For Kay, the appearance of his two colleagues from down the hall marked the end of a long, difficult summer.
The year had started with a glimmer of optimism. Kay had the feeling he might finally be within striking distance of turning some of his great ideas into reality. He had reworked his Dynabook concept into something he called “miniCom,” a keyboard, screen, and processor bundled into a portable, suitcase-sized package. Meanwhile, the software aces he had brought together as PARC’s Learning Research Group had turned his outline for a simplified programming language into real code, to which he gave the characteristically puckish name “Smalltalk.” (Most programming systems “were named Zeus, Odin, and Thor and hardly did anything,” he explained. “I figured that ‘Smalltalk’ was so innocuous a label that if it ever did anything nice people would be pleasantly surprised.”)
Kay’s team had already demonstrated Smalltalk’s latent power by running rudimentary but dazzling programs of computer-generated graphics and animation on a video display system built by Bill English’s design group. Kay himself was a compulsive promoter, producing a steady stream of articles and conference abstracts, often illustrated with his own hand drawings of children in bucolic settings playing with their Dynabook, to proclaim the death of the mainframe and the advent of the “personal computer.”
By the spring of 1972 he was ready for the next step. Having drawn on Seymour Papert’s LOGO for some of Smalltalk’s basic ideas (although the two languages worked quite differently under the surface), Kay was eager to give it a Papert-style test run. That meant giving children, its idealized users, a shot at performing simple programming tasks on miniComs. He figured he would need about thirty of the small machines, to be built by the Computer Science Lab’s crack hardware engineers.
The only thing left to do was persuade CSL to take the job.
That May at a CSL lab meeting, Kay made his pitch. As the lab staff lounged in front of him in their beanbag chairs, he laid out the argument for building the world’s first personal computer. He understood this would mean pushing the envelope on display technology—the smallest screens used at PARC were still the size of household television sets, although systems in which digital bits controlled “pixels,” or dots on the display screen, had been tested by numerous researchers in the building. They would have to spend thousands of dollars on semiconductor memory to drive the miniCom’s high-performance graphical display, but they all knew the price was destined to fall sharply. In fact, there was hardly anything in the blueprint that would not be commercially accessible to the average user within ten years. And wasn’t that why they were here—to build the most capable system they could imagine, so far ahead of the curve that they could figure out what to do with it by the time the rest of the world caught up?
“We know everything,” he told his audience. “We know exactly how big the pixels are, we know how many pixels we can get by with, we know how much computing power we need. The uses for a personal gadget as an editor, reader, take-home context, and intelligent terminal are fairly obvious. Now let’s build thirty of these things so we can get on with it.” He regained his seat, confident as always of having made an incontestable case.
Then Jerry Elkind took the floor.
At CSL Elkind held the purse strings. No large-scale hardware project like Kay’s could be undertaken without his say-so. But Jerry Elkind and Alan Kay were like creatures from different planets, one an austere by-the-numbers engineer and the other a brash philosophical freebooter. Let others have stars in their eyes—Elkind was not the type to be beguiled by Kay’s romantic glow. As a manager he responded to rationales on paper and rigorous questions asked and answered, not hazy visions of children toying with computers on grassy meadows. He was a tough customer, demanding and abrasive. He asked too many questions and, more’s the pity, they were often good ones. As Jim Mitchell once remarked, “Jerry Elkind knows enough to be dangerous.”
At this moment he pronounced the words that most CSL engineers had learned to dread as his kiss of death.
“Let me play devil’s advocate,” he said.
He proceeded to pick apart Kay’s proposal in pitiless detail. The technology was speculative and untested, he pointed out. To the extent that the miniCom was geared toward child’s play, it fell outside PARC’s mandate to create the office system of the future. To the extent that it fell within that mandate, it was on entirely the wrong vector.
Perhaps Kay had not noticed, but PARC had not yet finished exhausting the possibilities of time-sharing. That was the whole point of building MAXC, which was after all a time-sharing machine. As Kay recalled later, the sting still fresh: “He essentially said that we had used too many Green Stamps getting Xerox to fund the time-shared MAXC, and this use of resources for personal machines would confuse them.”
And what about the issue of PARC’s overall deployment of resources, Elkind asked. A major office computer program was already well under way in Kay’s own lab. Had Kay given any thought to how his project might fit in with that one?
Elkind was referring to POLOS, the so-called “PARC On-line Office System,” which was Bill English’s attempt to reproduce the Engelbart system on a large network of commercial minicomputers known as Nova 800s. He was correct in stating that POLOS ranked as PARC’s official entry in the architecture-of-information race. This was so in part because English had cannily put a stake in the ground with a round of purchase orders for the Novas, which committed Xerox to following through. The small, versatile machines were already proliferating at SSL like refrigerator-sized Star Wars droids.
But Kay considered POLOS irrelevant to his project. POLOS was explicitly a big-system prototype, an expensive luxury model as far removed from the homey, individualistic package Kay had in mind as a Lincoln Town Car is from a two-seat runabout. Under Elkind’s condescending assault, however, Kay’s customary fluency deserted him. He sat mute while Elkind patronizingly dismissed his life’s work as a quixotic dream.
“I was shocked,” he said later. “I crawled away.” Once outside the room and beyond the hearing of his audience, he succumbed to his ordeal and broke down in tears.
A few days later, back in the Systems Science Lab, Kay sought out Bill English. To the extent Elkind thought of English and Kay as rivals for PARC resources, he was mistaken. In truth, English had become something of a father figure for Kay, whose academic training at Utah had left him with the impression that one acquired research funds simply by calling up ARPA and asking for money. Shortly after they both arrived at PARC, English had taken it upon himself to introduce Kay to such elementary corporate concepts as research budgets. (“I’m afraid I really did ask Bill, ‘What’s a budget?’” Kay recalled of the first lesson English ever gave him.)
Now English volunteered some further advice to his wounded young colleague. Among the PARC brass, Kay lacked credibility. All the way up to George Pake he was regarded tolerantly as a sort of precocious child, engaging enough in his place but profoundly in need of adult supervision. His reputation as a dreamer only made it easier for bureaucratic types like Jerry Elkind to dismiss his ideas without affording them serious scrutiny. English informed Kay, in essence, that his barefooted treks through Ideaspace would no longer do. He had to learn to develop written research plans, compile budgets, and keep notes—in short, to look and act like a serious researcher.
Kay took the advice to heart. Over the next few months he drafted a detailed plan for a music, drawing, and animation system to teach kids creative programming on Novas. He did not abandon his cherished miniCom, but recognized that he would have to reach the grail via a series of smaller steps and commit himself to a program of several years. The effort bore fruit. By summer’s end he had acquired a $230,000 appropriation to equip a bank of Novas with character generators that produced text and simple graphics for display on a high-quality screen. His small group of learning software specialists had been about to begin developing the programming environment for this jury-rigged system when Lampson and Thacker knocked at his door with a different idea.
Elkind, not for the last time, had been following a different vector from that of his principal research scientists. As it happened, many CSL engineers were convinced that time-sharing’s potential was thoroughly exhausted. Lampson and Thacker had thought hard about how to redistribute computer power so no one would have to share processing cycles with anyone else. They agreed with Kay that this meant building dozens of individual machines, not just one. This would take money. Not that funds were scarce at PARC; but they were scattered too widely for any single group to have enough to finance the massive engineering program they envisioned. What was required was a fiscal version of “Tom Sawyering,” in which they would collect contributions from every interested researcher and rake them together in one great pile.
Thacker and Lampson regarded Kay as a prime donor. For one thing, the architecture of his cherished Dynabook, or miniCom, or Kiddicomp (whatever he was calling the thing in its latest incarnation) corresponded neatly with their own visions of the ideal personal computer: for Lampson, a suitcase-sized MAXC with a component cost of about $500; for Thacker, a machine with the Nova 800’s capabilities and ten times its speed.
The notions of all three intersected at one common goal: a fast, compact machine with a high-resolution display. “The thing had to fit in a reasonable sized box and it couldn’t cost too much,” said Lampson. “Small and simple was critical, because the whole point of it was to have one for everybody.” By combining the latest electronic components coming into the market with their own powerful intellects, they might just pull it off. Not the Dynabook in all its interactive glory, perhaps, but a giant leap in the right direction—in Kay’s words, an “interim Dynabook.”
Hearing their offer, Kay could barely contain his excitement—until he realized they might still face one important obstacle.
“What are you going to do about Jerry?” he asked glumly. Elkind still controlled the CSL budget. Lampson and Thacker had both been present the day he shot down the miniCom. Was there really any hope that he would see this new project any differently?
“Jerry’s out of the office for a few months on a corporate task force,” Lampson replied. “Maybe we can sneak it in before he gets back.”
“Can you get it done that quickly?”
“We’ll have to. Anyway, there’s another reason to move fast.”
“What is it?”
That was when they told him about Thacker’s bet.
“Bill Vitek was a vice president at SDS,” Thacker recalled later. “I had been down in El Segundo visiting SDS for some reason I don’t remember. We didn’t make a lot of friends there when we built MAXC, and the fact it took only eighteen months led them to think that somehow we had cheated, although they couldn’t quite figure out how.
“So I was arguing about that with Bill Vitek, and being a cocky and fairly arrogant guy I said, ‘You know, you can build a computer in three months if it’s small enough.’ And Vitek said, ‘Aw, bullshit!’ And I said, ‘Not bullshit at all!’ And we ended up betting a bottle of wine or a dinner, I don’t even remember which.
“But I do remember that I won that bet.”
Chuck Thacker started designing the Alto on November 22, 1972. He enlisted Ed McCreight to help with the engineering and completed the design before the end of February, beating Vitek’s deadline.
The original plan was to manufacture up to thirty Altos for distribution to the engineers in the Computer Science Lab and to Kay’s Learning Research Group (his seed money was allocated to finance the first ten). But from the moment of its birth the Alto created a sensation. As Taylor had long anticipated, the power of the interactive display spoke for itself. The Alto’s screen, whose dimensions and orientation replicated those of an 8½-by-11-inch sheet of paper, produced such a vivid impression that the lab’s modest construction plan was soon expanded. In the end Xerox would build not thirty Altos, but nearly two thousand.
The Alto was by no means the fastest or most powerful computer of its time. MAXC could blow it away on any performance measure in existence and for a considerable time remained the machine of choice at PARC for heavy-duty computation. Even without the burden of illuminating the full-screen display, the Alto ran relatively slowly, with a processor rate of less than 6 megahertz (the ordinary desktop personal computer as of this writing runs at a rate of 400 MHz or faster); the display slowed it further by a factor of three.
But the Alto’s great popularity derived from other characteristics. To computer scientists who had spent too much of their lives working between midnight and dawn to avoid the sluggishness of mainframes burdened by prime-time crowds, the Alto’s principal virtue was not its speed but its predictability. No one said it better than the CSL engineer Jim Morris: “The great thing about the Alto is that it doesn’t run faster at night.”
Then there was the marvelous sleekness of its engineering. To some extent this was an artifact of Thacker’s haste, for his tight deadline erased any impulse he might have felt to create a “second-system” variation on MAXC. There was simply no opportunity for biggerism.
Instead, to save time and money Thacker and his team went entirely the other way. The Alto was like a fine timepiece somehow assembled from pieces of stray hardware lying around the lab. Rather than design new memory components, they ingeniously reused boards that had already been built for MAXC. Ed McCreight revisited his own design of the MAXC disk controller and managed to strip out a few more circuits for the Alto. Even the display monitors were appropriated from POLOS, which had fallen so far behind schedule that its specially ordered video display terminals were still sitting around in boxes.
In almost every respect the Alto design was so compact and uncomplicated that during the first months, while prototypes were still scarce, engineers desperate to get their hands on one were invited to come into the lab and assemble their own. Ron Rider, who had joined PARC only a few months earlier upon graduating from Washington University, “had an Alto when Altos were impossible to get,” recalled one of the lab managers. “When I asked him how he got one, he told me that he went around to the various laboratories, collected parts that people owed him, and put it together himself.”
Of course Thacker did not really design the Alto from scratch in three short months. His wager with Bill Vitek was something of a sucker bet. Several basic elements of Alto’s design had been known to computer science for years, and others had been kicking around CSL ever since the completion of MAXC. During the summer of 1972 Thacker had even outlined for CSL the design points for a small machine in a ten-page memo entitled “A Personal Computer with Microparallel Processing.”
The philosophical core of the design came from Bob Taylor, who also supplied the machine’s name (Alan Kay never entirely ceased calling it the “interim Dynabook”). As Taylor constantly informed his top engineers, time-sharing’s success in making computing more accessible to the user quantitatively was only part of the equation: Nothing had yet been accomplished in terms of improving “the quality of man-machine interaction.” Finishing the job involved three steps: placing computing power in individual hands, delivering information directly to the eyeball via a high-performance display, and linking the computers together on a high-speed network.
As late as 1971, all three steps still seemed technically unfeasible. Computing power and memory were plainly too expensive to hand out in individual parcels, especially since they were consumed insatiably by the so-called “calligraphic” display tubes then in use with graphics-oriented computers. Moreover, because these displays laboriously constructed their images stroke by stroke, rather than by sweeping an electron beam across a phosphor-coated surface thousands of times a second the way television tubes do, they were prone to annoying flicker. These were not qualities that would lend themselves to relaxed communication between man and machine. As for existing network technologies, they were either complex and slow or, like the ARPANET, required the installation of hundreds of thousands of dollars in specialized hardware.
Then there was Taylor’s habit of speaking in parables when he could not articulate his ideas in the precise argot of engineering. “When we were building MAXC, Taylor told Chuck and me a bunch of stuff we couldn’t understand at all at the time,” Lampson recalled in amusement. “We dismissed it as the ravings of a technically illiterate manager. But looking back on it two years later, it was crystal clear what he was trying to tell us to do: Build the Alto.”
What had changed by mid-1972 was their recognition of how quickly memory and computing power were sliding down the cost curve. “It was only when we realized that memory would get really cheap, when we understood Moore’s Law and internalized it,” recalled Thacker, “that it became clear that this is what Bob had been saying all along. All of a sudden it was perfectly sensible to build a computer that used two-thirds of its processor cycles and three-fourths of its memory to run the display. From then on it was all downhill. The engineering was easy. But getting that basic idea required understanding that eventually you’d have so much power in a machine that running the display wouldn’t require anywhere near two-thirds of it.”
Thacker still had to solve numerous problems in designing a serviceable personal computer that would be fast and compact without sacrificing versatility and power, and that would also have a display clear, sharp, and nimble enough to keep up with the processor without driving the user blind. In the end he found the crucial answers inside PARC itself.
His first inspiration was the concept of “microparallel processing.” The basic idea came from a singular aspect of MAXC’s operation—what Ed McCreight had described as “hijacking” the central processing unit. Thanks to a common bottleneck in computer architectures, the processor, or brain, of a typical machine shared access to the computer’s main memory with all the machine’s peripheral devices. Because only one device could be serviced at a time, the processor was often left idle while some other component temporarily monopolized the memory. “While the disk was accessing the memory, for instance,” Thacker said, “the processor essentially stopped because it was waiting its turn.”
Thacker’s inspiration was to shift the bottleneck from the memory to the processor itself. In his design, only the CPU, which after all was the most important component of the machine, would be permitted to address the main memory at any time. The CPU would take over the computing functions of all the peripherals—disk drive, keyboard, and display—deciding on its own when they needed servicing and for how long.
Thacker reasoned that if each of the computer’s routine tasks could somehow be ranked by urgency and funneled through the processor in appropriate order, he could keep the processor occupied almost full-time. If the ranking was correct, every task would be handled when it needed to be, no sooner and no later. Low-priority tasks could be interrupted for brief periods to make way for more urgent ones, then resumed later, when nothing more pressing was in the way. The gain in efficiency, speed, and hardware was potentially huge. Whole circuit boards that served as the ancillary brains of disk drives and other units could be dispensed with. The Alto’s CPU would be drafted into doing the thinking for all of them.
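The rule is simple enough to sketch in a few lines of C. The toy program below is a loose illustration of the principle, not the Alto’s microcode: every device’s work is a numbered task, a “wakeup” flag marks a task that needs service, and whenever the processor is free it simply runs the ready task with the highest priority. All names and the wakeup mechanism here are invented for the example.

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_TASKS 16

    static bool wakeup[NUM_TASKS];  /* wakeup[t] is set when task t needs service */

    /* Scan from highest priority (index 0) down; a low-priority task simply
       loses the scan while anything more urgent is pending, then resumes. */
    static int next_task(void) {
        for (int t = 0; t < NUM_TASKS; t++)
            if (wakeup[t])
                return t;
        return NUM_TASKS - 1;  /* lowest slot: always-ready background work */
    }

    int main(void) {
        wakeup[NUM_TASKS - 1] = true;            /* background task: always ready */
        wakeup[3] = true;                        /* a device requests service */
        printf("next: task %d\n", next_task());  /* -> 3, preempting background */
        wakeup[3] = false;                       /* request satisfied */
        printf("next: task %d\n", next_task());  /* -> 15, background resumes */
        return 0;
    }

In this sketch the scan itself is the whole scheduler: no task ever waits behind anything less urgent, and nothing urgent waits at all.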
Thacker’s second crucial inspiration involved the question of how to power a high-performance display without busting the budget on memory. This was not trivial: He understood that the quality of the display would make or break his new computer.
Up until then, computer designers wishing to provide an interactive display faced two equally unappetizing choices: They could give the display little memory support, which led to flickering and slow performance, or they could provide backup memory through a character generator, which meant burdening the system with yet another bug-prone peripheral the size of a washing machine.
Thacker struggled at length with the riddle of how to direct a suitable volume of information to the screen without adding excess hardware. The answer came to him one day while he was watching a demonstration of one of Kay’s graphics programs in the Systems Science Lab.
The demo utilized a character generator designed by a former Engelbart engineer named Roger Bates (with Lampson’s assistance). This unit, which had thousands of dollars’ worth of memory inside, was a distant relative of the one Ron Rider would later build for the SLOT. It was designed to store custom fonts by allowing each character to occupy a small rectangular patch of memory until summoned to the screen. Most of the PARC engineers considered it a disappointment, largely because the designers’ ambition to reproduce book-quality pages on the POLOS screen turned out to be a tougher programming challenge than they anticipated.
Kay’s group was an exception. Bored with the idea of painting text on the screen but fascinated with the possibility of displaying images, they had appropriated the system—“perverted it,” in Lampson’s unpejorative phrase—to use for their graphics and animation programs by loading its memory not with print characters, but graphical designs. The result was a rudimentary black-and-white “bitmap”—a block of memory in which each bit corresponded to a dot on a display screen. Flip a given memory bit “on” and the corresponding dot lit up on the display; turn on these bits in a given pattern and you could map the same image to the screen.
As Lampson explained, “In the normal deal there would be a little bitmap for the character ‘A’ in the font memory and a bitmap for ‘B’ and ‘C’ and so on, and then the character memory would say display an ‘A,’ an ‘h,’ and an ‘a,’ and aha, you have ‘Aha.’ Alan said, ‘We’ll have a whole bunch of artificial characters numbered 1 through 500, and the character memory will say display 1, then 2, then 3.’ The result was to take the font memory and turn it into a bitmap”—that is, well before the lab had the resources to build a full-scale bitmap.
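A bitmap is equally easy to sketch. In the toy C fragment below, one bit of memory corresponds to one dot of a 606-by-808 screen, the resolution the Alto display would end up with, packed into 16-bit words as on the Alto itself; the names and the little demonstration are invented:

    #include <stdint.h>
    #include <stdio.h>

    #define WIDTH  606                          /* dots per scan line */
    #define HEIGHT 808                          /* scan lines */
    #define WORDS_PER_LINE ((WIDTH + 15) / 16)  /* bits packed into 16-bit words */

    static uint16_t bitmap[HEIGHT][WORDS_PER_LINE];

    /* Flip one memory bit "on" and the corresponding dot lights up;
       the display hardware does nothing but scan this memory. */
    static void set_pixel(int x, int y) {
        bitmap[y][x / 16] |= (uint16_t)(0x8000 >> (x % 16));
    }

    int main(void) {
        for (int x = 100; x < 200; x++)  /* a horizontal line: 100 lit dots */
            set_pixel(x, 50);
        printf("%04x\n", bitmap[50][6]); /* prints 0fff: dots 100-111 of line 50 */
        return 0;
    }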
Kay’s group had only begun to investigate the potential of this new way of displaying information (although they had done enough to help persuade Thacker and Lampson of the need to equip the Alto with a high-resolution display). Among their first simple programs was one that could embed “icons,” or thumbnail-sized pictures, within blocks of text. Another was a painting system in which users wielded square “brushes” up to four pixels wide to draw or erase lines and curves on the screen.
Impressed as he was by these applications, Thacker was struck more by the underlying principle by which Kay’s system used the memory blocks. He realized that just as Kay’s team had turned the character generator into a simple bitmap, he could convert idle blocks of the Alto’s main memory into a bitmap for the display screen. Forcing the memory to perform this double duty would eliminate the need for a separate character generator.

This required cutting a few corners, because the display would now have to compete with all of the machine’s other functions for memory blocks. When the Alto placed a text document on its screen, for example, it would economize by omitting from the bitmap any part of the page that lacked text, such as the white spaces between lines and at all four margins.

Also, whenever there were competing demands for memory from data and display, the display lost. Users had to be alerted to expect a strange phenomenon: During a work session the image of the document they were writing or editing would gradually shrink, like a window shade rolling up from the bottom. The reason was that as the increasing volume and complexity of the data claimed more memory, less remained for the bitmap. The same phenomenon accounted for what happened whenever the Alto displayed a full-screen graphical image. On those occasions it tended to run agonizingly slowly, in part because so many processor cycles were consumed in painting the screen, but also because the display consumed so much memory there was barely enough left to keep the program percolating along.
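One way to picture the economizing is as a chain of horizontal bands running down the screen, in which only the bands that actually contain marks are backed by bitmap memory, while a blank margin costs nothing at all. The C sketch below is a hedged illustration of that idea under invented names, not a rendering of the Alto’s actual display data structures:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct band {
        struct band    *next;   /* next band down the screen */
        int             lines;  /* scan lines this band covers */
        const uint16_t *bits;   /* bitmap words, or NULL for an all-white band */
    };

    /* Words of memory one frame needs: blank bands cost nothing. */
    static size_t frame_words(const struct band *b, int words_per_line) {
        size_t total = 0;
        for (; b != NULL; b = b->next)
            if (b->bits != NULL)
                total += (size_t)b->lines * (size_t)words_per_line;
        return total;
    }

    int main(void) {
        static const uint16_t text_bits[1] = {0};       /* stand-in for real data */
        struct band margin = {NULL, 300, NULL};         /* blank bottom margin */
        struct band text   = {&margin, 508, text_bits}; /* band containing text */
        printf("%zu words\n", frame_words(&text, 38));  /* only the text band counts */
        return 0;
    }

In terms of this sketch, the window-shade shrinkage users saw amounts to bands being surrendered from the bottom of the chain as data claims more of the memory.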
Without this sort of artfulness the Alto display would not have been possible at all. Even within its limits it made severe demands on the machine; its resolution of 606 by 808 pixels meant that nearly a half-million bits needed to be refreshed thirty times per second. (Kay envisioned a one-million-pixel display for his Dynabook, but had to be satisfied with what he got.)
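The arithmetic behind that figure: 606 × 808 = 489,648 bits per frame, and refreshing them thirty times a second means delivering roughly 14.7 million bits to the screen every second.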
Once it was running, however, it made believers out of skeptics. Not the least important of these was Jerry Elkind.
Elkind had returned to PARC from his task force assignment in the late fall. Already uneasy at the necessity of reasserting his authority following a nearly six-month absence, he was even more put out to find that a full-scale skunk works had been launched behind his back to pursue a project whose value he questioned.
One peek into the basement workshop of Building 34 told him it might be too late to do much about it. Clearly the Alto had taken on a life of its own. But he also thought the important issues he had raised with Lampson, Thacker, and Kay remained unaddressed.
“Are we going to invest a major hunk of the lab’s resources and a lot of money in developing five or six prototypes of something we’re not sure will work?” he asked. For all that Thacker and Lampson assured him the finished product would be the epitome of cool, a glance at the schematics failed to ease his concerns—especially after he noticed the huge proportion of memory that would be devoted to maintaining the display.
“I don’t think I had the skills to appreciate what could be done with it without seeing it work,” he said later. “I certainly had questions about what the end result was going to cost and how many we could afford.” He instructed Lampson to give him some answers, in writing.
Lampson’s response was a December 19 memo entitled simply “Why Alto.” In three and a half sharply reasoned pages he furnished the project all the technical and philosophical justification it would ever need. While acknowledging that some of the “original motivation” for the Alto came from Alan Kay in the Systems Science Lab, he also portrayed it as a machine of tantalizing potential for everyone in the Computer Science Lab. The Alto would be capable of performing almost any computation a PDP-10 (that is, MAXC) could do. It would be more powerful than the video terminal system Bill English was designing for POLOS, with better graphics. It would run all the office system software being written in various labs at PARC with power to spare. And it would render the costly Novas obsolete.
Lampson pointed out that at $10,500 per machine the Altos would cost barely half what PARC had spent per CSL member in building MAXC. (With a full complement of memory, as it turned out, the first few Altos cost closer to $18,000. After the original design was reengineered for efficiency and a high-volume manufacturing program was put in place, however, that dropped down to about $12,000.) Lampson considered himself on firm ground in stating that the machine would be cheap enough to enable PARC to afford one for every member of the lab.
“If our theories about the utility of cheap, powerful personal computers are correct,” he concluded, “we should be able to demonstrate them convincingly on Alto. If they are wrong, we can find out why.”
By early April the first prototype was ready to start computing. Thacker and McCreight together had worked out the priority by which sixteen essential computing tasks would contend for the processor’s attention. This basically involved determining how quickly each task had to be completed before it failed, and how important it was to the rest of the machine. Transferring data between the disk and the memory was particularly critical, for instance, because without data in memory nothing else would work. Therefore disk operations earned the highest priority. Next came the display (actually three tasks: one to refresh the horizontal scan, one for the vertical, and a third to transfer display data into and out of memory). Any untoward delay here would mean rendering the screen unintelligible. Farther down the list came monitoring the local network (the Ethernet, being invented concurrently down the hall by Bob Metcalfe and David Boggs) and running the Alto’s basic instruction set, a variant of the Nova’s.
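The ordering just described can be written down as a table; the hypothetical C enum below is one way to do it, with a lower number meaning a stronger claim on the processor. Only the tasks named above appear, the names are invented, and the real machine had sixteen slots in all:

    /* Priority positions for the Alto's contending tasks (illustrative names;
       index 0 is served first, index 15 last). */
    enum task_priority {
        TASK_DISK = 0,        /* disk transfers: without data in memory,
                                 nothing else works */
        TASK_DISPLAY_HORIZ,   /* refresh the horizontal scan */
        TASK_DISPLAY_VERT,    /* refresh the vertical scan */
        TASK_DISPLAY_DATA,    /* move display data into and out of memory */
        TASK_ETHERNET,        /* monitor the local network */
        /* ... remaining slots ... */
        TASK_EMULATOR = 15    /* the Nova-like basic instruction set */
    };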
Thacker and McCreight were so pleased with their task-switching scheme they started preparing a patent application, at which point they discovered to their great embarrassment that someone had got there first. The bearer of this jarring news was Wes Clark, who was the pioneer in question. Trim and lantern-jawed as ever, Clark served as senior consultant to the Computer Science Lab. During one of his regular consulting visits he had learned of the patent proposal. Shortly thereafter he showed up in the Alto workshop.
“This Alto stuff is pretty interesting,” he observed, deadpan. “I wonder if, in a few words, you could say what the relationship is to the TX-2 and in particular to the task structure of the TX-2?”
Neither Thacker nor McCreight knew much about Clark’s trailblazing thirteen-year-old machine. They looked at each other, perplexed.
“Well, ah, well, ah,” McCreight stammered out, “not very well.”
“Well, as it happens I have some copies of the TX-2 documentation here I could leave with you,” Clark said. “Why don’t I just come back and ask the question later?”
That night they pored over the papers in a state of shock. Clark’s TX-2, they recognized, had used almost exactly the same task-priority scheme as the Alto.
The next day Clark returned to find the two engineers profoundly ashamed at not having read the literature earlier.
“Wes,” said McCreight, “my only excuse is I was in the eighth grade at the time.”
The first two prototype Altos took shape in the basement workshop of Building 34. They came into the world naked and blind, as helpless as hatchlings, for the hardware had been built so quickly that the software to run it was still months from completion and its essential programs had to be bootstrapped in from the nearest Nova.
Any semblance of helplessness dissolved, however, the moment the screen lit up. The sight of black letters, figures, and symbols displayed in sharp relief against its glowing white background burned itself instantly into one’s consciousness. No one doubted that the Alto marked the omega to every thread of computer science that had come before and the alpha of a dazzling new world; and no one ever forgot the pure euphoria they felt the first time they saw an Alto running.
“It was like watching a baby waving its arms,” recalled John Shoch. “Waving its arms as if to say, ‘I’m alive! I’m alive!’”