CHAPTER 4

Multitouched

How the iPhone became hands-on

The world’s largest particle physics laboratory sprawls across the Franco-Swiss border like a hastily developed suburb. The sheer size of the labyrinthine office parks and stark buildings that make up the European Organization for Nuclear Research, better known as CERN, is overwhelming, even to those who work there.

“I still get lost here,” says David Mazur, a legal expert with CERN’s knowledge-transfer team and a member of our misfit tour group for the day, along with yours truly, a CERN spokesperson, and an engineer named Bent Stumpe. We take a couple wrong turns as we wander through an endless series of hallways. “There is no logic to the numbering of the buildings,” Mazur says. We’re currently in building 1, but the structure next door is building 50. “So someone finally made an app for iPhone to help people find their way. I use it all the time.”

CERN is best known for its Large Hadron Collider, the particle accelerator that runs under the premises in a seventeen-mile subterranean ring. It’s the facility where scientists found the Higgs boson, the so-called God particle. For decades, CERN has been host to a twenty-plus-nation collaboration, a haven that transcends geopolitical tensions to foster collaborative research. Major advances in our understanding of the nature of the universe have been made here. Almost as a by-product, so have major advances in more mundane areas, like engineering and computing.

We’re shuffling up and down staircases, nodding greetings at students and academics, and gawking at Nobel-winning physicists. In one stairwell, we pass ninety-five-year-old Jack Steinberger, who won the prize in 1988 for discovering the muon neutrino. He still drops by all the time, Mazur says. We’re pleasantly lost, looking for the birthplace of a piece of technology that history has largely forgotten: a touchscreen built in the early 1970s that was capable, its inventor says, of multitouch.

Multitouch, of course, is what Apple’s ENRI team seized on when they were looking for a way to rewrite the language of how we talk to computers.

“We have invented a new technology called multitouch, which is phenomenal,” Steve Jobs declared in the keynote announcing the iPhone. “It works like magic. You don’t need a stylus. It’s far more accurate than any touch display that’s ever been shipped. It ignores unintended touches; it’s super smart. You can do multi-finger gestures on it. And, boy, have we patented it.” The crowd went wild.

But could it possibly be true?

It’s clear why Jobs would want to lay claim to multitouch so aggressively: it set the iPhone a world apart from its competition. But if you define multitouch as a surface capable of detecting two or more simultaneous touches, the technology had existed, in various forms, for decades before the iPhone debuted. Much of its history, however, remains obscured, its innovators forgotten or unrecognized.

Which brings us to Bent Stumpe. The Danish engineer built a touchscreen back in the 1970s to manage the control center for CERN’s amazingly named Super Proton Synchrotron particle accelerator. He offered to take me on a tour of CERN, to show me “the places where the capacitive multitouch screen was born.” See, Stumpe believes that there’s a direct lineage from his touchscreen to the iPhone. It’s “similar to identical” to the extent, he says, that Apple’s patents may be invalid for failing to cite his system.

“The very first development was done in 1972 for use in the SPS accelerator and the principle was published in a CERN publication in 1973,” he told me. “Already this screen was a true capacitive transparent multitouch screen.”

So it came to pass that Stumpe picked me up from an Airbnb in Geneva one autumn morning. He’s a spry seventy-eight; he has short white hair, and his expression defaults to a mischievous smile. His eyes broadcast a curious glint (Frank Canova had it too; let’s call it the unrequited-inventor’s spark). As we drove to CERN, he made amiable small talk and pointed out the landmarks.

There was a giant brutalist-looking dome, the Globe of Science and Innovation, and a fifteen-ton steel ribbon sculpture called Wandering the Immeasurable, which is also a pretty good way to describe the rest of the day.

Before we get to Stumpe’s touchscreen, we stop by a site that was instrumental to the age of mobile computing, and modern computing, period—the birthplace of the World Wide Web. There would be no great desire for an “internet communicator” without it, after all.

Ground zero for the web is, well, a pretty unremarkable office space. Apart from a commemorative plaque, it looks exactly the way you’d expect an office at a research center to look: functional, kind of drab. The future isn’t made in crystal palaces, folks. But it was developed here, in the 1980s, when Tim Berners-Lee built what he’d taken to calling the World Wide Web. While trying to streamline the sharing of data among CERN’s myriad physicists, he devised a system that linked pages of information together with hypertext.

That story is firmly planted in the annals of technology. Bent Stumpe’s much-lesser-known step in the evolution of modern computing unfolded a stone’s throw away, in a wooden hut within shouting distance of Berners-Lee’s nook. Yes, one of the earliest multitouch-capable devices was developed in the same environment—same institution, same setting—that the World Wide Web was born into, albeit nearly two decades earlier. A major leap of the iPhone was that it used multitouch to allow us to interact with the web’s bounty in a smooth, satisfying way. Yet there’s no plaque for a touchscreen—it’s just as invisible here as everywhere else. Stumpe’s screen is a footnote that even technology historians have to squint to see.

Then again, most touchscreen innovators remain footnotes. It’s a vital, underappreciated field, as ideas from remarkably disparate industries and disciplines had to flow together to bring multitouch to life. Some of the earliest touch-technology pioneers were musicians looking for ways to translate creative ideas into sounds. Others were technicians seeking more efficient ways to navigate data streams. An early tech “visionary” felt touch was the key to digital education. A later one felt it’d be healthier for people’s hands than keyboards. Over the course of half a century, impassioned efforts to improve creativity, efficiency, education, and ergonomics combined to push touch and, eventually, multitouch into the iPhone, and into the mainstream.

In the wake of Steve Jobs’s 2007 keynote, in which he claimed that Apple had invented multitouch, Bill Buxton’s in-box started filling up. “Can that be right?” “Didn’t you do something like that years ago?”

If there’s a generally recognized godfather of multitouch, it’s probably Buxton, whose research helped put him at the forefront of interaction design. Buxton worked at the famed Xerox PARC in Silicon Valley and experimented with music technology with Bob Moog, and in 1984, his team developed a tablet-style device that allowed for continuous, multitouch sensing. “A Multi-Touch Three Dimensional Touch-Sensitive Tablet,” a paper he co-authored at the University of Toronto in 1985, contains one of the first uses of the term.

Instead of answering each query that showed up in his email individually, Buxton compiled the answers to all of them into a single document and put it online. “Multitouch technologies have a long history,” Buxton explains. “To put it in perspective, my group at the University of Toronto was working on multitouch in 1984, the same year that the first Macintosh computer was released, and we were not the first.”

Who was, then? “Bob Boie, at Bell Labs, probably came up with the first working multitouch system that I ever saw,” he tells me, “and almost nobody knows it. He never patented it.” Like so many inventions, its parent company couldn’t quite decide what to do with it.

Before we get to multitouch prototypes, though, Buxton says, if we really want to understand the root of touch technology, we need to look at electronic music.

“Musicians have a longer history of expressing powerful creative ideas through a technological intermediary than perhaps any other profession that ever has existed,” Buxton says. “Some people would argue weapons, but they are perhaps less creative.” Remember Elisha Gray, one of Alexander Graham Bell’s prime telephone competitors? He’s seen as a father of the synth. That was back in the late nineteenth century. “The history of the synthesizer goes way back,” Buxton says, “and it goes way back in all different directions and it’s really hard to say who invented what.” There were different techniques used, he says, varying volume, pressure, or capacitance. “This is equally true in touchscreens,” he adds.

“It is certainly true that a touch from the sense of a human perspective—like what humans are doing with their fingers—was always part of a musical instrument. Like how you hit a note, how you do the vibrato with a violin string and so on,” Buxton says. “People started to work on circuits that were capable of capturing that kind of nuance. It wasn’t just, ‘Did I touch it or not?’ but ‘How hard did I touch it?’ and ‘If I move my fingers and so on, could it start to get louder?’”

One of the first to experiment with electronic, gesture-based music was Léon Theremin. The Russian émigré’s instrument—the theremin, clearly—was patented in 1928 and consisted of two antennas; one controlled pitch, the other loudness. It’s a difficult instrument to play, and you probably know it best as the generator of retro-spooky sound effects in old sci-fi films and psychedelic rock tunes. But in its day, it was taken quite seriously, at least when it was in the hands of its star player, the virtuosa Clara Rockmore, who recorded duets with world-class musicians like Sergey Rachmaninoff.

The theremin inspired Robert Moog, who would go on to create pop music’s most famous synthesizer. In addition to establishing a benchmark for how machines could interpret nuance when touched by human hands, he laid out a form for touchpads. “At the same time, Bob also started making touch-sensitive touchpads to drive synthesizers,” Buxton says. Of course, he wasn’t necessarily the first—one of his peers, the Canadian academic Hugh Le Caine, made capacitive-touch sensors. (Recall, that’s the more complex kind of touchscreen that works by sensing when a human finger creates a change in capacitance.) Then there was Don Buchla, the Berkeley techno-hippie who wired Ken Kesey’s bus for the Merry Prankster expeditions and who was also a synth innovator, but he’d make an instrument only for those he deemed worthy. They all pioneered capacitive-touch technology, as did Buxton, in their aural experiments.

The first device that we would recognize as a touchscreen today is believed to have been invented by Eric Arthur Johnson, an engineer at England’s Royal Radar Establishment, in 1965. And it was created to improve air traffic control.

In Johnson’s day, whenever a pilot called in an update to his or her flight plan, an air traffic controller had to type a five- to seven-character call sign into a teleprinter in order to enter it on an electronic data display. That extra step was time-consuming and allowed for user error.

A touch-based air traffic control system, he reckoned, would allow controllers to make changes to aircraft’s flight plans more efficiently.

Johnson’s initial touchscreen proposal was to run copper wires across the surface of a cathode-ray tube, basically creating a touchable TV. The system could register only one touch at a time, but the groundwork for modern touchscreens was there—and it was capacitive, the more complex kind of touchscreen that senses when a finger creates a change in capacitance, right from the start.

The touchscreen was linked to a database that contained all of the call signs of all the aircraft in a particular sector. The screen would display the call signs, “one against each touch wire.” When an aircraft called to identify itself, the controller would simply touch the wire against its call sign. The system would then offer the option to input only changes to the flight plan that were allowable. It was a smart way to reduce response times in a field where every detail counted—and where a couple of incorrect characters could result in a crash.

“Of course other possible applications exist,” Johnson wrote. For instance, if someone wanted to open an app on a home screen. Or had a particle accelerator to control.

For a man who made such an important contribution to technology, little is on the record about E. A. Johnson. So it’s a matter of speculation as to what made him take the leap into touchscreens. We do know what Johnson cited as prior art in his patent, at least: two Otis Elevator patents, one for capacitance-based proximity sensing (the technology that keeps the doors from closing when passengers are in the way) and one for touch-responsive elevator controls. He also named patents from General Electric, IBM, the U.S. military, and American Machine and Foundry. All six were filed in the early to mid-1960s; the idea for touch control was “in the air” even if it wasn’t being used to control computer systems.

Finally, he cites a 1918 patent for a “type-writing telegraph system.” Invented by Frederick Ghio, a young Italian immigrant who lived in Connecticut, it’s basically a typewriter that’s been flattened into a tablet-size grid so each key can be wired into a touch system. It’s like the analog version of your smartphone’s keyboard. It would have allowed for the automatic transmission of messages based on letters, numbers, and inputs—the touch-typing telegraph was basically a pre-proto–Instant Messenger. Which means touchscreens have been tightly intertwined with telecommunications from the beginning—and they probably wouldn’t have been conceived without elevators either.

E. A. Johnson’s touchscreen was indeed adopted by Britain’s air traffic controllers, and his system remained in use until the 1990s. But his capacitive-touch system was soon overtaken by resistive-touch systems, invented by a team under the American atomic scientist G. Samuel Hurst as a way to keep track of his research. Pressure-based resistive touch was cheaper, but it was inexact, inaccurate, and often frustrating—it would give touch tech a bad name for a couple of decades.

Back at CERN, I’m led through a crowded open hall—there’s some kind of conference in progress, and there are scientists everywhere—into a stark meeting room. Stumpe takes out a massive folder, then another, and then an actual touchscreen prototype from the 1970s.

The mood suddenly grows a little tense as I begin to realize that while Stumpe is here to make the case that his technology wound up in the iPhone, Mazur is here to make sure I don’t take that to be CERN’s official position. They spar—politely—over details as Stumpe begins to tell me the story of how he arrived at multitouch.

Stumpe was born in Copenhagen in 1938. After high school, he joined the Danish air force, where he studied radio and radar engineering. After the service, he worked in a TV factory’s development lab, tinkering with new display technologies and prototypes for future products. In 1961, he landed a job at CERN. When it came time for CERN to upgrade its first particle accelerator, the PS (Proton Synchrotron), to the Super PS, it needed a way to control the massive new machine. The PS had been small enough that each piece of equipment that was used to set the controls could be manipulated individually. But the PS measured a third of a mile in circumference—the SPS was slated to run 4.3 miles.

“It was economically impossible to use the old methods of direct connections from the equipment to the control room by hardwire,” Stumpe says. His colleague Frank Beck had been tasked with creating a control system for the new accelerator. Beck was aware of the nascent field of touchscreen technology and thought it might work for the SPS, so he went to Stumpe and asked him if he could think of anything.

“I remembered an experiment I did in 1960 when I worked in the TV lab,” Stumpe says. “When observing the time it took for the ladies to make the tiny coils needed for the TV, which was later put on the printed circuit board for the TV set, I had the idea that there might be a possibility to print these coils directly on the printed circuit board, with considerable cost savings as a result.” He figured the concept could work again. “I thought if you could print a coil, you could also print a capacitor with very tiny lines, now on a transparent substrate”—like glass—“and then incorporate the capacitor to be a part of an electronic circuit, allowing it to detect a change in capacity when the glass screen was touched by a finger.… With some truth you can say that the iPhone touch technology goes back to 1960.”

In March 1972, in a handwritten note, he outlined his proposal for a capacitive-touch screen with a fixed number of programmable buttons. Together, Beck and Stumpe drafted a proposal to give to the larger group at CERN. At the end of 1972, they announced the design of the new system, centered on the touchscreen and minicomputers. “By presenting successive choices that depend on previous decisions, the touch screen would make it possible for a single operator to access a large look-up table of controls using only a few buttons,” Stumpe wrote. The screens would be built on cathode-ray tubes, just like TVs.

CERN accepted the proposal. The SPS hadn’t been built yet, but work had to start, so its administrators set him up with what was known as a “Norwegian barrack”—a makeshift workshop erected on the open grass. The whole thing was about twenty square meters. Concept in hand, Stumpe tapped CERN’s considerable resources to build a prototype. Another colleague had mastered a new technique known as ion sputtering, which allowed him to deposit a layer of copper on a clear and flexible Mylar sheet. “We worked together to create the first basic materials,” Stumpe says. “That experiment resulted in the first transparent touch capacitor being embedded on a transparent surface.”

His sixteen-button touchscreen controls became operational in 1976, when the SPS went online. And he didn’t stop working on touch tech there—eventually, he devised an updated version of his screen that would register touches much more precisely along wires arranged in an x- and y-axis, making it capable of something closer to the modern multitouch we know today. The SPS control, he says, was capable of multitouch—it could register up to sixteen simultaneous impressions—but programmers never made use of the potential. There simply wasn’t a need to. Which is why his next-generation touchscreen didn’t get built either.

“The present iPhones are using a touch technology which was proposed in this report here in 1977,” Stumpe says, pointing to a stapled document.

He built working prototypes but couldn’t gin up institutional support to fund them. “CERN told me kindly that the first screens worked fine, and why should we pay for research for the other ones? I didn’t pursue the thing.” However, he says, decades after, “when businesses needed to put touchscreens on mobile phones, of course people dipped into the old technology and thought, Is this a possibility? Industry built on the previous experience and built today what is iPhone technology.”

So touch tech had been developed to manipulate music, air traffic, and particle accelerators. But the first “touch”-based computers to see wide-scale use didn’t deploy proper touchscreens at all—yet they’d be crucial in promoting the concept of hands-on computing. And William Norris, the CEO of the supercomputer firm Control Data Corporation (CDC), embraced them because he believed touching screens was the key to a digital education.

Bill Buxton calls Norris “this amazing visionary you would never expect from the seventies when you think about how computers were at the time”—i.e., terminals used for research and business. “At CDC, he saw the potential of touchscreens.” Norris had experienced something of an awakening after the 1967 Detroit riots, and he vowed to use his company—and its technology—as an engine for social equality. That meant building manufacturing plants in economically depressed areas, offering day care for workers’ children, providing counseling, and offering jobs to the chronically unemployed. It also meant finding ways to give more people access to computers, and finding ways to use technology to bolster education. PLATO fit the bill.

Programmed Logic for Automatic Teaching Operations was an education and training system first developed in 1960. The terminal monitors had the distinctive orange glow of the first plasma-display panels. By 1972, the PLATO IV had a “touch” screen and an elaborate, programmable interface designed to provide digital education courses. PLATO IV’s screen itself didn’t register touch; rather, it had light sensors mounted along each of its four sides, so the beams covered the entire surface. Thus, when you touched a certain point, you interrupted the light beams on the grid, which would tell the computer where your finger was. Norris thought the system was the future. The easy, touch-based interaction and simple, interactive navigation meant that a lesson could be beamed in to anyone with access to a terminal.

Norris “commercialized PLATO, but he deployed these things in classrooms from K through twelve throughout the state. Not every school, but he was putting computers in the classroom—more than fifteen years before the Macintosh came out—with touchscreens,” Buxton says, comparing Norris’s vision to Jobs’s. “More than that, this guy wrote these manifestos about how computers are going to revolutionize education.… It’s absolutely inconceivable! He actually puts money where his mouth is in a way that almost no major corporation has in the past.”

Norris reportedly sank nine hundred million dollars into PLATO, and it took nearly two decades before the program showed any signs of turning even a small profit. But the PLATO system had ushered in a vibrant early online community that in many ways resembled the WWW that was yet to come. It boasted message boards, multimedia, and a digital newspaper, all of which could be navigated by “touch” on a plasma display—and it promulgated the concept of a touchable computer. Norris continued to market, push, and praise PLATO until 1984, when CDC’s financial fortunes began to flag and its board urged him to step down. But with Norris behind it, PLATO spread to universities and classrooms across the country (especially in the Midwest) and even abroad. Though PLATO didn’t have a true touchscreen, the idea that hands-on computing should be easy and intuitive was spread along with it.

The PLATO IV system would remain in use until 2006; the last system was shut down a month after Norris passed away.

There’s an adage that technology is best when it gets out of the way, but multitouch is all about refining the way itself, improving how thoughts, impulses, and ideas are translated into computer commands. Through the 1980s and into the 1990s, touch technology continued to improve, primarily in academic, research, and industrial settings. Motorola made a touchscreen computer that didn’t take off; so did HP. Experimentation with human-machine interfaces had grown more widespread, and multitouch capabilities on experimental devices like Buxton’s tablet at the University of Toronto were becoming more fluid, accurate, and responsive.

But it’d take an engineer with a personal stake in the technology—a man plagued by persistent hand injuries—to craft an approach to multitouch that would finally move it into the mainstream. Not to mention a stroke of luck or two to land it in the halls of one of the biggest technology companies in the world.

In his 1999 PhD dissertation, “Hand Tracking, Finger Identification, and Chordic Manipulation on a Multi-Touch Surface,” Wayne Westerman, an electrical engineering graduate student at the University of Delaware, included a strikingly personal dedication.

This manuscript is dedicated to:

My mother, Bessie,

who taught herself to fight chronic pain in numerous and clever ways,

and taught me to do the same.

Wayne’s mother suffered from chronic back pain and was forced to spend much of her day bedridden. But she was not easily discouraged. She would, for instance, gather potatoes, bring them to bed, peel them lying down, and then get back up to put them on to boil in order to prepare the family dinner. She’d hold meetings in her living room—she was the chair of the American Association of University Women—over which she would preside while lying on her back. She was diligent, and she found ways to work around her ailment. Her son would do the same. Without Wayne’s—and Bessie’s—tactical perseverance in the face of chronic pain, in fact, multitouch might never have made it to the iPhone.

Westerman’s contribution to the iPhone has been obscured from view, due in no small part to Apple’s prohibitive nondisclosure policies. Apple would not permit Westerman to be interviewed on the record. However, I spoke with his sister, Ellen Hoerle, who shared the Westerman family history with me.

Born in Kansas City, Missouri, in 1973, Wayne grew up in the small town of Wellington, which is about as close to the actual middle of America as you can possibly get. His sister was ten years older. Their parents, Bessie and Howard, were intellectuals, a rare breed in Wellington’s rural social scene. Howard, in fact, was pushed out of his first high-school teaching job for insisting on including evolution in the curriculum.

Early on, Wayne showed an interest in tinkering. “They bought him just about every Lego set that had ever been created,” Hoerle says, and his parents started him on piano when he was five. Tinkering and piano, she says, are the two things that opened up his inventive spirit. They’d set up an electric train set in the living room, where it’d run in a loop, winding through furniture and around the room. “They thought, This kid’s a genius,” Hoerle says. And Wayne was indeed excelling. “I could tell when he was five years old that he could learn faster than some of my peers,” she recalls. “He just picked things up so much faster than everybody else. They had him reading the classics and subscribed to Scientific American.”

Bessie had to have back surgery, which marked the beginning of a lifelong struggle. “That’s another thing that was very important about our family. A year after that, she basically became disabled with chronic pain,” Hoerle says. Ellen, now a teenager, took charge of “the physical side of motherhood” for Wayne. “I had to kind of raise him. I had to keep him out of trouble.”

When Ellen went off to college, it left her brother isolated. He already didn’t relate particularly well to other kids, and now he had to do the household work his sister used to. “Cooking, cleaning, sorting laundry, all things he had to take over when he was eight.” By his early teens, Westerman was trying to invent things of his own, working with the circuits and spare parts at his father’s school. His dad bought kits designed to teach children about circuits and electricity, and Wayne would help repair the kits, which the high-school kids tore through.

He graduated valedictorian and accepted a full ride to Purdue. There, he was struck by tendinitis in his wrists, a repetitive strain injury that would afflict him for much of his life. His hands started to hurt while he was working on papers, sitting perched in front of his computer for hours. Instead of despairing, he tried to invent his way to a solution. He took the special ergonomic keyboards made by a company called Kinesis and attached rollers that enabled him to move his hands back and forth as he typed, reducing the repetitive strain. It worked well enough that he thought it should be patented; the Kansas City patent office thought otherwise. Undeterred, Wayne trekked to Kinesis’s offices in Washington, where the execs liked the concept but felt, alas, that it would be too expensive to manufacture.

He finished Purdue early and followed one of his favorite professors, Neal Gallagher, to the University of Delaware. At the time, Wayne was interested in artificial intelligence, and he set out to pursue his PhD under an accomplished professor, Dr. John Elias. But as his studies progressed, he found it difficult to narrow his focus.

Meanwhile, Westerman’s repetitive strain injuries had returned with a vengeance. Some days he physically couldn’t manage to type much more than a single page.

“I couldn’t stand to press the buttons anymore,” he’d say later. (Westerman has only given a handful of interviews, most before he joined Apple, and the quotes that follow are drawn from them.) Out of necessity, he started looking for alternatives to the keyboard. “I noticed my hands had much more endurance with zero-force input like optical buttons and capacitive touch pads.”

Wayne started thinking of ways he could harness his research to create a more comfortable work surface. “We began looking for one,” he said, “but there were no such tablets on the market. The touch pad manufacturers of the day told Dr. Elias that their products could not process multi-finger input.

“We ended up building the whole thing from scratch,” Westerman said. They shifted the bulk of their efforts to building the new touch device, and he ended up “sidetracked” from the original dissertation topic, which had been focused on artificial intelligence. Inspiration had struck, and Wayne had some ideas for how a zero-force, multi-finger touchpad might work. “Since I played piano,” he said, “using all ten fingers seemed fun and natural and inspired me to create interactions that flowed more like playing a musical instrument.”

Westerman and Elias built their own key-free, gesture-recognizing touchpad. They used some of the algorithms they developed for the AI project to recognize complex finger strokes and multiple strokes at once. If they could nail this, it would be a godsend for people with RSIs, like Wayne, and perhaps a better way to input data, period.

But it struck some of their colleagues as a little odd. Who would want to tap away for an extended period on a flat pad? Especially since keyboards had already spent decades as the dominant human-to-computer input mechanism. “Our early experiments with surface typing for desktop computers were met with skepticism,” Westerman said, “but the algorithms we invented helped surface typing feel crisp, airy, and reasonably accurate despite the lack of tactile feedback.”

Dr. Elias, his adviser, had the skill and background necessary to translate Wayne’s algorithmic whims into functioning hardware. Neal Gallagher, who’d become chair of the department, ensured that the school helped fund their early prototypes. And Westerman had received support from the National Science Foundation to boot.

Building a device that enabled what would soon come to be known as multitouch took over Westerman’s research and became the topic of his dissertation. His “novel input integration technique” could recognize both single taps and multiple touches. You could switch seamlessly between typing on a keyboard and interacting with multiple fingers with whatever program you were using. Sound familiar? The keyboard’s there when you need it and out of the way when you don’t.

But Wayne’s focus was on building an array of gestures that could replace the mouse and keyboard. Gestures like, say, pinching the pad with your finger and thumb to—okay, cut at the time, not zoom. Rotating your fingers to the right to execute an open command. Doing the same to the left to close. He built a glossary of those gestures, which he believed would help make the human-computer interface more fluid and efficient.

Westerman’s chief motivation was still improving the hand-friendliness of keyboards; the pad demanded less repetition and lighter keystrokes. The ultimate proof was in the three-hundred-plus-page dissertation itself, which Wayne had multitouched to completion. “Based upon my daily use of a prototype to prepare this document,” he concluded, “I have found that the [multitouch surface] system as a whole is nearly as reliable, much more efficient, and much less fatiguing than the typical mouse-keyboard combination.” The paper was published in 1999. “In the past few years, the growth of the internet has accelerated the penetration of computers into our daily work and lifestyles,” Westerman wrote. That boom had turned the inefficiencies of the keyboard into “crippling illnesses,” he went on, arguing, as Apple’s ENRI team would, that “the conventional mechanical keyboard, for all of its strengths, is physically incompatible with the rich graphical manipulation demands of modern software.” Thus, “by replacing the keyboard with a multitouch-sensitive surface and recognizing hand motions… hand-computer interaction can be dramatically transfigured.” How right he was.

The success of the dissertation had energized both teacher and student, and Elias and Westerman began to think they’d stumbled on the makings of a marketable product. They patented the device in 2001 and formed their company, FingerWorks, while still under the nurturing umbrella of the University of Delaware. The university itself became a shareholder in the start-up. This was years before incubators and accelerators became buzzwords—outside of Stanford and MIT, there weren’t a lot of universities providing that sort of support to academic inventors.

In 2001, FingerWorks released the iGesture NumPad, which was about the size of a mousepad. You could drag your fingers over the pad, and sensors would track their movements; gesture recognition was built in. The pad earned the admiration of creative professionals, with whom it found a small user base. It made enough of a splash that the New York Times covered the release of FingerWorks’ second product: the $249 TouchStream Mini, a full-size keyboard replacement made up of two touchpads, one for each hand.

“Dr. Westerman and his co-developer, John G. Elias,” the newspaper of record wrote, “are trying to market their technology to others whose injuries might prevent them from using a computer.” Thing was, they didn’t have a marketing department.

Nonetheless, interest in the start-up slowly percolated. They were selling a growing number of pads through their website, and their users were more than just dedicated; they took to calling themselves Finger Fans and started an online message board by the same name. Still, at that point, FingerWorks had sold only around fifteen hundred touchpads.

At an investment fair in Philadelphia, they caught the attention of a local entrepreneur, Jeff White, who had just sold his biotech company. He approached the company’s booth. “So I said, ‘Show me what you have,’” White later said in an interview with Technical.ly Philly. “He put his hand on his laptop and right away, I got it… Right away I got the impact of what they were doing, how breakthrough it was.” They told him they were looking for investors.

“With all due respect,” White told them, “you don’t have a management team. You don’t have any business training. If you can find a management team, I’ll help you raise the rest of the money.” According to White, the FingerWorks team essentially said, Well, you just sold your company—why not come run ours? He said, “Make me a cofounder and give me founder equity,” and he’d work the way they did—he wouldn’t take a salary. “It was the best decision I ever made,” he said.

White hatched a straightforward strategy. Westerman had carpal tunnel syndrome, so his primary aim was to help people with hand disabilities. “Wayne had a very lofty and admirable goal,” White said. “I just want to see it on as many systems as possible and make some money on it. So I said, ‘If we sold the company in a year, you’d be OK with that?’” White set up meetings with the major tech giants of the day—IBM, Microsoft, NEC, and, of course, Apple. There was interest, but none pulled the trigger.

Meanwhile, FingerWorks continued its gradual ascent; its customer base of Finger Fans expanded and the company began collecting mainstream accolades. At the beginning of 2005, FingerWorks’ iGesture pad won the Best of Innovation award at CES, the tech industry’s major annual trade show.

Still, at the time, Apple execs weren’t convinced that FingerWorks was worth pursuing—until the ENRI group decided to embrace multitouch. Even then, an Apple insider familiar with the deal tells me that the executives gave FingerWorks a lowball offer, and the engineers initially said no. Steve Hotelling, the head of the input group, had to personally call them up and make his case, and eventually they came around.

“Apple was very interested in it,” White said. “It turned from a licensing deal to an acquisition deal pretty quickly. The whole process took about eight months.”

As part of the deal, Wayne and John would head west to join Apple full-time. Apple would obtain their multitouch patents. Jeff White, as co-founder, would enjoy a considerable windfall. But Wayne had some reservations about selling FingerWorks to Apple, his sister suggests. Wayne very much believed in his original mission—to offer the many computer users with carpal tunnel or other repetitive strain injuries an alternative to keyboards. He still felt that FingerWorks was helping to fill a void and that in a sense he’d be abandoning his small but passionate user base.

Sure enough, when FingerWorks’ website went dark in 2005, a wave of alarm went through the Finger Fans community.

One user, Barbara, sent a message to the founder himself and then posted to the group.

When the iPhone was announced in 2007, everything suddenly made sense. Apple filed a patent for a multitouch device with Westerman’s name on it, and the gesture-controlled multitouch technology was distinctly similar to FingerWorks’. A few days later, Westerman underlined that notion when he gave a Delaware newspaper his last public interview: “The one difference that’s actually quite significant is the iPhone is a display with the multi-touch, and the FingerWorks was just an opaque surface,” he said. “There’s definite similarities, but Apple’s definitely taken it another step by having it on a display.”

The discontinued TouchStream keyboards became highly sought after, especially among users with repetitive strain injuries. On a forum called Geekhack, one user, Dstamatis, reported paying $1,525 for the once-$339 keyboard: “I’ve used Fingerworks for about 4 years, and have never looked back.” Passionate users felt that FingerWorks’ pads were the only serious ergonomic alternative to keyboards, and now that they’d been taken away, more than a few Finger Fans blamed Apple. “People with chronic RSI injuries were suddenly left out in the cold, in 2005, by an uncaring Steve Jobs,” Dstamatis wrote. “Apple took an important medical product off the market.”

No major product has emerged to serve RSI-plagued computer users, and the iPhone and iPad offer only a fraction of the novel interactivity of the original pads. Apple took FingerWorks’ gesture library and simplified it into a language that a child could understand—recall that Apple’s Brian Huppi had called FingerWorks’ gesture database an “exotic language”—which made it immensely popular. Yet if FingerWorks had stayed the course, could it have taught us all a new, richer language of interaction? Thousands of FingerWorks customers’ lives were no doubt dramatically improved. In fact, the ENRI crew at Apple might never have investigated multitouch in the first place if Tina Huang hadn’t been using a FingerWorks pad to relieve her wrist pain. Then again, the multitouch tech Wayne helped put into the iPhone now reaches billions of people, as it’s become the de facto language of Android, tablets, and trackpads the world over. (It’s also worth noting that the iPhone would come to host a number of accessibility features, including those that assist the hearing and visually impaired.)

Wayne’s mother passed away from cancer in 2009. His father passed a year later. Neither owned an iPhone—his father refused to use cell phones as a matter of principle—though they were proud of their son’s achievements. In fact, so is all of Wellington. Ellen Hoerle says the small town regards Wayne as a local hero.

Like his mother, Wayne had found a clever way around chronic pain. In the process, he helped, finally, usher in the touchscreen as the dominant portal to computers, and he wrote the first dictionary for the gesture-based language we all now speak.

Which brings us back to Jobs’s claim that Apple invented multitouch. Is there any way to support such a claim? “They certainly did not invent either capacitive-touch or multitouch,” Buxton says, but they “contributed to the state of the art. There’s no question of that.” And Apple undoubtedly brought both capacitive touchscreens and multitouch to the forefront of the industry.

Apple tapped half a century’s worth of touch innovation, bought out one of its chief pioneers, and put its own formidable spin on its execution. Still, one question remains: Why did it take so long for touch to become the central mode of human-machine interaction when the groundwork had been laid decades earlier? “It always takes that long,” Buxton says. “In fact, multitouch went faster than the mouse.”

Buxton calls this phenomenon the Long Nose of Innovation, a theory that posits, essentially, that inventions have to marinate for a couple of decades while the ecosystems and technologies necessary to make them appealing or useful develop around them. The mouse didn’t go mainstream until the arrival of Windows 95. Before that, most people used the keyboard to type commands into DOS, or, more likely, they didn’t use a computer at all.

“The iPhone made a quantum leap in terms of being the first really successful digital device that had, for all intents and purposes, an analog interface,” Buxton says. He gets poetic when describing how multitouch translates intuitive movements into action: “Up until that point, you poked, you prodded, you bumped, you did all this stuff, but nothing flowed, nothing was animated, nothing was alive, nothing flew. You didn’t caress, you didn’t stroke, you didn’t fondle. You just push. You poke, poke, poke, and it went blip, flip, flip. Things jumped; they didn’t flow.”

Apple made multitouch flow, but they didn’t create it. And here’s why that matters: collectives, teams of multiple inventors, build on a shared history. That’s how a core, universally adopted technology emerges—in this case, by way of boundary-pushing musical experimenters; smart, innovative engineers with an eye for efficiency; idealistic, education-obsessed CEOs; and resourceful scientists intent on creating a way to transcend their own injuries.

“The thing that concerns me about the Steve Jobs and Edison complex—and there are a lot of people in between and those two are just two of the masters—what worries me is that young people who are being trained as innovators or designers are being sold the Edison myth, the genius designer, the great innovator, the Steve Jobs, the Bill Gates, or whatever,” Buxton says. “They’re never being taught the notion of the collective, the team, the history.”

Back at CERN, Bent Stumpe made an impressively detailed case that his inventions had paved the way for the iPhone. The touchscreen report was published in 1973, and a year later, a Danish firm began manufacturing touchscreens based on the schematic. An American magazine ran a feature about it, and hundreds of requests for information poured in from the biggest tech companies of the day. “I went to England, I went to Japan, I went all over and installed things related to the CERN development,” Stumpe says. It seems entirely plausible that Stumpe’s innovations were absorbed into the touchscreen bloodstream without anyone giving him credit or recompense. Then again, as with most sapling technologies, it’s almost impossible to tell which was first, or concurrent, or foundational.

After the tour, Stumpe invites me back to his home. As we leave, we watch a young man slinking down the sidewalk, head bent over his phone. Stumpe laughs and shakes his head with a sigh as if to say, All this for that?

All this for that, maybe. One of the messy things about dedicating your life to innovation—real innovation, not the buzzword deployed by marketing departments—is that, more often than not, it’s hard to see how, or if, those innovations play out. An innovation may feed into a web so thick that any individual thread is inscrutable, or it may simply contribute to the richness of the ideas “in the air.” Johnson, Theremin, Norris, Moog, Stumpe, Buxton, Westerman—and the teams behind them—who’s to say how the iPhone’s interface would feel without any one of their contributions? Of course, it takes another set of skills entirely to develop a technology into a product that’s universally desirable, and to market, manufacture, and distribute that product—all of which Apple happens to excel at.

But imagine watching the rise of the smartphone and the tablet, watching the world take up capacitive touchscreens, watching a billionaire CEO step out onto a stage and say his company invented them—thirty years after you were certain you proved the concept. Imagine watching that from the balcony of your third-floor one-bedroom apartment in the suburbs of Geneva that you rent with your pension and having proof that your DNA is in the device but finding that nobody seems to care. That kind of experience, I’m afraid, is the lot of the majority of inventors, innovators, and engineers whose collective work wound up in products like the iPhone.

We aren’t great at conceiving of technologies, products, even works of art as the intensely multifaceted, sometimes generationally collaborative, efforts that they tend to be. Our brains don’t tidily compute such ecosystemic narratives. We want eureka moments and justified millionaires, not touched pioneers and intangible endings.