The development of the notebook computer offers numerous future-making lessons. The initial concept was quite ambitious and far out, but not too far out—important aspects could be accomplished, while much was still left to strive for. The idea was based on an educational philosophy, was human centered and formed with people’s activities and thinking in mind, and was developed with an awareness of important precomputer technologies for thinking, such as paper notebooks. The people developing the vision, and the main person who framed it, acknowledged that the project would have an influence on how people did intellectual work. They looked for signs of their work’s influence and strove to make it positive.
Just as one important digital media imagination was of the hypertextual network, another, related to it and compatible with it, was that of a computer that is personal, portable, and open to exploration and experimentation. This concept of a book-like computer, a notebook computer, is largely due to Alan Kay, who called the system he imagined a Dynabook.1
People often view new media pioneers as seers, uncannily accurate predictors of the future. This is a way of revering these innovators, but it doesn’t capture everything about their role as future-makers. Alan Kay has been described (even by me) as having “foretold” the current state of notebook computing. But Kay and his collaborators didn’t really foretell it—they made the future we now live in, one in which we have computers as ready to hand, and as well designed, as books.
In the 1950s the computer was conceptualized by many experts as a building-sized “giant brain.” Today, while we have rack-mounted units, boxes made to sit on or under desks, and tiny embedded systems, when someone speaks of a computer, and particularly of “my computer,” they often mean something the size, shape, and heft of a book. While people seldom shelve their notebook computers with paper notebooks and other hardbacks, the metaphor of the computer as book has been a powerful one in reframing the way we think about computation and the purposes to which it can be put.
Other important technologies we frequently encounter have changed in scale and situation to some extent. The airplanes most people fly on are significantly larger than the first one put together by those bicycle-builders, the Wright Brothers, and they’re also enclosed, made of different materials, propelled differently, and part of a network of regulation, of the architectural and organizational systems of airports, and of industry. Other transportation technologies have changed less in the same time, but may be on the brink of significant change. If people have their cars driven by Google’s corporate computing power, rather than by human drivers, our relationship to the automobile could certainly be transformed. But at this point, transportation and mobility technologies haven’t been reshaped to the extent that general-purpose computing has been.
To understand the transformations of computing, and the idea of personal computing specifically, it is useful to consider the two eras that preceded personal computing: The batch era, corresponding to mainframe computing, and the time-sharing era, in which the minicomputer was emblematic. All three computing paradigms (batch, time-sharing, and personal computing) overlap in various ways. There are still mainframe computers in use today, for instance, and we have “batch jobs” that run on today’s personal computers. These three paradigms nevertheless represent different practices that predominated during the history of computing, shaping the way people thought about computing and imagined its possibilities. As computer use moved through these phases, anxieties about what computing would bring, and hopes for its future, also shifted.
The main concept in batch processing was grouping similar types of work together and doing it all at once. This means breaking down tasks so that the components of a task can all be done together, as with grading everyone’s first exam question before moving on to the next one. To “batch” work together in this way wasn’t an idea original to a certain type of computer, or to digital computing at all. Decades earlier, work was structured like this, as Martin Campbell-Kelly and William Aspray explain: “The reason for this largely manual enterprise—it could have been run on almost identical lines in the 1890s—was that all existing data-processing technologies, whether punched-card machines or computers, operated in what was known as batch-processing mode. Transactions were first batched in hundreds or thousands, then sorted, then processed, and finally totaled, checked, and dispatched.”2 It wasn’t only business processes that were done in batches, but also the computation of tables for various purposes: scientific, or in some cases military—for example, the firing tables that the famous early computer ENIAC, developed at the University of Pennsylvania, first computed. Rather than see batch processing as an outcome of the mainframe computer, it’s more sensible to see the development of the mainframe as prompted by this batch mode of working.
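The pattern Campbell-Kelly and Aspray describe can be sketched in a few lines of modern Python. This is not historical code, just an illustration of the batch idea: records are collected, sorted, processed in a single pass, and finally totaled, with no interaction along the way.

```python
def run_batch(transactions):
    """Process a whole batch of (account, amount) records at once."""
    batch = sorted(transactions)          # the batch is sorted first
    totals = {}
    for account, amount in batch:         # then processed in one pass
        totals[account] = totals.get(account, 0) + amount
    grand_total = sum(totals.values())    # finally totaled and checked
    return totals, grand_total

# A toy ledger of transactions, batched and run together:
ledger = [("B", 20), ("A", 5), ("B", -3), ("A", 10)]
totals, grand = run_batch(ledger)
print(totals)   # -> {'A': 15, 'B': 17}
print(grand)    # -> 32
```

The essential point is that the entire workload exists before processing begins; correcting a single record means assembling and submitting the whole batch again, just as a single mispunched card did.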
Programming in the batch era was an unforgiving, bureaucratic process that required the programmer to more or less fully plan and complete work beforehand. Programmers needed to write programs and use a keypunch machine (or have someone else use a keypunch machine on their behalf) to prepare a deck of punched cards, which would then be taken to the window of the room where the mainframe was located. After handing the cards over to priestly attendants, the programmer would need to wait for the submitted job to have its turn; the output would then be provided on paper. An error of a single character would still require use of the keypunch machine to correct, and might require the programmer to wait several more hours, since access to the computer was regulated.
A major new aspect of interactive computing was that a single computer served a large number of users, typing and reading at terminals, all at once. In a way, the concept of the batch process was similar—the computer would take a set of punched-card jobs and run them one after the other. But interactive computing transformed this process so that people could write and edit programs while actually using the computer, rather than writing code in pencil and having it keypunched in. Essentially, the programmer was no longer waiting on the machine; the machine was waiting on the programmer.
Time-sharing was the era in which BASIC was developed, although many know the language from microcomputers. Unlike COBOL and Fortran, which were designed for punched cards, BASIC was one of the languages made for interactive use—so that programs could be typed, tried out, and revised. BASIC, with its origins in time-sharing, also became the lingua franca of the home computer era.3
While time-sharing brought many benefits, computing pioneers saw an opportunity for further ways that digital media could extend and amplify human thought and better serve people. The ideas of personal computing, present in some respects by the end of World War II in Vannevar Bush’s vision of the Memex, were essential to Alan Kay’s computer-as-book concept. With this sketch of the first two important eras in computing history, batch processing and time-sharing, we are ready to move to consider Kay’s vision, a powerful formulation of the idea of personal computing.
At the risk of reducing Kay’s important invention to a manifesto, let’s consider a few influential concepts of the Dynabook. It was for everyone, was personal and owned by the user, was portable, was a metamedium, and supported truly personal and reflective work.
An important idea about computing expressed in the Dynabook is that it is for “children of all ages,” not specifically experts, researchers, businesspeople, or the like. Perhaps this system wasn’t truly for everyone, since it wasn’t an instrumental, single-purpose tool for those interested only in tool use, but it was for all those willing to make play part of their computing practice.
Kay conceptualized the Dynabook as a general-purpose computer that the user would own: “What then is a personal computer? One would hope that it would be both medium for containing and expressing arbitrary symbolic notions, and also a collection of useful tools for manipulating these structures, with ways to add new tools to the repertoire. ... ‘Personal’ also means owned by its user.”4
In retrospect, that’s exactly what a personal computer is, but this was not an obvious plan for computing at the beginning of the 1970s. As described, another computing revolution had recently taken hold, supplanting the batch-mode system of cards, keypunch machines, climate-controlled mainframes, and operators and machines running indifferently on schedules. Instead of waiting on the computer, the computer waited on the programmer, slicing up its time between many people at different terminals. Interactive programming—changing a program on the fly to fix a bug or experiment with a different approach—finally became practical, and a great advance was made over the first way the practice of programming had been conducted.
In formulating the Dynabook, Kay saw that his demanding users and programmers—young children were his initial focus, although he imagined the system being used by all—would need more power than time-sharing systems could provide if they were to truly use the computer to do general-purpose programming. It would also be necessary to take full advantage of the computer’s ability as a metamedium, its capability for simulating other media. As Kay wrote, “This means that we should either build a new resource several hundred times the capacity of current machines and share it (very difficult and expensive), or we should investigate the possibility of giving each person his own powerful machine. We chose the second approach.”5 Even if it was less difficult and expensive, it didn’t prove possible to pack the capabilities Kay had specified into a form like today’s notebook computer or tablet. They could be realized on a desktop computer, an “interim Dynabook,” however, and they were. The system’s workings were developed in this sort of personal form. While that meant setting aside the goal of portability, increasing miniaturization of course eventually brought that within reach.
The computer, as Dynabook, is not just a processor of information, but a medium, and indeed a metamedium, one that can serve as all other media:
The essence of a medium is very much dependent on the way messages are embedded, changed, and viewed. Although digital computers were originally designed to do arithmetic computation, the ability to simulate the details of any descriptive model means that the computer, viewed as a medium itself, can be all other media if the embedding and viewing methods are sufficiently well provided. Moreover, this new “metamedium” is active—it can respond to queries and experiments.6
Furthermore, the Dynabook is not just for communication and information access, but for personal reflection: “Although it can be used to communicate with others through the ‘knowledge utilities’ of the future such as a school ‘library’ (or business information system), we think that a large fraction of its use will involve reflexive communication of the owner with himself through this personal medium, much as paper and notebooks are currently used.”7 Although there may be elements of this sort of personal use in the Memex concept, Kay turned things around so that what we normally consider the expertise and purpose of computers (information access) is deemphasized in favor of reflection.
Among other things, the Dynabook reframed the purpose of the computer in humanistic terms. It was understood as being for learning, exploration, “creative thought,” reflexive writing, and communication with one’s self as well as with others.
The unusual status of the Dynabook makes it difficult to understand historically. In part, it was the plan used over decades to develop the notebook computer and the tablet—the important hardware aspects of the project were realized by many others. It was a vision for flexible, educational engagement with computing that continues to this day. And, it was a working software system, featuring the graphical user interface (GUI) in largely the same form it is used today and offering the capabilities of object-oriented programming via Smalltalk.
Was the Dynabook “unrealized,” as people often write of it? The answer is yes, no, no, and no. First, the full vision of the project as a system for learning, within culture and society, has indeed never been completely realized in the form of a particular product. Second, the most fundamental aspect of the project—personal computing—was realized in important ways back in the 1970s. Third, the core software of the project was also realized—it was developed by Kay and his collaborators. And, finally, fourth, the essential hardware was realized too, over a longer time span, and by many others working in different contexts, although Kay was also involved in this development.
Having looked at the Dynabook at a high level, it’s now important to dig into some details and see how the early software, running on the interim Dynabook, worked. The Dynabook is a powerful idea, but also was a technological system with working components. Children did use the interim Dynabook to program, learn, and create. And from looking at this history, we can see how the future that we’re living in—the era of personal computing and notebook computing—was built.
The Dynabook was given that name in 1972, but Kay had started work on a portable computer at Xerox PARC in 1970. It was called “KiddiKomp,” and was based on an earlier concept of Kay’s, from 1968. While the sort of book-sized computer that Kay envisioned wasn’t made in the 1970s, there was a first approximation to the Dynabook made in prototype by Xerox in 1978, a computer called the NoteTaker. It weighed almost fifty pounds, but, as it ran on battery power, could be taken aboard a plane and used in flight—indeed, it was. While only ten were made, the computer served to inspire mass-manufactured systems such as the Osborne 1. Xerox’s portable system ran Smalltalk.
The two software contributions that stand out from Kay’s Dynabook research are Smalltalk (an operating system and programming language that developed many of the principles of object-oriented programming) and the graphical user interface. Both of these existed in an early form in Ivan Sutherland’s Sketchpad, as Kay often points out. The programming language SIMULA, another precedent, provided a more refined object-oriented language. Kay’s contributions were critical to developing these computing concepts, which still serve many purposes today. As the interim Dynabook was being developed, the main purpose it served was play, exploration, and user-directed learning.
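The central object-oriented idea that Smalltalk developed—objects that carry their own data and respond to messages—can be loosely sketched in modern Python (not actual Smalltalk or Squeak code). The Turtle class here is a hypothetical example in the spirit of the children's drawing tools, echoing the Logo-style graphics used in that research:

```python
import math

class Turtle:
    """A drawing object that responds to "forward" and "turn" messages."""
    def __init__(self):
        self.x, self.y, self.heading = 0.0, 0.0, 0.0
        self.trail = []   # line segments drawn so far

    def forward(self, distance):
        # Respond to the "forward" message: move and record a line segment.
        nx = self.x + distance * math.cos(math.radians(self.heading))
        ny = self.y + distance * math.sin(math.radians(self.heading))
        self.trail.append(((self.x, self.y), (nx, ny)))
        self.x, self.y = nx, ny

    def turn(self, degrees):
        # Respond to the "turn" message: rotate the heading.
        self.heading = (self.heading + degrees) % 360

t = Turtle()
for _ in range(4):    # draw a square by sending messages to the object
    t.forward(10)
    t.turn(90)
print(len(t.trail))   # -> 4
```

The user never manipulates the turtle's coordinates directly; everything happens by sending messages, and a new kind of object—a musical voice, a circuit component—extends the system in exactly the same uniform way.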
Smalltalk had children as users beginning in 1973, when Adele Goldberg joined Kay from Stanford University and began to teach children how to program in the language. Early on, the children programmed not only animations and demos, but also complete tools for drawing, illustration, music, and circuit design—tools developed by twelve- and fifteen-year-olds. Kay and his collaborators did have trouble extending the project beyond a talented few from Palo Alto schools, but it was remarkable that Smalltalk was used so effectively by a group of young users.8 The language wasn’t only useful for children. Eventually, Smalltalk had a successful commercial life on Wall Street, where programmers found it good for rapidly developing interfaces for traders. Smalltalk is still commercially available, and Kay developed a new implementation of the language, Squeak, beginning in 1996. Squeak, in turn, was developed to include the Etoys framework and made part of the One Laptop Per Child project, which is discussed later in this chapter.
Alan Kay begins a 1993 article on the history of Smalltalk with this text, which refers to his commercial notebook computer (not the Xerox Alto) as a more recent “Interim Dynabook”:
I’m writing this introduction in an airplane at 35,000 feet. On my lap is a five pound notebook computer—1992’s “Interim Dynabook”—by the end of the year it sold for under $700. It has a flat, crisp, high-resolution bitmap screen, overlapping windows, icons, a pointing device, considerable storage and computing capacity, and its best software is object-oriented. It has advanced networking built-in and there are already options for wireless networking. Smalltalk runs on this system, and is one of the main systems I use for my current work with children. In some ways this is more than a Dynabook (quantitatively), and some ways not quite there yet (qualitatively). All in all, pretty much what was in mind during the late sixties.9
Head back to your seat from the rear of an airplane today and you are likely to see modern-day Dynabook users undertaking a few different tasks: Watching movies, modifying spreadsheets, building PowerPoint decks, and perhaps (when in-flight Wi-Fi is installed and working) answering emails and browsing the Web. It’s less likely that someone will be exploring a complex simulation, unless it’s in the form of a video game. Those watching movies—and there are likely to be many doing so, on seat-back computers or on their notebook computers—are engaged in a very standard type of media consumption, one that isn’t really part of the Dynabook vision. Those people working with spreadsheets may be doing important computational exploration, but they are using a method of calculating values that is now rather antiquated. The people preparing presentations are getting ready to use their computers to aid communication and collaboration, but almost always rely on the “slide” format rather than some more flexible and powerful means of multimedia support.
The interim Dynabooks we have today have developed in fits and starts, offering new ways of working that then seem to more or less solidify for decades. Our notebook computers have become essential business tools in almost every industry. It’s less certain that they are suitable for children of all ages, people who want to use them to explore computation and to explore how computation can be used to model and question the world. But there has been other development work focused on young computer users, work that strives to build the future imagined in the original Dynabook concept.
If we’re looking to find the most Dynabook-like computer that has been mass produced, there would be a few reasonable candidates. Looking beyond the standard range of notebook computers, the iPad presents itself as one possibility. In terms of software and hardware—and the co-development of these by Apple—the system does exhibit many features of the Dynabook, and has a similar form factor to the original Dynabook model. Several qualities of iOS and the iPad seem quite evidently opposed to the ideals of the Dynabook, though. The iPad is optimized for media consumption and the purchase of apps, which users cannot easily make or freely share. Specifically, while a user is allowed to develop an app, such a program cannot directly be given away. If Apple approves it, via a process designed for professionals, the user may sell the program through Apple’s marketplace; the price can be set to $0. Essentially, this system—however slick and well-designed—has been constructed first and foremost for corporate enrichment, not personal enrichment, play, exploration, and sharing, and certainly not activity (beyond media consumption) being undertaken by children.
However amazing a product the iPad may be, the openness to imagination and creativity (an aspect that allows people to better participate in future-making of their own) is lacking in Apple’s Dynabook-like tablet offerings. One could ask if there are any systems designed with different principles. There are, and for several reasons. Computers and software made by companies, which of course seek to earn profits, do not always emphasize media consumption or place restrictions on sharing programs. Whether you use OS X, Windows, or GNU/Linux, you are free to write programs and give them away to people online. The recipients may get a warning about how it’s dangerous to download and run software, but programs can still be freely shared today.
Another reason: Some computer systems are developed in other ways. One of them was the XO, developed by Nicholas Negroponte’s educational nonprofit One Laptop Per Child (OLPC) beginning in 2005. (OLPC did other hardware development work, but I will focus on their first computer.) The project was motivated by Seymour Papert’s constructionist ideas and his use of computers for learning. Not only was the Dynabook an important inspiration for the development of the XO; Kay—who was influenced by Papert early in his work—also collaborated on the development of this computer. The XO included a version of Smalltalk, Squeak Etoys.
The idea presented in the title of the One Laptop Per Child project is that young people, even in the poorest areas of the world, should have access to their own computer to facilitate their learning and exploration. It’s not enough to put one computer in a classroom with dozens of students: The best system for learning will always be available for a child’s use—and will provide opportunities for collaboration, via the network. Because these computers were made for use throughout the world, they were developed to be rugged, to require little power, and to have multilingual keyboards and interfaces. The idea was to have governments purchase and distribute them, and coordinate their support along with teacher training. While the centerpiece of OLPC was certainly the XO computer that it developed, the organization was always imagined as an educational effort, rather than one based on technology.
The concept that became the XO was unveiled as “the $100 Laptop,” although the price of the original computer ended up being almost twice that. This and other aspects of the project provided some grist for those who wanted to find fault with OLPC. Some countries signed on, only to retract their interest later. The project was initially dismissed by Bill Gates and Intel Chairman Craig Barrett. When Microsoft and, briefly, Intel later allied with OLPC to offer a version of Windows on the original hardware, some saw the project as abandoning its free-software ideals. Until then, the laptop’s only environment had been Sugar, a free and open system running on GNU/Linux and developed for OLPC. As might have been expected, deployments of laptops and interactions with governments did not always go smoothly. The in-house technology development work of OLPC was focused on the first laptop; the project continues to support these laptops and manage additional sales through an office in Miami.10
While the project was not without problems, it would be hard to claim that OLPC wasn’t successful in some ways. The organization shipped 2.5 million XO computers for children around the world, with Uruguay becoming the first country to provide a laptop for every child. In the years after the initial development of the XO, OLPC focused on the poorest and least literate countries, showing the broad applicability of its approach to personal computing in education. And, even before Intel became involved in the OLPC project, and despite Intel’s initial disparagement of OLPC, the company responded by offering what is generally seen as a direct competitor to the XO: the Intel Classmate, a low-cost netbook specifically targeted at the developing world. While OLPC hasn’t reached every child, it had the direct and indirect effect of bringing computing to millions of children.
The difficulty in declaring whether the Dynabook project was realized and to what extent it was completed is, in terms of future-making, better thought of as a feature than a bug. If the Dynabook could be reduced to a single, very specific innovation, such as the overlapping windows of a graphical interface, it would be easier to declare it fully realized, but much harder to see the overall cultural importance. If it were just a high-level idea with no specific plan for implementation, there would be no specifics to consider. Such an idea would be much easier to dismiss; even if the vision appealed to us, it would be hard to figure out where to get started developing it. Here are a few aspects of the Dynabook that contributed to its effective future-making:
Alan Kay has recently noted that a few prominent systems, such as the Microsoft Surface, now come with both a stylus and a keyboard, as his original Dynabook concept did, and that it took a very long time for the industry to see the importance of both interfaces. There is something to be said for flexibility and openness to different styles of use, but this comment about the stylus seems to me—pardon the pun—to put too fine a point on it. For instance, the configuration of portable computers that has become pervasive is the clamshell arrangement in which the screen folds down over the keyboard like a book closing. My own computer—my 2017 interim Dynabook—is of this sort, and I have used one like it for many years.
Whether this more book-like chassis is inherently better than the more tablet-like arrangement of the Dynabook, which is also well represented in computing today, is worth considering, but does not seem to be a really central issue for the Dynabook vision and for tracing its influence. My ability to use the system reflectively, for instance, to take notes, arrange reading that interests me, and communicate with myself as well as others, is a powerful connection to the Dynabook vision and means more than the presence or absence of a hinge or a stylus. And, as far as future-making is concerned, the general features of the Dynabook concept and project are what seem particularly compelling. We don’t have to appeal to the Microsoft Surface to understand the power of this way of thinking. Kay has shown, regardless of interface specifics, how six important ways of thinking can allow one to engage in future-making when developing a vision that involves new technologies.