Doug Engelbart (1925–2013) read Bush’s “As We May Think” (chapter 11) while stationed with the Navy in the South Pacific at the end of World War II. It must have seemed like science fiction, but he spent his career trying to put flesh on the bones of Bush’s vision.
Engelbart’s “augmented human intellect” project was carried out mostly at SRI in Menlo Park, California (originally the Stanford Research Institute). The project encompassed collaboration between humans and interactivity between humans and computers, in order to amplify human intellectual capacity. This selection is a part of a long, early outline of the project. Engelbart’s work was never considered mainstream, even at the highly innovative SRI, but it fell under the umbrella of research funding flowing from ARPA, some of it under the direction of J. C. R. Licklider. Like many daring technological investments, parts of the project never took hold (for example, a one-handed keyboard on which five fingers, each depressing a separate wand, could be placed in 31 different positions, enough to input the letters of the Roman alphabet). Others had enormous influence (such as the mouse). And others, such as the idea of distributed, collaborative workflows, greatly influenced—or were rediscovered as aspects of—later developments.
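The figure of 31 positions follows from simple combinatorics: five keys, each either up or down, give 2^5 = 32 combinations, of which 31 involve at least one depressed key, just enough for a 26-letter alphabet with a few chords left over. A minimal sketch in Python (the letter assignment is purely illustrative; it is not Engelbart's actual chord encoding):

```python
# Enumerate the chords available on a five-key chord keyset.
# Each key is either up (0) or down (1); the all-up chord (00000)
# is the rest state, leaving 2**5 - 1 = 31 usable chords.
# The A..Z assignment below is illustrative, not Engelbart's scheme.
import string

chords = list(range(1, 2**5))  # 1..31, skipping the empty chord
assert len(chords) == 31

# Map the first 26 chords to letters; 5 chords remain for other symbols.
letter_for_chord = dict(zip(chords, string.ascii_uppercase))

def decode(chord: int) -> str:
    """Return the letter assigned to a chord (or '?' if unassigned)."""
    return letter_for_chord.get(chord, "?")

print(f"{len(chords)} chords available; {decode(1)!r} is pattern {1:05b}")
# prints: 31 chords available; 'A' is pattern 00001
```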
Engelbart both abstracted and concretized Licklider’s vision in a famous 1968 demonstration that came to be known as “the mother of all demos.” It dazzled the audience at the Fall Joint Computer Conference in San Francisco with many interactive technologies that have now become commonplace: the mouse, networking, videoconferencing, hyperlinks, collaborative text editing, and windowing, among others.
A decade later SRI sold Engelbart’s laboratory to a for-profit business, where it did not flourish, in part because of Engelbart’s determined idiosyncrasies, and in part because the booming personal computer and networking industries were independently changing the way computers were being used. For all his technical wizardry, Engelbart was more than anything spiritually committed to human improvement. He was a creature of the 1960s, and with the advent of the 1970s, commercialization was coming. Engelbart received the Turing Award in 1997 “for an inspiring vision of the future of interactive computing and the invention of key technologies to help realize this vision.”
By “augmenting human intellect” we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insoluble. And by “complex situations” we include the professional problems of diplomats, executives, social scientists, life scientists, physical scientists, attorneys, designers—whether the problem situation exists for twenty minutes or twenty years. We do not speak of isolated clever tricks that help in particular situations. We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human “feel for a situation” usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids.
Man’s population and gross product are increasing at a considerable rate, but the complexity of his problems grows still faster, and the urgency with which solutions must be found becomes steadily greater in response to the increased rate of activity and the increasingly global nature of that activity. Augmenting man’s intellect, in the sense defined above, would warrant full pursuit by an enlightened society if there could be shown a reasonable approach and some plausible benefits.
This report covers the first phase of a program aimed at developing means to augment the human intellect. These “means” can include many things—all of which appear to be but extensions of means developed and used in the past to help man apply his native sensory, mental, and motor capabilities—and we consider the whole system of a human and his augmentation means as a proper field of search for practical possibilities. It is a very important system to our society, and like most systems its performance can best be improved by considering the whole as a set of interacting components rather than by considering the components in isolation.
This kind of system approach to human intellectual effectiveness does not find a ready-made conceptual framework such as exists for established disciplines. Before a research program can be designed to pursue such an approach intelligently, so that practical benefits might be derived within a reasonable time while also producing results of long-range significance, a conceptual framework must be searched out—a framework that provides orientation as to the important factors of the system, the relationships among these factors, the types of change among the system factors that offer likely improvements in performance, and the sort of research goals and methodology that seem promising.
In the first (search) phase of our program we have developed a conceptual framework that seems satisfactory for the current needs of designing a research phase. §22.2 contains the essence of this framework as derived from several different ways of looking at the system made up of a human and his intellect-augmentation means.
The process of developing this conceptual framework brought out a number of significant realizations: that the intellectual effectiveness exercised today by a given human has little likelihood of being intelligence limited—that there are dozens of disciplines in engineering, mathematics, and the social, life, and physical sciences that can contribute improvements to the system of intellect-augmentation means; that any one such improvement can be expected to trigger a chain of coordinating improvements; that until every one of these disciplines comes to a standstill and we have exhausted all the improvement possibilities we could glean from it, we can expect to continue to develop improvements in this human-intellect system; that there is no particular reason not to expect gains in personal intellectual effectiveness from a concerted system-oriented approach that compare to those made in personal geographic mobility since horseback and sailboat days. …
Let us consider an “augmented” architect at work. He sits at a working station that has a visual display screen some three feet on a side; this is his working surface, and is controlled by a computer (his “clerk”) with which he can communicate by means of a small keyboard and various other devices.
He is designing a building. He has already dreamed up several basic layouts and structural forms, and is trying them out on the screen. The surveying data for the layout he is working on now have already been entered, and he has just coaxed the clerk to show him a perspective view of the steep hillside building site with the roadway above, symbolic representations of the various trees that are to remain on the lot, and the service tie points for the different utilities. The view occupies the left two-thirds of the screen. With a “pointer,” he indicates two points of interest, moves his left hand rapidly over the keyboard, and the distance and elevation between the points indicated appear on the right-hand third of the screen.
Now he enters a reference line with his pointer and the keyboard. Gradually the screen begins to show the work he is doing—a neat excavation appears in the hillside, revises itself slightly, and revises itself again. After a moment, the architect changes the scene on the screen to an overhead plan view of the site, still showing the excavation. A few minutes of study, and he enters on the keyboard a list of items, checking each one as it appears on the screen, to be studied later.
Ignoring the representation on the display, the architect next begins to enter a series of specifications and data—a six-inch slab floor, twelve-inch concrete walls eight feet high within the excavation, and so on. When he has finished, the revised scene appears on the screen. A structure is taking shape. He examines it, adjusts it, pauses long enough to ask for handbook or catalog information from the clerk at various points, and readjusts accordingly. He often recalls from the “clerk” his working lists of specifications and considerations to refer to them, modify them, or add to them. These lists grow into an evermore-detailed, interlinked structure, which represents the maturing thought behind the actual design.
Prescribing different planes here and there, curved surfaces occasionally, and moving the whole structure about five feet, he finally has the rough external form of the building balanced nicely with the setting and he is assured that this form is basically compatible with the materials to be used as well as with the function of the building.
Now he begins to enter detailed information about the interior. Here the capability of the clerk to show him any view he wants to examine (a slice of the interior, or how the structure would look from the roadway above) is important. He enters particular fixture designs, and examines them in a particular room. He checks to make sure that sun glare from the windows will not blind a driver on the roadway, and the “clerk” computes the information that one window will reflect strongly onto the roadway between 6 and 6:30 on midsummer mornings.
Next he begins a functional analysis. He has a list of the people who will occupy this building, and the daily sequences of their activities. The “clerk” allows him to follow each in turn, examining how doors swing, where special lighting might be needed. Finally he has the “clerk” combine all of these sequences of activity to indicate spots where traffic is heavy in the building, or where congestion might occur, and to determine what the severest drain on the utilities is likely to be.
All of this information (the building design and its associated “thought structure”) can be stored on a tape to represent the design manual for the building. Loading this tape into his own clerk, another architect, a builder, or the client can maneuver within this design manual to pursue whatever details or insights are of interest to him—and can append special notes that are integrated into the design manual for his own or someone else’s later benefit.
In such a future working relationship between human problem-solver and computer “clerk,” the capability of the computer for executing mathematical processes would be used whenever it was needed. However, the computer has many other capabilities for manipulating and displaying information that can be of significant benefit to the human in nonmathematical processes of planning, organizing, studying, etc. Every person who does his thinking with symbolized concepts (whether in the form of the English language, pictographs, formal logic, or mathematics) should be able to benefit significantly.
Every process of thought or action is made up of sub-processes. Let us consider such examples as making a pencil stroke, writing a letter of the alphabet, or making a plan. Quite a few discrete muscle movements are organized into the making of a pencil stroke; similarly, making particular pencil strokes and making a plan for a letter are complex processes in themselves that become sub-processes to the over-all writing of an alphabetic character.
Although every sub-process is a process in its own right, in that it consists of further sub-processes, there seems to be no point here in looking for the ultimate bottom of the process-hierarchical structure. There seems to be no way of telling whether or not the apparent bottoms (processes that cannot be further subdivided) exist in the physical world or in the limitations of human understanding.
In any case, it is not necessary to begin from the bottom in discussing particular process hierarchies. No person uses a process that is completely unique every time he tackles something new. Instead, he begins from a group of basic sensory-mental-motor process capabilities, and adds to these certain of the process capabilities of his artifacts. There are only a finite number of such basic human and artifact capabilities from which to draw. Furthermore, even quite different higher order processes may have in common relatively high-order sub-processes.
When a man writes prose text (a reasonably high-order process), he makes use of many processes as sub-processes that are common to other high-order processes. For example, he makes use of planning, composing, dictating. The process of writing is utilized as a sub-process within many different processes of a still higher order, such as organizing a committee, changing a policy, and so on.
What happens, then, is that each individual develops a certain repertoire of process capabilities from which he selects and adapts those that will compose the processes that he executes. This repertoire is like a tool kit, and just as the mechanic must know what his tools can do and how to use them, so the intellectual worker must know the capabilities of his tools and have good methods, strategies, and rules of thumb for making use of them. All of the process capabilities in the individual’s repertoire rest ultimately upon basic capabilities within him or his artifacts, and the entire repertoire represents an inter-knit, hierarchical structure (which we often call the repertoire hierarchy).
We find three general categories of process capabilities within a typical individual’s repertoire. There are those that are executed completely within the human integument, which we call explicit-human process capabilities; there are those possessed by artifacts for executing processes without human intervention, which we call explicit-artifact process capabilities; and there are what we call the composite process capabilities, which are derived from hierarchies containing both of the other kinds.
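These three categories can be restated as a small data structure: a tree whose leaves are explicit-human or explicit-artifact capabilities and whose internal nodes are composite ones. A toy sketch in Python (all capability names here are invented for illustration; the text defines only the three categories):

```python
# A toy model of a repertoire hierarchy: composite process capabilities
# decompose into explicit-human and explicit-artifact leaves.
# The capability names are illustrative, not drawn from the text.
from dataclasses import dataclass, field

@dataclass
class Capability:
    name: str
    kind: str                      # "human", "artifact", or "composite"
    parts: list = field(default_factory=list)

    def leaves(self):
        """Yield the explicit-human/artifact capabilities this rests on."""
        if self.kind != "composite":
            yield self
        else:
            for part in self.parts:
                yield from part.leaves()

write_memo = Capability("write memo", "composite", [
    Capability("compose sentences", "human"),
    Capability("strike keys", "human"),
    Capability("imprint characters", "artifact"),
])

# Every composite capability rests ultimately on human and artifact leaves.
print([c.name for c in write_memo.leaves()])
# prints: ['compose sentences', 'strike keys', 'imprint characters']
```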
We assume that it is our H-LAM/T system (Human using Language, Artifacts, Methodology, in which he is Trained) that has the capability and that performs the process in any instance of use of this repertoire. Let us look within the process structure for the LAM/T ingredients, to get a better “feel” for our models. Consider the process of writing an important memo. There is a particular concept associated with this process—that of putting information into a formal package and distributing it to a set of people for a certain kind of consideration—and the type of information package associated with this concept has been given the special name of memorandum. Already the system language shows the effect of this process—i.e., a concept and its name. …
This view of the system as an interacting whole is strongly bolstered by considering the repertoire hierarchy of process capabilities that is structured from the basic ingredients within the H-LAM/T system. The realization that any potential change in language, artifact, or methodology has importance only relative to its use within a process and that a new process capability appearing anywhere within that hierarchy can make practical a new consideration of latent change possibilities in many other parts of the hierarchy—possibilities in either language, artifacts, or methodology—brings out the strong interrelationship of these three augmentation means.
Increasing the effectiveness of the individual’s use of his basic capabilities is a problem in redesigning the changeable parts of a system. The system is actively engaged in the continuous processes (among others) of developing comprehension within the individual and of solving problems; both processes are subject to human motivation, purpose, and will. To redesign the system’s capability for performing these processes means redesigning all or part of the repertoire hierarchy. To redesign a structure, we must learn as much as we can of what is known about the basic materials and components as they are utilized within the structure; beyond that, we must learn how to view, to measure, to analyze, and to evaluate in terms of the functional whole and its purpose. In this particular case, no existing analytic theory is by itself adequate for the purpose of analyzing and evaluating over-all system performance; pursuit of an improved system thus demands the use of experimental methods.
It need not be just the very sophisticated or formal process capabilities that are added or modified in this redesign. Essentially any of the processes utilized by a representative human today—the processes that he thinks of when he looks ahead to his day’s work—are composite processes of the sort that involve external composing and manipulating of symbols (text, sketches, diagrams, lists, etc.). Many of the external composing and manipulating (modifying, rearranging) processes serve such characteristically “human” activities as playing with forms and relationships to ask what develops, cut-and-try multiple-pass development of an idea, or listing items to reflect on and then rearranging and extending them as thoughts develop.
Existing, or near-future, technology could certainly provide our professional problem-solvers with the artifacts they need to have for duplicating and rearranging text before their eyes, quickly and with a minimum of human effort. Even so apparently minor an advance could yield total changes in an individual’s repertoire hierarchy that would represent a great increase in over-all effectiveness. Normally the necessary equipment would enter the market slowly; changes from the expected would be small, people would change their ways of doing things a little at a time, and only gradually would their accumulated changes create markets for more radical versions of the equipment. Such an evolutionary process has been typical of the way our repertoire hierarchies have grown and formed.
But an active research effort, aimed at exploring and evaluating possible integrated changes throughout the repertoire hierarchy, could greatly accelerate this evolutionary process. The research effort could guide the product development of new artifacts toward taking long-range meaningful steps; simultaneously, competitively minded individuals who would respond to demonstrated methods for achieving greater personal effectiveness would create a market for the more radical equipment innovations. The guided evolutionary process could be expected to be considerably more rapid than the traditional one.
The category of “more radical innovations” includes the digital computer as a tool for the personal use of an individual. Here there is not only promise of great flexibility in the composing and rearranging of text and diagrams before the individual’s eyes but also promise of many other process capabilities that can be integrated into the H-LAM/T system’s repertoire hierarchy.
22.2.3.1 The source of intelligence

When one looks at a computer system that is doing a very complex job, he sees on the surface a machine that can execute some extremely sophisticated processes. If he is a layman, his concept of what provides this sophisticated capability may endow the machine with a mysterious power to sweep information through perceptive and intelligent synthetic thinking devices. Actually, this sophisticated capability results from a very clever organizational hierarchy, so that pursuit of the source of intelligence within this system would take one down through layers of functional and physical organization that become successively more primitive.
To be more specific, we can begin at the top and list the major levels down through which we would pass if we successively decomposed the functional elements of each level in search of the “source of intelligence.” A programmer could take us down through perhaps three levels (depending upon the sophistication of the total process being executed by the computer) perhaps depicting the organization at each level with a flow chart. The first level down would organize functions corresponding to statements in a problem-oriented language (e.g., ALGOL or COBOL), to achieve the desired over-all process. The second level down would organize lesser functions into the processes represented by first-level statements. The third level would perhaps show how the basic machine commands (or rather the processes which they represent) were organized to achieve each of the functions of the second level.
Then a machine designer could take over, and with a block diagram of the computer’s organization he could show us (Level 4) how the different hardware units (e.g., random-access storage, arithmetic registers, adder, arithmetic control) are organized to provide the capability of executing sequences of the commands used in Level 3. The logic designer could then give us a tour of Level 5, also using block diagrams, to show us how such hardware elements as pulse gates, flip-flops, and AND, OR, and NOT circuits can be organized into networks giving the functions utilized at Level 4. For Level 6 a circuit engineer could show us diagrams revealing how components such as transistors, resistors, capacitors, and diodes can be organized into modular networks that provide the functions needed for the elements of Level 5.
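The walk-down described in the two paragraphs above can be summarized as a simple ordered table, restated here as data with nothing added beyond the text:

```python
# The six decomposition levels described above, from the programmer's
# view down to the circuit engineer's, restated as plain data.
levels = [
    (1, "problem-oriented language statements (e.g., ALGOL or COBOL)"),
    (2, "lesser functions composing each first-level statement"),
    (3, "basic machine commands realizing the level-2 functions"),
    (4, "hardware units: storage, arithmetic registers, adder, control"),
    (5, "logic networks: pulse gates, flip-flops, AND/OR/NOT circuits"),
    (6, "component networks: transistors, resistors, capacitors, diodes"),
]

for depth, description in levels:
    print(f"Level {depth}: {description}")
```

Below Level 6, as the next paragraph notes, the decomposition crosses from what is man-organized into what is nature-organized.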
Device engineers and physicists of different kinds could take us down through more layers. But rather soon we have crossed the boundary between what is man-organized and what is nature-organized, and are ultimately discussing the way in which a given physical phenomenon is derived from the intrinsic organization of sub-atomic particles, with our ability to explain succeeding layers blocked by the exhaustion of our present human comprehension.
If we then ask ourselves where that intelligence is embodied, we are forced to concede that it is elusively distributed throughout a hierarchy of functional processes—a hierarchy whose foundation extends down into natural processes below the depth of our comprehension. If there is any one thing upon which this intelligence depends, it would seem to be organization. The biologists and physiologists use a term “synergism” to designate (Webster, 1959) the “… cooperative action of discrete agencies such that the total effect is greater than the sum of the two effects taken independently….” This term seems directly applicable here, where we could say that synergism is our most likely candidate for representing the actual source of intelligence.
Actually, each of the social, life, or physical phenomena we observe about us would seem to derive from a supporting hierarchy of organized functions (or processes), in which the synergistic principle gives increased phenomenological sophistication to each succeedingly higher level of organization. In particular, the intelligence of a human being, derived ultimately from the characteristics of individual nerve cells, undoubtedly results from synergism.
22.2.3.2 Intelligence amplification

It has been jokingly suggested several times during the course of this study that what we are seeking is an “intelligence amplifier.” (The term is attributed originally to W. Ross Ashby [1952, 1956].) At first this term was rejected on the grounds that in our view one’s only hope was to make a better match between existing human intelligence and the problems to be tackled, rather than to make man more intelligent. But deriving the concepts brought out in the preceding section has shown us that this term does indeed seem applicable to our objective.
Accepting the term “intelligence amplification” does not imply any attempt to increase native human intelligence. The term “intelligence amplification” seems applicable to our goal of augmenting the human intellect in that the entity to be produced will exhibit more of what can be called intelligence than an unaided human could; we will have amplified the intelligence of the human by organizing his intellectual capabilities into higher levels of synergistic structuring. What possesses the amplified intelligence is the resulting H-LAM/T system, in which the LAM/T augmentation means represent the amplifier of the human’s intelligence.
In amplifying our intelligence, we are applying the principle of synergistic structuring that was followed by natural evolution in developing the basic human capabilities. What we have done in the development of our augmentation means is to construct a superstructure that is a synthetic extension of the natural structure upon which it is built. In a very real sense, as represented by the steady evolution of our augmentation means, the development of “artificial intelligence” has been going on for centuries.
22.2.3.3 Two-domain systems

The human and the artifacts are the only physical components in the H-LAM/T system. It is upon their capabilities that the ultimate capability of the system will depend. This was implied in the earlier statement that every composite process of the system decomposes ultimately into explicit-human and explicit-artifact processes. There are thus two separate domains of activity within the H-LAM/T system: that represented by the human, in which all explicit-human processes occur; and that represented by the artifacts, in which all explicit-artifact processes occur. In any composite process, there is cooperative interaction between the two domains, requiring interchange of energy (much of it for information exchange purposes only). Figure 22.1 depicts this two-domain concept and embodies other concepts discussed below.
Where a complex machine represents the principal artifact with which a human being cooperates, the term “man–machine interface” has been used for some years to represent the boundary across which energy is exchanged between the two domains. However, the “man–artifact interface” has existed for centuries, ever since humans began using artifacts and executing composite processes.
Exchange across this “interface” occurs when an explicit-human process is coupled to an explicit-artifact process. Quite often these coupled processes are designed for just this exchange purpose, to provide a functional match between other explicit-human and explicit-artifact processes buried within their respective domains that do the more significant things. For instance, the finger and hand motions (explicit-human processes) activate key-linkage motions in the typewriter (coupled to explicit-artifact processes). But these are only part of the matching processes between the deeper human processes that direct a given word to be typed and the deeper artifact processes that actually imprint the ink marks on the paper. …
22.3.2.1 Background

To try to give you (the reader) a specific sort of feel for our thesis in spite of this situation, we shall present the following picture of computer-based augmentation possibilities by describing what might happen if you were being given a personal discussion-demonstration by a friendly fellow (named Joe) who is a trained and experienced user of such an augmentation system within an experimental research program which is several years beyond our present stage. We assume that you approach this demonstration-interview with a background similar to what the previous portion of this report provides—that is, you will have heard or read a set of generalizations and a few rather primitive examples, but you will not yet have been given much of a feel for how a computer-based augmentation system can really help a person.
Joe understands this and explains that he will do his best to give you the valid conceptual feel that you want—trying to tread the narrow line between being too detailed and losing your over-all view and being too general and not providing you with a solid feel for what goes on. He suggests that you sit and watch him for a while as he pursues some typical work, after which he will do some explaining. You are not particularly flattered by this, since you know that he is just going to be exercising new language and methodology developments on his new artifacts—and after all, the artifacts don’t look a bit different from what you expected—so why should he keep you sitting there as if you were a complete stranger to this stuff? It will just be a matter of “having the computer do some of his symbol-manipulating processes for him so that he can use more powerful concepts and concept-manipulation techniques,” as you have so often been told.
Joe has two display screens side by side, but one of them he doesn’t seem to use as much as the other. And the screens are almost horizontal, more like the surface of a drafting table than the near-vertical picture displays you had somehow imagined. But you see the reason easily, for he is working on the display surface as intently as a draftsman works on his drawings, and it would be awkward to reach out to a vertical surface for this kind of work. Some of the time Joe is using both hands on the keys, obviously feeding information into the computer at a great rate.
Another slight surprise, though—you see that each hand operates on a set of keys on its own side of the display frames, so that the hands are almost two feet apart. But it is plain that this arrangement allows him to remain positioned over the frames in a rather natural position, so that when he picks the light pen out of the air (which is its rest position, thanks to a system of jointed supporting arms and a controlled tension and rewind system for the attached cord) his hand is still on the way from the keyset to the display frame. When he is through with the pen at the display frame, he lets go of it, the cord rewinds, and the pen is again in position. There is thus a minimum of effort, movement, and time involved in turning to work on the frame. That is, he could easily shift back and forth from using keyset to using light pen, with either hand (one pen is positioned for each hand), without moving his head, turning, or leaning.
A good deal of Joe’s time, though, seems to be spent with one hand on a keyset and the other using a light pen on the display surface. It is in this type of working mode that the images on the display frames change most dynamically. You receive another real surprise as you realize how much activity there is on the face of these display tubes. You ask yourself why you weren’t prepared for this, and you are forced to admit that the generalizations you had heard hadn’t really sunk in—“new methods for manipulating symbols” had been an oft-repeated term, but it just hadn’t included for you the images of the free and rapid way in which Joe could make changes in the display, and of meaningful and flexible “shaping” of ideas and work status which could take place so rapidly.
Then you realized that you couldn’t make any sense at all out of the specific things he was doing, nor of the major part of what you saw on the displays. You could recognize many words, but there were a good number that were obviously special abbreviations of some sort. During the times when a given image or portion of an image remained unchanged long enough for you to study it a bit, you rarely saw anything that looked like a sentence as you were used to seeing one. You were beginning to gather that there were other symbols mixed with the words that might be part of a sentence, and that the different parts of what made a full-thought statement (your feeling about what a sentence is) were not just laid out end to end as you expected. But Joe suddenly cleared the displays and turned to you with a grin that signalled the end of the passive observation period, and also that somehow told you that he knew very well that you now knew that you had needed such a period to shake out some of your limited images and to really realize that a “capability hierarchy” was a rich and vital thing.
“I guess you noticed that I was using unfamiliar notions, symbols, and processes to go about doing things that were even more unfamiliar to you?” You made a non-committal nod—you saw no reason to admit to him that you hadn’t even been able to tell which of the things he had been doing were to cooperate with which other things—and he continued. “To give you a feel for what goes on, I’m going to start discussing and demonstrating some of the very basic operations and notions I’ve been using. You’ve read the stuff about process and process-capability hierarchies, I’m sure. I know from past experience in explaining radical augmentation systems to people that the new and powerful higher-level capabilities that they are interested in—because basically those are what we are all anxious to improve—can’t really be explained to them without first giving them some understanding of the new and powerful capabilities upon which they are built. This holds true right on down the line to the type of low-level capability that is new and different to them all right, but that they just wouldn’t ordinarily see as being ‘powerful.’ And yet our systems wouldn’t be anywhere near as powerful without them, and a person’s comprehension of the system would be rather shallow if he didn’t have some understanding of these basic capabilities and of the hierarchical structure built up from them to provide the highest-level capabilities.”…
Three principal conclusions may be drawn concerning the significance and implications of the ideas that have been presented.
First, any possibility for improving the effective utilization of the intellectual power of society’s problem solvers warrants the most serious consideration. This is because man’s problem-solving capability represents possibly the most important resource possessed by a society. The other contenders for first importance are all critically dependent for their development and use upon this resource. Any possibility for evolving an art or science that can couple directly and significantly to the continued development of that resource should warrant doubly serious consideration.
Second, the ideas presented are to be considered in both of the above senses: the direct-development sense and the “art of development” sense. To be sure, the possibilities have long-term implications, but their pursuit and initial rewards await us now. By our view, we do not have to wait until we learn how the human mental processes work, we do not have to wait until we learn how to make computers more intelligent or bigger or faster, we can begin developing powerful and economically feasible augmentation systems on the basis of what we now know and have. Pursuit of further basic knowledge and improved machines will continue into the unlimited future, and will want to be integrated into the “art” and its improved augmentation systems—but getting started now will provide not only orientation and stimulation for these pursuits, but will give us improved problem-solving effectiveness with which to carry out the pursuits.
Third, it becomes increasingly clear that there should be action now—the sooner the better—action in a number of research communities and on an aggressive scale. We offer a conceptual framework and a plan for action, and we recommend that these be considered carefully as a basis for action. If they be considered but found unacceptable, then at least serious and continued effort should be made toward developing a more acceptable conceptual framework within which to view the over-all approach, toward developing a more acceptable plan of action, or both.
This is an open plea to researchers and to those who ultimately motivate, finance, or direct them, to turn serious attention toward the possibility of evolving a dynamic discipline that can treat the problem of improving intellectual effectiveness in a total sense. This discipline should aim at producing a continuous cycle of improvements—increased understanding of the problem, improved means for developing new augmentation systems, and improved augmentation systems that can serve the world’s problem solvers in general and this discipline’s workers in particular. After all, we spend great sums for disciplines aimed at understanding and harnessing nuclear power. Why not consider developing a discipline aimed at understanding and harnessing “neural power”? In the long run, the power of the human intellect is really much the more important of the two.
Reprinted from Engelbart (1962), with permission from SRI International.