6
The Cosa Nostra of the Data Processing Industry
We are at once the most unmanageable and the most poorly managed specialism in our society. Actors and artists pale by comparison. Only pure mathematicians are as cantankerous, and it’s a calamity that so many of them get recruited by simplistic personnel men.
—Herbert Grosch, “Programmers: The Industry’s Cosa Nostra,” 1966
Unsettling the Desk Set
The 1957 film Desk Set is best known to movie buffs as a lightweight but enjoyable romantic comedy, the eighth of nine pictures in which Spencer Tracy and Katharine Hepburn acted together, and the first to be filmed in color. Although not generally considered one of the famous pair’s best, the film remains popular and durable. The plot is fairly straightforward: Tracy, as Richard Sumner, is an efficiency expert charged with introducing computer technology into the reference library at the fictional Federal Broadcasting Network. There he encounters Bunny Watson, the Hepburn character, and her spirited troop of female reference librarians. Watson and her fellow librarians, who spend their days researching the answers to such profound questions as “What kind of car does the king of the Watusis drive?” and “How much damage is caused annually to American forests by the spruce budworm?” immediately suspect Sumner of trying to put them all out of a job. After the usual course of conventional romantic comedy fare—mutual mistrust, false assumptions, sublimated sexual tension, and humorous misunderstandings—Watson comes to see Sumner as he truly is: a stand-up guy who was only seeking to make her work as a librarian easier and more enjoyable.
What is less widely remembered about Desk Set is that it was sponsored in part by the IBM Corporation. The film opens with a wide-angle view of an IBM showroom, which then closes to a tight shot of a single machine bearing the IBM logo. The equipment on the set was provided by IBM, and the credits at the end of the film—in which an acknowledgment of IBM’s involvement and assistance features prominently—appear as if printed on an IBM machine. IBM also supplied equipment operators and training.
The IBM Corporation’s involvement with Desk Set was more than an early example of opportunistic product placement. Underneath the trappings of a lighthearted comedy, Desk Set was the first film of its era to deal seriously with the organizational and professional implications of the electronic computer. In the midst of the general enthusiasm that characterized popular coverage of the computer in this period crept hints of unease about the possibility of electronic brains displacing humans in domains previously thought to have been free from the threat of mechanization. In 1949 the computer consultant Edmund Berkeley, in the first popular book devoted to the electronic computer, had dubbed such machines “Giant Brains; or, Machines That Think.” The giant brain metaphor suggested a potential conflict between human and machine—a conflict that was picked up by the popular press. “Can Man Build a Superman?” Time magazine asked in a 1950 cover story on the Harvard Mark III computer.1 More pressingly, asked Collier’s magazine a few years later, “Can a Mechanical Brain Replace You?”2 Probably it could, concluded Fortune magazine, at least if you worked in an office, where “office robots” were poised to “eliminate the human element.”3 IBM’s participation in the production of Desk Set can only be understood in terms of its ongoing efforts, which started in the early 1950s, to reassure the public that despite rumors to the contrary, computers were not poised “to take over the world’s affairs from the human inhabitants.”4
Seen as a maneuver in this larger public relations campaign, Desk Set was an unalloyed triumph for IBM.5 The film is unambiguously positive about the electronic computer. The idea that human beings might ever be replaced by machines is represented as amusingly naive. Sumner’s Electronic-Magnetic Memory and Research Arithmetic Calculator (EMERAC) is clearly no threat to Watson’s commanding personality and efficiency. In fact, “Emmy” turns out to be charmingly simpleminded. When a technician mistakenly asks the computer for information on the island of “Curfew” (as opposed to Corfu), Emmy goes amusingly haywire. Fortunately, she can easily be put right using only a judiciously applied bobby pin. The reassuring message was that computers were useful but dimwitted servants, and unlikely masters. As one reviewer described the situation, “It simply does not seem very ominous when they threaten to put a mechanical brain in a broadcasting company’s reference library, over which the efficient Miss Hepburn has sway. . . . The prospect of automation is plainly no menace to Kate.”6
But if the computer held no dangers for Hepburn, it did for many of the real-life office workers watching the film. Like Watson and her librarians, most would have greeted the arrival of a computer-toting efficiency expert with fear and trepidation. Although Tracy imbued the character of Sumner with his trademark gruff-but-likable persona, such experts were generally seen as the harbingers of reorganization, mechanization, and what the economist Thorstein Veblen described as the “degradation of labor.”7 And as Thomas Haigh has suggested, it was no coincidence that Sumner was both an efficiency expert and a computer designer; many of the “systems men” of the early electronic computer era were efficiency experts turned computer consultants. In any case, the specter of computer-driven unemployment looms large over Desk Set, if only as the source of initial conflict between Sumner and Watson. But even the most casual viewers of Desk Set might have suspected that absent the feisty Hepburn, the librarians at the Federal Broadcasting Network might not have gotten off so easily. Although the film alluded to a second EMERAC that had been installed in the payroll department, no mention was made of the payroll workers having a Watson of their own. Even if the skilled reference librarians and accountants were immune from computerization, though, what about other, less specialized workers? Did anyone really expect the two Emmies to remain confined to the library and payroll departments? It seemed inevitable that at least some Federal Broadcasting Network employees would be reduced to the status of mere machine operators, or perhaps replaced altogether.
Insofar as Desk Set has been interpreted critically, it has been in the context of these larger concerns about the replacement of human beings with computers. The struggle of human versus machine (or more precisely, woman versus machine) depicted in the film is often seen as a metaphor for worker resistance to computerization. Although the possibility that computers might supersede humans was much discussed in the popular press during the 1950s and early 1960s, with the exception of a small number of occupational categories the adoption of computer technology generally did not involve large-scale worker displacement. For the most part, what resistance to corporate computerization efforts did emerge came not from ordinary workers but rather from their managers. It was these managers who frequently saw their work most directly affected by the applications developed by computer programmers and systems analysts. Over the course of the 1950s corporations had discovered that the electronic computer was more than just an improved version of the mechanical calculator or Hollerith machine. What was originally envisioned as a “chromium-plated tabulator,” as Haigh has portrayed it, was increasingly seen as a tool for managerial control and communication.8 As the electronic computer was gradually reinterpreted in larger organizational terms, first as an “electronic data processing” device and then again as a “management information system,” it was increasingly seen as a source of institutional and professional power.
Computers Can’t Solve Everything
The 1960s were something of a golden age for the computer industry. The industry grew at an average annual rate of 27 percent during this period.9 At the beginning of the decade there were roughly fifty-four hundred computers installed in the United States; by 1970 this number had grown to more than seventy-four thousand.10 In 1969 alone U.S. firms purchased $7 billion worth of electronic computers and related equipment. An additional $14 billion was spent on computer personnel and materials. The corporate world’s total investment in computing that year represented 10 percent of the nation’s total annual expenditure on capital equipment.11 These corporate investors were also getting ever more for their money. In the first half of the decade, innovations in transistor and integrated circuit technology had increased the memory size and processor speed of computers by a factor of ten, providing an effective performance improvement of almost a hundredfold. By the end of the decade, the inexorable march toward smaller, faster, and cheaper computing predicted by Gordon Moore in 1965 was clearly in evidence.12
It was during this period that the IBM Corporation rose to worldwide dominance, establishing in the process a series of institutional structures and technological standards that shaped developments in the industry for the next several decades. Under IBM’s substantial umbrella a broad and diverse set of subsidiary industries flourished, including not just manufacturers of complementary (or even competing) hardware products but also programming services companies, time-sharing “computer utilities,” and independent data processing service providers. When we consider such subsidiary industries, our estimate of the total size of the computer industry almost doubles.13
And yet by the late 1960s there were signs of trouble in paradise. Foreshadowing the “productivity paradox” debate of later decades, hints began to appear in the literature that a growing number of corporations were questioning the value of their investment in computing. As a 1969 article in Fortune magazine entitled “Computers Can’t Solve Everything” described the situation, “After buying or leasing some 60,000 computers during the past fifteen years, businessmen are less and less able to state with assurance that it’s all worth it.” The article recited a litany of overambitious and ultimately unsuccessful attempts to computerize planning and management processes at such firms as Pillsbury, Westinghouse, and the International Minerals and Chemical Corporation. The success that many companies experienced in computerizing their clerical operations in the 1950s, argued industry reporter Thomas Alexander, had generated unrealistic expectations about their ability to apply computing power to more sophisticated applications, such as controlling manufacturing operations, optimizing inventory and transportation flows, and improving the quality of managerial decision making. But perhaps only one business in ten was “showing expertise in the management of the computer” in these higher-order activities. The rest were slowly and uncomfortably “waking up to the fact that they were oversold” on computer technology—not just by self-interested manufacturers and computer consultants, but by their own data processing personnel.14
Fortune was not alone in its assessment of the apparent unprofitability of many corporate computerization efforts. Beginning in the mid-1960s, the noted Harvard Business School professor John Dearden published a series of articles in the Harvard Business Review dismissing as “myths” and “mirages” the alleged benefits of computerized corporate information systems.15 Prominent industry analyst John Diebold complained, also in the pages of the Harvard Business Review, about the “naive standards” that many businesses used to evaluate the costs and benefits of computer technology. “Nowhere is this lack of [business] sophistication more apparent than in the way in which computers are applied in American industry today.”16 Management consultant David Hertz argued that computers were “oversold and underemployed.”17 A survey in 1968 by the Research Institute for America had determined that only half of all corporate computer users were convinced that their investment in computing had paid off. The inability of computerization projects to justify their own existence signaled “the fizzle in the ‘computer revolution,’” suggested the accounting firm Touche Ross and Company.18
Perhaps the most devastating critique of corporate computing came from the venerable consulting firm McKinsey and Company. In 1968 McKinsey released a report titled “Unlocking the Computer’s Profit Potential,” in which it claimed that “computer efforts, in all but a few exceptional companies, are in real, if often unacknowledged, trouble.” Despite years of investment in “sophisticated hardware,” “larger and increasingly costly computer staffs,” and “complex and ingenious applications,” most of these companies were nowhere near realizing their anticipated returns on the investment in electronic computing. Instead, they were increasingly characterized by rising costs, lost opportunities, and diminishing returns. Although the computer had transformed the administrative and accounting operations of many U.S. businesses, “the computer has had little impact on most companies’ key operating and management problems.”19
The McKinsey report was widely cited within the business and technical literature. The editors of Datamation endorsed it almost immediately, declaring that it “lays waste to the cherished dream that computers create profits.”20 Computers and Automation reprinted it in its entirety several months later. References to the report appear in a diverse range of journals for at least two decades after its initial publication.21
The dissatisfaction with corporate computerization efforts expressed in the McKinsey report and elsewhere must be interpreted within the context of a larger critique of software that was percolating in this period. As mentioned earlier, the “gap in programming support” that emerged in the 1950s had worsened to “software turmoil” in the early 1960s, and by the end of the decade was being referred to as a full-blown “software crisis.”22 And in 1968, the first NATO Conference on Software Engineering firmly established the language of the software crisis in the vernacular of the computer community. Large software development projects had acquired a reputation for being behind schedule, over budget, and bug ridden. Software had become “a scare item for management . . . an unprofitable morass, costly and unending.”23
It is important to note that the use of the word software in this period was somewhat inconsistent. As Thomas Haigh has suggested, the meaning of the word software was changing rapidly during the 1960s, and could refer alternatively to something specific—the systems software and utilities that today we would describe as an operating system—or more generally to the applications, personnel, and processes associated with computing. He argues that the software crisis as it was understood by the NATO conference organizers referred only to the former definition.24 Substantial evidence shows, however, that as early as 1962 the term “software” was being used to refer to a much broader range of computer-based applications.25 But even if one were to insist on a narrow, systems-oriented definition of the word “software,” the predicament described by the McKinsey report might simply be recharacterized as an “applications crisis.”26 From the perspective of a more modern understanding of software as the heterogeneous collection of tools, applications, personnel, and procedures that together comprise the system of computing in action, the distinction is immaterial.
Whether we call it a software crisis or an applications crisis, the concerns of corporate managers were clearly about the “softer” elements of computer-based systems. The crucial distinction between the applications crisis discussed in the business literature and the software crisis described in the more technical literature lies not in the identification of symptoms but rather in the diagnosis of the underlying disease. Both communities were concerned with the apparent inability of existing software development methods to produce cost-effective and reliable commercial applications. But where the technical experts identified the root causes of the crisis in terms of production—in other words, as a function of the difficulties inherent in building software right—many corporate managers believed that the real challenge was in determining the right software to build. Faced with exponentially rising software costs, and threatened by the unprecedented degree of autonomy that top-level executives seemed to grant to computer people, many corporate managers began to reevaluate their largely hands-off policies toward programmer management. Whereas in the previous decade computer programming had been widely considered to be a uniquely creative activity—and therefore almost impossible to manage using conventional methods—by the end of the 1960s new perspectives on these problems began to appear in the industry literature. The real reason that most data processing installations were unprofitable, according to the McKinsey report, was that “many otherwise effective top managements . . . have abdicated control to staff specialists.” These specialists might be “good technicians,” but they had “neither the operating experience to know the jobs that need doing nor the authority to get them done right.”27 Or as another contemporary report summarized the situation, “many managers sat back and let the computer boys monkey around with systems that were doomed to failure or mediocrity.”28
The dramatic shift in tone of the management literature during this time is striking. Prior to the late 1960s the conventional wisdom was that computer programming was a uniquely creative activity—genuine “‘brain business,’ often an agonizingly difficult intellectual effort”—and therefore almost impossible to manage using conventional methods.29 But by the end of the decade, the same journals that had previously considered programming unmanageable were filled with exhortations toward better software development management: “Controlling Computer Programming”; New Power for Management; “Managing the Programming Effort”; and The Management of Computer Programming Efforts.30 The same qualities that had previously been seen as essential indicators of programming ability, such as creativity and a mild degree of personal eccentricity, now began to be perceived as merely unprofessional. As part of their rhetorical construction of the applications crisis as a crisis of programmer management, corporate managers accused programmers of lacking professional standards and loyalties: “too frequently these people [programmers], while exhibiting excellent technical skills, are non-professional in every other aspect of their work.”31 A widely quoted psychological study that identified as a “striking characteristic of programmers . . . their disinterest in people” reinforced the managers’ contention that programmers were insufficiently concerned with the larger interests of the company.32 Computer specialists were increasingly cast as self-interested peddlers of whizbang technologies. “In all too many cases the data processing technician does not really understand the problems of management and is merely looking for the application of his specialty,” wrote William Walker in a letter to the editor of the management-oriented journal Business Automation.33 Calling programmers the “Cosa Nostra” of the industry, the colorful former programmer turned technology-management consultant Herbert Grosch declared that computer specialists “are at once the most unmanageable and the most poorly managed specialism in our society. Actors and artists pale by comparison. Only pure mathematicians are as cantankerous, and it’s a calamity that so many of them get recruited by simplistic personnel men.” He warned managers to “refuse to embark on grandiose or unworthy schemes, and refuse to let their recalcitrant charges waste skill, time and money on the fashionable idiocies of our [computer] racket.”34
The most obvious explanation for the sudden reversal in management attitudes toward computer people is that just as corporate investment in computing assets escalated rapidly in this period, so did the economic interest in managing those assets effectively. And since the costs of computer software, broadly defined to include people, planning, and processes, were growing rapidly in relation to hardware—for every dollar spent on computer hardware, claimed the McKinsey report, two dollars were spent on staff and operations—it should be no surprise that personnel issues were the focus of particular attention. Computer programmers alone required at least 35 percent of the total operational budget. The size of the average computer department had doubled in the years between 1962 and 1968, and was expected to double again by 1975. A report in 1966 by the American Federation of Information Processing Societies (AFIPS) estimated that in 1960 there were already 60,000 systems analysts and as many as 120,000 computer programmers working in the industry. AFIPS expected this number to more than double by the end of the decade.35
There is no question that the rising costs of software development caused tension between computer personnel and their corporate managers. The continuing gap between the demand for and the supply of qualified computer personnel had in recent years pushed up their salary levels far faster (and in many cases higher) than those of other professionals and managers. In 1965 the ADP (Automatic Data Processing, Inc.) newsletter predicted average salary increases in data processing in the range of 40 to 50 percent over the next five years.36 Programming professionals had a “personal monopoly” that “manifests itself in the market place,” which provided them with considerable opportunities for horizontal mobility, either in pursuit of higher salaries or more challenging positions.37 Simply maintaining existing programming staff levels proved a real trial for personnel managers.38 One large employer experienced a sustained turnover rate of 10 percent per month.39 For entry-level programmers, whose marketability increased rapidly, the turnover rate was as high as 100 percent, one personnel manager estimated, which further exacerbated the problem of training and recruitment.40 Who was willing to train programmers only to see them leverage that investment into a higher salary elsewhere? The problem of “body snatching” of computer personnel by search firms and other personnel consultants became so bad that AFIPS banned recruiters from the annual Joint Computer Conferences.41 This simply shifted the action to nearby bars and hotel rooms, where headhunters would slip blanket job offers under every door.
But although the rising cost of software and software personnel was certainly a factor in the perceived applications crisis of the late 1960s, this was more than simply a recapitulation of the personnel problems of the previous decade. Then it had been largely accepted that the work that the computer specialists did was valuable enough to deserve special consideration. It might be a problem for the industry that good computer programmers and systems analysts were hard to find and develop, but this was because software development was inherently difficult. The solutions proposed to this problem generally involved elevating the computer personnel: developing better tools for screening potential programmer trainees, establishing programs for computer science education and fundamental research, and encouraging programmers to professionalize. Even the development of new automatic programming systems such as FORTRAN and COBOL, although originally intended to eliminate the need for skilled programmers altogether, had the unintended effect of elevating their status. For those interested in advancing the academic status of computer science, the design of programming languages provided an ideal forum for exploring the theoretical aspects of their discipline. More practical-minded programmers saw programming languages as a means of eliminating the more onerous and error-prone aspects of software development. By eliminating much of the tedium associated with low-level machine coding, they allowed programmers to focus less on technical minutiae and more on high-status activities such as design and analysis. In any case, the organizational conflicts that defined the applications crisis of the late 1960s were rarely mentioned in the first decade or so of commercial computing. As late as 1963 a survey of programmers found that the majority (59 percent) reported that the general attitude toward them and their work was positive.42
What is novel and significant about the applications crisis of the late 1960s is that it marked a fundamental change in attitude toward computer personnel. This change was reflected both in the increasingly dismissive language used by corporate managers to refer to their computer personnel—not only did the formerly affectionate “computer boys” acquire a new, patronizing edge but even less flattering titles appeared, such as “the new theocracy,” “prima donnas,” and “industrial carpetbaggers”—and in the solutions that were proposed to the now seemingly perpetual crisis in software development.43 It was in this period that the rhetoric of crisis became firmly established in the industry literature. But more important, it was during this time that the emerging crisis became defined as fundamentally managerial in nature. Many of the technological, managerial, and economic woes of the software industry became wrapped up in the problem of programmer management. Indeed, as will be described in a subsequent chapter, many of the most significant innovations in software engineering to be developed in the immediate NATO conference era were as much managerial innovations as they were technological or professional ones.
By reconstructing the emerging software crisis as a problem of management technique rather than technological innovation, advocates of these new management-oriented approaches also relocated the focus of its solution, removing it from the domain of the computer specialist and placing it firmly in the hands of traditional managers. Programmers and systems analysts, it was argued, “may be superbly equipped, technically speaking, to respond to management’s expectations,” but they are “seldom strategically placed (or managerially trained)—to fully assess the economics of operations or to judge operational feasibility.”44 By representing programmers as shortsighted, self-serving technicians, managers reinforced the notion that they were ill equipped to handle big-picture, mission-critical responsibilities. After all, according to the McKinsey report, “only managers can manage the computer in the best interests of the business.”45 And not just any managers would do: only those managers who had traditional business training and experience were acceptable, since “managers promoted from the programming and analysis ranks are singularly ill-adapted for management.”46 It would be this struggle for organizational authority and managerial control that would come to dominate later discussions about the nature and causes of the software crisis.
Seat-of-the-Pants Management
Computer specialists had always posed something of a conundrum for managers. The expectation that they would quietly occupy the same position in the organizational hierarchy as the earlier generation of data processing personnel was quickly proven unrealistic. Unlike a tabulating machine, the electronic computer was a large, expensive technology that required a high level of technical competence to operate effectively. The decision to purchase a computer had to be made at the highest levels of the organization. But although the high-tech character of electronic computing appealed to upper management, few executives had any idea how to integrate this novel technology effectively into their existing social, political, and technological networks. Many of them granted their computer specialists an unprecedented degree of independence and authority.
Even the lowest ranking of these specialists possessed an unusual degree of autonomy. To be sure, the occupations of machine technician and keypunch operator remained relatively unskilled and, to a certain degree, feminized. Yet the largest and fastest-growing segment of this population, the computer programmers, were increasingly being recognized as valuable—perhaps even irreplaceable—corporate employees. This was certainly true of the first generation of programmers, whose idiosyncratic techniques for coaxing maximum performance out of primitive equipment were absolutely indispensable. The fact that the technology of computing was changing so rapidly in this period further complicated the ability of even data processing managers—who generally lacked practical programming experience—to understand and supervise the activities of programmers. The “best practice” guidelines that applied to one particular generation of equipment were quickly superseded by a different set of techniques and methodologies.47 Even as the technology of computing stabilized over the course of the early 1950s, though, programmers maintained their position of central importance. Perhaps even more crucial, programming acquired a reputation for being a uniquely creative endeavor, one relatively immune from traditional managerial controls. The alleged discovery of great disparities between programmers reinforced the conventional wisdom that good programmers were born, not made. One widely cited IBM study determined that code produced by a truly excellent programmer was twenty-six times more efficient than that produced by their merely average colleagues.48 Despite the serious methodological flaws that compromised this particular study (including a sample population of only twelve individuals), the twenty-six to one performance ratio quickly became part of the standard lore of the industry. The implication was that talented programmers were effectively irreplaceable. “The vast range of programmer performance indicated earlier may mean that it is difficult to obtain better size-performance software using machine code written by an army of programmers of lesser than average caliber,” argued Dr. Edward E. David of Bell Telephone Laboratories.49 All of this suggested that “the major managerial task” was finding—and keeping—“the right people”: “with the right people, all problems vanish.”50
The idea that computer programmers possessed an innate and inarticulable skill was soon embodied in the hiring practices of the industry, which selected programmers on the basis of aptitude tests and personality profiles that emphasized mathematical ability and logical thinking over business knowledge or managerial savvy. In fact, many of these early selection mechanisms seemed to favor traits that were entirely opposed to traditional corporate virtues. “Look for those who like intellectual challenge rather than interpersonal relations or managerial decision-making. . . . Do not consider the impulsive, the glad hander, or the ‘operator.’”51 The one personality characteristic of programmers that appeared to be universally recognized was their “disinterest in people.” According to an influential study by the SDC personnel psychologists Dallis Perry and William Cannon, compared with other corporate employees, “programmers dislike activities involving close personal interaction. They prefer to work with things rather than people.”52 Whether this lack of sociability was an inherent trait of talented programmers, a reflection of self-selection within the profession, or an undeserved stereotype is largely irrelevant: the point is that the perception that programmers were “difficult” was widespread in the industry. As the management consultant Richard Brandon described it, the average programmer was “often egocentric, slightly neurotic, and he borders upon a limited schizophrenia.” As a group, programmers could be singled out in any corporation by their higher incidence of “beards, sandals, and other symptoms of rugged individualism or nonconformity.”53 Programmers were hardly a group that seemed destined to get along well with traditional managers.
There is some truth to the perception of the “longest-haired computer theorists” as corporate outsiders.54 Leaving aside the fact that at least some working programmers took their artistic persona seriously enough to flout corporate conventions of dress and appearance, the need to keep expensive computers running as continuously as possible meant that many programmers worked nonstandard hours. During the day the machine operators had privileged access to the machines, so programmers frequently worked at night and were therefore not always available during traditional business hours. The need to work nights posed a particular problem for female programmers, who were frequently barred by company policy from being on the premises during the off-hours.55 Combined with their sometimes slovenly appearance, this practice of keeping odd hours suggested to more conventional employees that programmers considered themselves superior. The direct supervisors of computer personnel might have understood the underlying reasons for these apparent eccentricities, but the majority of managers did not. The fact that data processing was seen as a service department within the larger organization also did nothing to ingratiate programmers with their colleagues. Whereas most other employees saw themselves as part of a collective endeavor to make things or provide services, service staffs were seen as necessary but nonproductive second-class citizens. They were essentially just an overhead cost, like heat or electricity.
But despite this latent, low-level corporate resentment of computer specialists, there were few overt expressions of outright hostility. The general consensus through the mid-1960s seemed to be that computer programming was somehow an “exceptional” activity, unconstrained by the standard organizational hierarchy and controls. “Generating software is ‘brain business,’ often an agonizingly difficult intellectual effort,” argued one article in Fortune magazine in 1967. “It is not yet a science, but an art that lacks standards, definitions, agreement on theories and approaches.”56 The anecdotal evidence seemed to indicate that “the past management techniques so successful in other disciplines do not work in programming development. . . . Nothing works except a flying-by-the-seat-of-the-pants approach.”57 In short, computer programming was “the kind of work that is called creative [and] creative work just cannot be managed.”58
The word creative and its various analogs have frequently been used to describe the work of computer specialists—and computer programmers in particular—most often in the context of discussions about their alleged unmanageability. But what did it mean to do creative work in the corporate context? Surely computer programming is not the only white-collar occupation that requires skill, ingenuity, and imagination? And why did the supposed creativity of programmers suddenly, in a relatively short period in the late 1960s, become a major professional liability rather than the asset it had been just a few years earlier?
The earliest and most obvious references to programmer creativity appear in discussions of the black art of programming in the 1950s. For the most part these references are disparaging, referring to the arcane and idiosyncratic techniques as well as the mysterious—and quite possibly chimerical—genius of individual programmers. John Backus, for example, had no use for such expressions of programmer creativity.59 Yet for many others the idea of the programmer as artist was compelling and captured useful truths. When Frederick Brooks described the programmer as a poet, building “castles in the air, from air, creating by exertion of the imagination,” he meant the metaphor to be taken seriously.60 The noted computer scientist Donald Knuth also frequently portrayed programming as a legitimate literary genre, and went so far as to suggest that it “is best regarded as the process of creating works of literature, which are meant to be read.”61 Although references to programming as a creative activity in this artistic sense pervade the technical and popular literature on computing, and play an important role in defining the programming community’s self-identity from the 1950s to the present, this is not the sense in which programming was considered creative by most corporate managers.62
The meaning of creativity most often mobilized in the corporate context was intended to differentiate the mechanical tasks associated with programming—the coding of machine instructions, for example—from the more intellectual activities associated with design and analysis. As was described in chapter 2, early attempts to define programming in terms of coding did not long survive their infancy. Translating even the simplest and most well-defined algorithm into the limited set of instructions understood by a computer turned out to require a great deal of human ingenuity. This is one expression of programmer creativity. But more important, the process of constructing the algorithm in the first place turned out to be even more challenging. Even the most basic human cognitive processes are surprisingly difficult to reduce to a series of discrete and unambiguous activities. The skills required to do so were not just technical but also social and organizational. In order to computerize a payroll system, for instance, an applications developer had to interview everyone currently involved in the payroll process, comprehend and document their contributions to the process in explicit detail—taking care to account for exceptional cases and infrequent variations from normal procedures—and then translate these complex activities first into a form that other programmers could understand and eventually into the precise commands required by the computer. And since the payroll department did not operate in isolation, the developer also had to work with other departments to coordinate activities, standardize the required inputs and outputs to the procedures, and negotiate points of conflict and contention. The development effort also involved producing documentation, training users, arranging testing and verification procedures, and managing the logistics of implementation and rollout. All of this had to happen without a major interruption of service, since missing a payroll cycle would make everyone in the company extremely unhappy. These were the activities associated with the broad term software development. It is not hard to see why such development required creativity, or why such expressions of creativity could be perceived as threatening. As Carl Reynolds of the Computer Usage Corporation described the situation, “There’s a tremendous gap between what the programmers do and what the managers want, and they can’t express these things to each other.”63
In many companies, the various activities associated with software development were split among several categories of computer personnel. The primary division was between programmers and systems analysts. The systems analysts were charged with the more organizational and design-related activities, and the programmers with the more technical elements. But although many companies maintained seemingly rigid hierarchies of occupational categories—junior programmer, senior programmer, systems analyst, and senior systems analyst—in practice these neat divisions of labor quickly broke down.64 In any case, the rest of the corporation generally referred to both groups simply as programmers. Computer programming, broadly defined to include the entire range of activities associated with designing, producing, and maintaining heterogeneous software systems, remained an activity with ambiguous boundaries, a combination of technical, intellectual, and organizational expertise that increasingly brought programmers into conflict with other white-collar employees.
The first glimpse of this potential for conflict can be seen in a Price Waterhouse report from 1959 called Business Experience with Electronic Computers. The report was the first book-length, comprehensive, publicly available study of corporate computing efforts, and it appears to have circulated widely. In it, a group of Price Waterhouse consultants concluded that the secret to success in computing was the availability of high-quality programming, and confirmed the conventional wisdom that “high quality individuals” were the “key to top grade programming.” Why? Because “to ‘teach’ the equipment, as is amply evident from experience to date, requires considerable skill, ingenuity, perseverance, organizing ability, etc. The human element is crucial in programming.” In emphasizing the “considerable skill, ingenuity, perseverance, [and] organizing ability” required of programmers, the study deliberately conflated the roles of programmer and analyst. In fact, its authors suggested, “the term ‘programmer’ . . . is unfortunate since it seems to indicate that the work is largely machine oriented when this is not at all the case. . . . [T]raining in systems analysis and design is as important to a programmer as training in machine coding techniques; it may well become increasingly important as systems get more complex and coding becomes more automatic.” Perhaps even more significantly, the study blurred the boundary between business experience and technical expertise. If anything, it privileged the technical, since “a knowledge of business operations can usually be obtained by an adequate expenditure of time and effort,” whereas “innate ability . . . seems to have a great deal to do with a man’s capacity to perform effectively in . . . systems design.”65
Management, Information, and Systems
As software projects expanded in scope to encompass not only traditional data processing applications (payroll, for example) but also management and control, computer personnel began to encroach on the domains of operational managers. The changing role of the computer in corporate management and the rising power of EDP professionals did not go unnoticed by other midlevel managers. As early as 1959, observers were noting a sense of “disenchantment” on the part of many managers. Overambitious computerization efforts had “placed stresses on established organizational relationships,” and demanded skills “not provided by the previous experience of people assigned to the task.”66 The increasing inclusion of computer personnel as active participants in all phases of software development, from design to implementation, brought them into increasing contact—and conflict—with other corporate employees.
The situation was complicated by the publication in 1958 of an article in the Harvard Business Review titled “Management in the 1980s,” in which Harold Leavitt and Thomas Whisler predicted a coming revolution in U.S. business management. Driven by the emergence of what they called “information technology,” this revolution would radically reshape the landscape of the modern corporation, completely reversing the recent trend toward participative management, recentralizing power in the hands of a few top executives, and utterly decimating the ranks of middle management. And although “major resistance” could be expected during the process of transforming “relatively autonomous and unprogrammed middle-management jobs” into “highly routinized programs,” the benefits offered to top-level executives meant that an information technology revolution would be inevitable.67
The central premise of Leavitt and Whisler’s vision was that information technology—which they described as a heterogeneous system composed of the electronic computer, operations research techniques, and sophisticated decision-support software—would largely eliminate the need for autonomous middle managers. Jobs that had previously required the discernment and experience of skilled managers would be replaced by scientifically “programmed” systems and procedures. “Just as planning was taken from the hourly worker and given to the industrial engineer,” so too would it be taken from the operational managers. Information technology allowed “the top to control the middle just as Taylorism allowed the middle to control the bottom.” The top would increasingly include what Leavitt and Whisler called a “programmer elite.” And although the programmer being referred to here was obviously a logistical or mathematical planner rather than a computer programmer, it was also clear that this new elite would be intimately familiar with computer technology and software design.68
Although “Management in the 1980s” is most generally cited for its role in introducing the term information technology, it is best understood in the context of a more general shift in management practices in the decades after the Second World War. The war had produced a series of “managerial sciences”—including operations research, game theory, and systems analysis—all of which promised a more mathematical and technologically oriented approach to business management. As Philip Mirowski and others have suggested, these nascent “cyborg” sciences were deeply connected to the emerging technology of electronic computing.69 Not only did many of these new techniques require a significant amount of computing power in and of themselves, but they also relied on the electronic computer as a central metaphor for understanding the nature of the modern bureaucratic organization.70 Many of the most visionary proposals for the use of the electronic computer in management rode into the corporation on the back of this new breed of expert consultants.
Foremost among these new computer radicals was Herbert Simon, who in 1949 helped found Carnegie-Mellon University’s Graduate School of Industrial Administration (and who in 1978 was awarded a Nobel Prize for his work on the economics of rational decision making). In his 1960 book The New Science of Management Decision, Simon outlined his version of a machine-aided system of organizational management. An early pioneer in the field of artificial intelligence, Simon had no doubts about the ability of the electronic computer to transform organizations; as a result of advances in decision-support software, Simon argued, technologically sophisticated firms were “acquiring the technical capacity to replace humans with computers in a rapidly widening range of ‘thinking’ and ‘deciding’ tasks.” Within twenty-five years, he predicted, firms would “have the technical capability of substituting machines for any and all human functions in organizations.” Interestingly enough, Simon did not believe that this radical new use of the computer would lead to the creation of a computing elite but rather that improvements in artificial intelligence would lead to the elimination of the computer specialist altogether.71
The idea that “thinking machines” would soon replace expert computer programmers was not widely shared outside the artificial intelligence community, however. More common was the notion that the need for such decision makers could be made redundant by the development of an integrated management system that would feed information directly to high-level executives, bypassing middle managers completely. John Diebold described one version of such a system in an article in the Harvard Business Review in 1964. When Diebold had introduced the concept of “automation” more than a decade earlier, he had confined the use of automatic control systems to traditional manufacturing and production processes. But his article proposed a “bolder, more innovative” approach to automatic data processing (ADP) that blurred the boundaries between factory floor and office space. Calling ADP the “still-sleeping giant” of modern corporate management, Diebold described, in vividly organic terms, a single information system that would “feed” an entire business. This system would be “the arteries through which will flow the life stream of the business: market intelligence, control information, strategy decisions, feedback for change.” Gradually, the system would grow to encompass and absorb the entire organization. And after that, suggested Diebold, “management would never be the same.”72
The monolithic information system portrayed by Diebold became the management enthusiasm of the 1960s, variously referred to in the literature as the “total systems concept,” “management system,” “totally integrated management information system,” and most frequently, MIS. As Thomas Haigh has convincingly demonstrated, during the 1960s “a very broad definition of MIS spread rapidly and was endorsed by industrial corporations, consultants, academic researchers, management writers, and computer manufacturers.”73 Although important differences existed between the specific versions of MIS presented by these various champions, in general they shared several key characteristics: the assumption that information was a critical corporate and managerial asset; a general enthusiasm for the electronic computer and its ability to centralize managerial information; and the clear implication that such centralization would come at the expense of middle managers.
A New Theocracy—or Industrial Carpetbaggers?
Although the dream of the total management system never really came to fruition, the shift of power from operational managers to computer specialists did seem to occur in at least some organizations. In a 1967 follow-up to “Management in the 1980s” titled “The Impact of Information Technology on Organizational Control,” Thomas Whisler reiterated his view that information technology “tends to shift and scramble the power structure of organizations. . . . The decision to locate computer responsibility in a specific part of an organization has strong implications for the relative authority and control that segment will subsequently achieve.” It seemed unlikely, he argued, that anyone “can continue to hold title to the computer without assuming and using the effective power it confers.” He cited one insurance executive as saying that “there has actually been a lateral shift to the EDP manager of decision-making from other department managers whose departments have been computerized.” Whisler also quoted at length another manager who was concerned about the relative decline of managerial competence in relation to computer expertise: “The supervisor . . . has been replaced as the person with superior technical knowledge to whom the subordinates can turn for help. This aspect of supervision has been transferred, at least temporarily, to the EDP manager and programmers or systems designers involved with the programming. . . . [U]nderneath, the forward planning function of almost all department managers has transferred to the EDP manager.”74
Whisler was hardly alone in his assessment of the role of computing personnel in organizational power shifts. In 1962 the Harvard Business Review warned against “computer people . . . attempting to assume the role of high priests to the [electronic brain],” who would “ignore all the people with operating experience and concern themselves with looking for a place to apply some new trick technique.”75 A 1964 article in U.S. News and World Report asked if the computer was “running wild” within the corporation, and quoted one expert as saying that the “computer craze” would end as a “nightmare” for executives.76 In 1965, Robert McFarland warned of an “electronic power grab” in which computer specialists were “stealing” decision-making authority from top executives: “Control of data processing activities can mean control of the firm—without the knowledge of top management.”77 A textbook for managers from 1969 complained that “all too often management adopts an attitude of blind faith (or at least hope) toward decisions of programmers.”78 In her 1971 book How Computers Affect Management, Rosemary Stewart described how computer specialists mobilized the mystery of their technology to “impinge directly on a manager’s job and be a threat to his security or status.”79 The adoption of computer technology threatened to bring about a revolution in organizational structure that carried with it tangible implications for the authority of managers: “What has not been predicted, to any large degree, is the extent to which political power would be obtained by this EDP group. Top management has helped . . . by not doing their job and controlling computer systems.”80 The frequent association of computer boys with external consultants only compounded the resentment of regular employees.
There is no doubt that by the end of the decade, traditional corporate managers were extremely aware of the potential threat to their occupational territory posed by the rise of computer professionals. Thomas Alexander, in his Fortune article in 1969, noted a growing cultural clash between programmers and managers: “Managers . . . are typically older and tend to regard computer people either as mere technicians or as threats to their position and status—in either case they resist their presence in the halls of power.”81 In that same year, Michael Rose, in his Computers, Managers, and Society, suggested that local departmental managers
obviously tend to resist the change. For a start, it threatens to transform the concern as they know and like it. . . . At the same time the local’s unfamiliarity with and suspicion of theoretical notions leave him ill-equipped to appreciate the rationale and benefits of computerization. It all sounds like dangerously far-fetched nonsense divorced from the working world as he understands it. He is hardly likely to hit it off with the computer experts who arrive to procure the organizational transformation. Genuine skepticism of the relevance of the machine, reinforced by emotional factors, will drive him towards non-cooperation.82
It is not difficult to understand why many managers came to fear and dislike computer programmers and other software specialists. In addition to the usual suspicion with which established professionals generally regarded unsolicited changes in the status quo, managers had particular reasons to resent EDP departments. The unprecedented degree of autonomy that corporate executives granted to computer people seemed a deliberate affront to the local authority of departmental managers. The “inability or unwillingness of top management to clearly define the objectives of the computer department and how it will be utilized to the benefit of the rest of the organization” led many operational managers to “expect the worst and, therefore, begin to react defensively to the possibility of change.”83 In the eyes of many nontechnical managers, the personnel most closely identified with the digital computer “have been the most arrogant in their willful disregard of the nature of the manager’s job. These technicians have clothed themselves in the garb of the arcane wherever they could do so, thus alienating those whom they would serve.”84
The Revolt of the Managers
In response to this perceived challenge to their authority, managers developed a number of interrelated strategies intended to restore what they saw as the proper balance of power in the organizational hierarchy.
The first was to define programming as an activity, and by extension programmers as professionals, in such a way as to assign it and them a subordinate role as mere technicians or service staff workers. As the sociologists Haroun Jamous and Bernard Peloille argued in their groundbreaking study of the organizational politics of professional development, this technique of reducing the contributions of competing groups to the merely technical is a time-honored strategy for defending occupational and professional boundaries.85 We have already seen some of the ways in which the rhetoric of management literature reinforced the notion that computer specialists were self-interested, narrow technicians rather than future-minded, bottom-line-oriented good corporate citizens. “People close to the machine can also lose perspective,” maintained one computer programming “textbook” for managers. “Some of the most enthusiastic have an unfortunate knack of behaving as if the computer were a toy. The term ‘addictive’ comes to mind.”86 Managers emphasized the youthfulness and inexperience of most programmers. The results of early aptitude tests and personality profiles—those that emphasized their “dislike for people” and “preference for . . . risky activities”—were widely cited as examples of the “immaturity” of the computer professions. In fact, one of the earliest and most widely cited psychological profiles of programmers suggested that there was a negative correlation between programming ability and interpersonal skills.87
The perception that computer programmers were particularly antisocial, that they “preferred to work with things rather than people,” reinforced the notion that programming was an inherently solitary activity, ill suited to traditional forms of corporate organization and management. The same qualities that had previously been thought essential indicators of programming ability, such as creativity and a mild degree of personal eccentricity, now began to be perceived as merely unprofessional. As part of their rhetorical construction of the applications crisis as a problem of programmer management, corporate managers accused programmers of lacking professional standards and loyalties: “Too frequently these people [programmers], while exhibiting excellent technical skills, are non-professional in every other aspect of their work.”88
Another common strategy for deprecating computer professionals was to challenge their technical monopoly directly. If working with computers was in fact not all that difficult, then dedicated programming staffs were superfluous. One of the alleged advantages of the COBOL programming language usually touted in the literature was its ability to be read and understood—and perhaps even written—by informed managers.89 The combination of new programming technology and stricter administrative controls promised to eliminate management’s dangerous dependency on individual programmers: “The problems of finding personnel at a reasonable price, and the problem of control, are both solved by good standards. If you have a set of well-defined standards you do not need clever programmers, nor must you find yourself depending on them.”90 At the very least, managers could learn enough about computers to avoid being duped by the “garb of the arcane” in which many programmers frequently clothed themselves.91 At West Point, cadets were taught enough about computers to prevent them from “being at the mercy of computers and computer specialists. . . . [W]e want them to be confident that they can properly control and supervise these potent new tools and evaluate the significance of results produced by them.”92
In much of the management literature of this period, computer specialists were cast as self-interested peddlers of whizbang technologies. “In all too many cases the data processing technician does not really understand the problems of management and is merely looking for the application of his specialty.”93 In the words of one Fortune 500 data processing executive, “They [EDP personnel] don’t exercise enough initiative in identifying problems and designing solutions for them. . . . They are impatient with my lack of knowledge of their tools, techniques, and methodology—their mystique; and sometimes their impatience settles into arrogance. . . . In sum, these technologists just don’t seem to understand what I need to make decisions.”94 The book New Power for Management emphasized the myopic perspective of programmers: “For instance, a technician’s dream may be a sophisticated computerized accounting system; but in practice such a system may well make no major contribution to profit.”95 Others attributed to them even more Machiavellian motives: “More often than not the systems designer approaches the user with a predisposition to utilize the latest equipment or software technology—for his resume—rather than the real benefit for the user.”96
Experienced managers stressed the critical differences between “real-world problems” and “EDP’s version of real-world problem.”97 The assumptions about programmers embedded in many of these accounts—that they were narrowly technical, inexperienced, and “poorly qualified to set the course of corporate computer effort”—resonated with many corporate managers.98 The accounts provided a convenient explanation for the burgeoning software crisis. Managers had in effect “abdicated their responsibility and let the ‘computer boys’ take over.”99 The fault was not entirely the managers’ own, though. Calling electronic data processing “the biggest rip-off that has been perpetrated on business, industry, and government over the past 20 years,” one author suggested that business executives had been actively prevented “from really bearing down on this situation by the self-proclaimed cloak of sophistication and mystique which falsely claims immunity from normal management methods. They are still being held at bay by the computer people’s major weapon—the snow job.”100 Computer department staffs, although “they may be superbly equipped, technically speaking, to respond to management’s expectations,” are “seldom strategically placed (or managerially trained)—to fully assess the economics of operations or to judge operational feasibility.”101 Only the restoration of the proper balance between computer personnel and managers could save software projects from a descent into “unprogrammed and devastating chaos.”102
The Road to Garmisch
In the late 1960s, new perspectives on the problem of programmer management began to appear in the industry literature. “There is a vast amount of evidence to indicate that writing—a large part of programming is writing after all, albeit in a special language for a very restricted audience—can be planned, scheduled and controlled, nearly all of which has been flagrantly ignored by both programmers and their managers,” argued Robert Gordon in 1968 in a review of contemporary software development practices.103 Although it was admittedly true “that programming a computer is more an art than a science, that in some of its aspects it is a creative process,” this new perspective on software management suggested that “as a matter of fact, a modicum of intelligent effort can provide a very satisfactory degree of control.”104
It was the NATO Conference on Software Engineering in 1968 that irrevocably established software management as one of the central rhetorical cornerstones of all future debates about the nature and causes of the software crisis. In the fall of that year, as mentioned earlier, a diverse group of influential computer scientists, corporate managers, and military officials gathered in Garmisch, Germany, to discuss their growing concern that the production of software had become “a scare item for management . . . an unprofitable morass, costly and unending.” The solution to the budding software crisis, the conference organizers claimed, was for computer programmers to embrace an industrialized software engineering approach to development. By defining the software crisis in terms of the discipline of software engineering, the NATO conference set an agenda that influenced many of the technological, managerial, and professional developments in commercial computing for the next several decades.