1. JOHN McCARTHY

“Solving today’s problems tomorrow”

“What do judges know that we cannot eventually tell a computer?” John McCarthy asked himself with a rhetorical flourish in a debate at Stanford about the limits, if any, to artificial intelligence research. In 1973, the answer was obvious to McCarthy: “Nothing.”1 The leader of the university’s highly regarded artificial intelligence lab, McCarthy appeared part mad scientist, part radical intellectual, with the horn-rimmed glasses and pocket protector of an engineer and the bushy hair and rough beard of a firebrand. McCarthy’s opposite that day was Joseph Weizenbaum, a dapper, pipe-smoking MIT computer science professor, who by the 1970s had come to challenge everything McCarthy stood for. Where McCarthy believed nothing human was beyond the capability of machines when properly instructed, Weizenbaum insisted that some tasks—like sitting in judgment of the accused or giving counsel to those in distress—could only be entrusted to people. Even to consider otherwise was to commit a “monstrous obscenity.”2 A Jewish refugee from Nazi Germany at age thirteen, Weizenbaum was ever on the watch for those whom he suspected didn’t believe that all human life was sacred, whether because of a commitment to racial superiority or a conviction that nothing innate separated people from machines.3

To Weizenbaum, McCarthy’s ease in casting aside the ethical concerns of others was the clearest sign yet that elite AI scientists, whom Weizenbaum called the “artificial intelligentsia,” had lost their way. They would sacrifice anything for the cause, even their own humanity. Cases in point were the young computer experts—the hackers—whom McCarthy had nurtured in his labs, first at MIT and later at Stanford. “They work until they nearly drop, twenty, thirty hours at a time,” Weizenbaum wrote in Computer Power and Human Reason, his anti-AI manifesto published in 1976. “Their food, if they arrange it, is brought to them: coffee, Cokes, sandwiches. If possible, they sleep on cots near the computer. But only for a few hours—then back to the console or the printouts. Their rumpled clothes, their unwashed and unshaven faces and their uncombed hair all testify that they are oblivious to their bodies and to the world in which they move. They exist, at least when so engaged, only through and for the computers. These are computer bums, compulsive programmers.”4

Naturally, this assessment of the stars of his lab struck McCarthy as unfair, but that last slur—bum!—from a fellow computer scientist really stung. First, McCarthy knew that the hackers’ enthusiasm, even compulsion, was crucial to running a successful lab, something Weizenbaum apparently didn’t need to consider now that he was more interested in ethics than research. “We professors of computer science sometimes lose our ability to write actual computer programs through lack of practice and envy younger people who can spend full time in the laboratory,” McCarthy explained to Weizenbaum in a review of Computer Power, which he titled “An Unreasonable Book.” “The phenomenon is well known in other sciences and in other human activities.”5 But more, McCarthy saw critics like Weizenbaum as lacking the scientist’s relentless drive to understand the world, no matter where that drive may take him, perhaps even to programming computers to think like judges. “The term ‘appropriate science’ suggests that there is ‘inappropriate science,’” McCarthy said at a later debate. “If there is ‘inappropriate science,’ then there is ‘appropriate ignorance’ and I don’t believe in it.”6 His young “computer bums” had nothing to apologize for.

McCarthy came easily to this take-no-prisoners debating style on behalf of science and research—ideological combat was in his blood. As a teenager, John McCarthy wasn’t only a mathematics prodigy—skipping two grades at Belmont High School in Los Angeles and teaching himself advanced calculus from college textbooks he bought second-hand—he was also a young Marxist foot soldier, a full member of the Communist Party at the age of seventeen, skipping ahead there, too.7

John McCarthy was born back east to parents who represented two prominent streams within American radicalism—the Jewish and the Irish. His mother, Ida Glatt, was a Lithuanian Jewish immigrant who grew up in Baltimore and managed to attend the prestigious women’s school Goucher College, probably with the assistance of the city’s wealthier German Jewish community. His father, Jack, an Irish Catholic immigrant, hoped to avoid deportation for his political activities by claiming to be a native Californian whose birth certificate went missing in the San Francisco earthquake. Years before Ida and Jack ever met, each had already led popular protests: Ida was at the head of Goucher students marching for women’s suffrage, and Jack was urging stevedores in Boston to stop loading a British ship in solidarity with a hunger strike of Terence MacSwiney, a jailed Irish Republican politician.8

Though Jack never made it past the fourth grade, his son John remembered his literary temperament, whether that meant quoting Browning and Kipling or reciting poems and lyrics about the Irish cause.9 Ida was an accomplished student and an idealistic champion for the poor and oppressed. She graduated from Goucher in 1917 with a degree in political economy and went to the University of Chicago to do graduate work that quickly spilled into labor organizing. Due to her refined education and her sex, Ida was brought under the umbrella of the Women’s Trade Union League, an organization identified with the wealthy progressive women who helped fund it—the so-called mink brigade. The league’s motto connected workers’ rights to justice for women and families: “The eight hour day, a living wage, to guard the home.”10

A few years later, Ida and Jack were introduced in Boston—she hired him to build a set of bookshelves, according to family lore—and Ida was fully radicalized. In addition to his carpentry, Jack ran organizing drives for fishermen and dry-cleaning deliverers, trolley workers and longshoremen. Their first child, John, was born there in 1927, though soon the family moved to New York, where the couple worked for the Communist Party newspaper the Daily Worker; Ida was a reporter and Jack was a business manager.11 For the sake of the health of young John, who had a life-threatening sinus condition, they relocated to Los Angeles, then known for its clean, dry air. Ida became a social worker, and Jack continued labor organizing, serving at one point as an aide to Harry Bridges, the radical San Francisco–based longshoremen’s union leader who was West Coast director of the CIO during the 1930s and 1940s.

Young John regained his strength and early on showed an interest in math and science. His parents gave him a party-approved volume, the English translation of a children’s science book popular in the Soviet Union, 100,000 Whys, which ingeniously explains biology, chemistry, and physics by looking at how an ordinary household works—the stove, the faucet, the cupboard. The slim volume opens with the observation, “Every day, somebody in your house starts the fire, heats water, boils potatoes. . . .”12 Later, he would purchase used calculus textbooks.

In an act of “extreme arrogance,” John applied to a single college, the nearby California Institute of Technology. His equivalent of a college essay was a single sentence: “I intend to be a professor of mathematics.” John was accepted and completed his work there in two and a half years, though graduation was delayed two years because he was suspended twice for refusing to attend mandatory gym classes. One of the suspensions led to a detour to the army, at nearby Fort MacArthur, where McCarthy and other young enlisted men were entrusted with the calculations that determined whether candidates deserved a promotion—“the opportunity for arbitrary action by just us kids was extraordinary.” He harbored little anger at the delay in obtaining his diploma, noting that after World War II the army became a rather genial place. “Basic training was relaxing compared to physical education at Caltech,”13 he recalled. After graduation in 1948, McCarthy spent a year at Caltech as a mathematics graduate student in preparation for becoming a professor. Two events that year propelled McCarthy toward what would become his lifelong quest: to create a thinking computer.

First, before ever setting eyes on a computer, McCarthy studied how to program one. He attended lectures about the Standards Western Automatic Computer, which wouldn’t be completed and installed in Los Angeles until two years later. The word computer was being transformed during these years. Once, it had been used to describe a person, usually a woman, who carried out complicated calculations. But by 1948, computer could also describe a machine that in theory was able to follow instructions to perform any task, given enough time. This framework for a computer was proposed by the great British mathematician Alan Turing and called the “universal Turing machine.” By breaking down any action, up to and including thought, into merely a sequence of discrete steps, each of which could be achieved by a computer, Turing would give hope to dreamers like McCarthy and convince them that, in McCarthy’s words, “AI was best researched by programming computers rather than by building machines.”14

Turing was so confident that computers could be instructed how to think that he later devised a way of verifying this achievement when it inevitably arrived, through what he called “the imitation game.” Turing’s contention with his game, which is now more commonly called the “Turing test,” was that if a computer could reliably convince a person that he was speaking with another person, whom he could not see, then it should be considered intelligent. The true potency of the test, writes the historian Paul N. Edwards, was its limited, machine-friendly definition of intelligence. “Turing did not require that the computer imitate a human voice or mimic facial expressions, gestures, theatrical displays, laughter, or any of the thousands of other ways humans communicate,” Edwards writes. “What might be called the intelligence of the body—dance, reflex, perception, the manipulation of objects in space as people solve problems, and so on—drops from view as irrelevant. In the same way, what might be called social intelligence—the collective construction of human realities—does not appear in the picture. Indeed, it was precisely because the body and the social world signify humanness directly that Turing proposed the connection via remote terminals.”15

Turing’s ideas were highly speculative: computers and mathematicians hardly seemed up to the task of creating an artificial intelligence, even under Turing’s forgiving definition. Nonetheless, interest was building. In 1948, McCarthy attended an unprecedented conference at Caltech, “The Hixon Symposium on Cerebral Mechanisms in Behavior,” which included talks like “Why the Mind Is in the Head” and “Models of the Nervous System.”16 The featured speaker was the mathematician and computer pioneer John von Neumann, who cited Turing to explain how a computer could be taught to think like a human brain. There also were lectures from psychologists who worked the other way around, finding parallels between the human brain and a computer. Within this intellectually charged atmosphere, McCarthy committed himself to studying “the use of computers to behave intelligently.”17

When McCarthy departed Caltech, and the family home, for Princeton the next year, he was on course to earn a PhD in mathematics—there was no such field as artificial intelligence, or even computer science for that matter. By virtue of von Neumann’s presence nearby at the Institute for Advanced Study, McCarthy was studying at one of the few hotbeds for these ideas. He soon met Marvin Minsky, another mathematics graduate student, who became a friend and an AI fellow traveler. McCarthy also fell in with a circle of mathematicians, including John Forbes Nash, who were devising game theory, a scheme for modeling human behavior by describing the actions of imaginary, self-interested individuals bound by clear rules meant to represent laws or social obligations.

Still on the left politically, McCarthy hadn’t accepted game theory’s cynical view of people and society. He recalled Nash fondly, but considered him peculiar. “I guess you could imagine him as though he were a follower of Ayn Rand,” McCarthy said, “explicitly and frankly egotistical and selfish.” He was there when Nash and others in their circle helped create a game of deceit and betrayal that Nash called “Fuck Your Buddy”; McCarthy said he came up with a more family-friendly name for the game, “So Long, Sucker.” It stuck. The ruthless strategy needed to excel in “So Long, Sucker” offended McCarthy, and he lashed out at Nash one time as the game descended into treachery: “I remember playing—you have to form alliances and double cross at the right time. His words toward me at the end were, ‘But I don’t need you anymore, John.’ He was right, and that was the point of the game, and I think he won.”18

Exposed to the rationalist ideas of thinkers like von Neumann, Nash, and Minsky, McCarthy was becoming increasingly intellectually independent. He was finally away from his parents—even in the army he had been assigned to a nearby base—and had the freedom to drift from radical politics. McCarthy tells a story of dutifully looking up the local Communist Party cell when he arrived in Princeton and finding that the only active members were a janitor and a gardener. He passed on that opportunity and quietly quit the party a few years later. During the Red Scare led by a different McCarthy (Senator Joseph), he had to lie a couple of times about having been a Communist, “but basically the people that I knew of who were harmed were people who stuck their necks out.”19 McCarthy didn’t, and thus began a steady shift to the right.

Toward the end of his life, when his political transformation from left to right was complete, McCarthy wrote an essay trying to explain why otherwise sensible people were attracted to Marxism. One of the attractions he identified—“the example of hard work and self-sacrifice by individual socialists and communists in the trade union movement”—undoubtedly sprang from the committed political lives of his parents, Ida and Jack. Nonetheless, McCarthy rates the Marxist experiment a terrible blight on human history and observes about his own radical upbringing, “An excessive acquaintance with Marxism-Leninism is a sign of a misspent youth.”20

McCarthy’s “misspent youth,” however, is what gives narrative coherence to his life, even if it is the kind of narrative O. Henry liked to tell. It goes something like this: Man spends his life shedding the revolutionary ideology of his upbringing and its dream of a utopian society without poverty or oppression to commit instead to a life of scientific reason. The man becomes an internationally recognized computer genius and runs a prestigious lab. Over time, the man’s scientific research fuels a new revolutionary ideology that aspires to create a utopian society without poverty or oppression. In other words, if McCarthy believed as a young man that adopting the rational values of science and mathematics would offer refuge from the emotional volatility of politics, he couldn’t have been more mistaken. By the end of his life, McCarthy was more political agitator than scientist, even if he continued to see himself as the epitome of a rational being.

After McCarthy completed his PhD at Princeton in pure math, he had little direction in his academic career—mainly he was struck by what he considered the shallowness of his own mathematical research, especially when compared with the depth of the work of Nash and the others in his circle. McCarthy had original ideas he would speculate on, but he wondered if that was enough for the highest levels of mathematics. McCarthy was hired by Stanford as an assistant professor and then quickly let go—“Stanford decided they’d keep two out of their three acting assistant professors, and I was the third.”21

McCarthy continued his academic career at Dartmouth, where in 1955 he began planning a summer workshop on thinking computers. There wasn’t yet a term describing this type of research. “I had to call it something, so I called it ‘artificial intelligence,’” he recalled. An earlier name for the topic, automata studies, came from the word for self-operating machines, but didn’t describe the goal nearly as breathlessly as artificial intelligence did.22 The Dartmouth summer workshop matched up McCarthy and Minsky with famous names in computing: the information theorist Claude Shannon, who had left Bell Labs for MIT, and Nathaniel Rochester, an IBM researcher. Perhaps the most famous name of them all, von Neumann, would not be there: by the summer of 1956, he was too sick to attend. The youthful arrogance of Minsky and McCarthy was on bright display throughout the proposal to the Rockefeller Foundation asking for $13,500 to cover stipends, travel expenses, and the like, including this succinct description of their core belief: “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”23

Buoyed by the success of the conference, which now has its own commemorative plaque on the Dartmouth campus, McCarthy maneuvered himself to MIT. First, he persuaded the head of the mathematics department at Dartmouth, John Kemeny, to arrange a fellowship, which McCarthy elected to take at MIT, and then, “I double-crossed him by not returning to Dartmouth, but staying at MIT,” he recalled. By 1958, Minsky had arrived at MIT, too; the next year, the two were running the artificial intelligence project there. MIT was so flush with government money in those years that administrators offered the newly arrived junior professors the funds to support six mathematics graduate students, no questions asked.24

McCarthy and Minsky’s graduate students were put to work applying their training in mathematical logic to computers; they were asked to find ways to represent the outside environment, as well as a brain’s stored knowledge and thought processes, by means of a series of clearly defined statements. In this sense, the early AI project was conceived as a reverse-engineering problem. How to build an intelligent computer? Well, first you “open up” a person and study in detail what makes him tick. For example, Minsky and his graduate students would ask probing questions of children—so probing that Margaret Hamilton, an MIT graduate student at the time and later a famous software engineer, recalls Minsky’s team making her three-year-old daughter cry during an experiment that had a researcher read a computer’s critical comments back to her. “That’s how they talked back then; they thought computers were going to take over the world then,” Hamilton said.25 In one unpublished paper, called “The Well-Designed Child,” McCarthy tried to detail what we know and don’t know about the tools for reasoning that human babies are born with. A short section, titled “Unethical Experiment,” shows just how curious he was: “Imagine arranging that all a baby ever sees is a plan of a two-dimensional room and all his actions move around in the room. Maybe the experiment can be modified to be safe and still be informative.”26

From what AI researchers deduced about people, by experiment or intuition, they devised computer code to imitate the process step by step, algorithm by algorithm. In an essay for Scientific American in 1966, McCarthy made the same confident assertion he had first expressed as part of the Dartmouth conference, that nothing, in theory, separated a computer from a person. “The computer accepts information from its environment through its input devices; it combines this information, according to the rules of the program stored in its memory, with information that is also stored in its memory, and it sends information back to its environment through its output devices,” he wrote, which was just the same as what people do. “The human brain also accepts inputs of information, combines it with information stored somehow within and returns outputs of information to its environment.”27 McCarthy’s every instinct was to demystify the process of human thinking and intelligence. Behavior could be explained by “the principle of rationality”—setting a goal (not necessarily a rational goal) and then coming up with a plan to achieve it.28

The tricky thing for a computer to replicate was ordinary common sense, not differential calculus, McCarthy concluded. By 1968, a robot in McCarthy’s lab had an arm with refined touch, yet it still could not tie a pair of shoes. Maddening. “I have observed two small children learn how to do this, and I don’t understand how they learn how to do it,” he said. “The difficulty in this case is not so much in getting the sense itself but programming what to do with it.”29 Thus, McCarthy spent a lot of time trying to understand precisely how people got through daily life. He was forever challenging his intelligent machines with “mundane and seemingly trivial tasks, such as constructing a plan to get to the airport,” or reading and comprehending a brief crime report in the New York Times.30
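
What did “constructing a plan” amount to in practice? McCarthy’s own formalism was mathematical logic—his situation calculus—but the flavor of the exercise can be suggested with a toy sketch: describe the everyday world as a handful of clearly defined situations and actions, then search for a sequence of steps that reaches the goal. The sketch below is purely illustrative; its states, actions, and the airport errand are invented here, not drawn from McCarthy’s papers.

```python
from collections import deque

# A toy model of the "get to the airport" errand: each situation is a state,
# and each action maps one situation to another. These states and actions
# are invented for illustration only.
ACTIONS = {
    "at home": {"walk to car": "in car"},
    "in car": {"drive to highway": "on highway", "drive downtown": "downtown"},
    "downtown": {"drive to highway": "on highway"},
    "on highway": {"take airport exit": "at airport"},
}

def plan(start, goal):
    """Breadth-first search for a sequence of actions leading from start to goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        for action, next_state in ACTIONS.get(state, {}).items():
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, steps + [action]))
    return None  # no sequence of known actions reaches the goal

print(plan("at home", "at airport"))
# ['walk to car', 'drive to highway', 'take airport exit']
```

Even this caricature shows where the trouble lay: the hard part was never the search itself but deciding which facts about the world—traffic, locked doors, shoelaces—had to be written down at all, which is exactly the common-sense problem that vexed McCarthy.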

In his pursuit of a thinking machine, McCarthy was making a bunch of dubious leaps, as he later came to acknowledge. To deconstruct a human being necessarily meant considering him in isolation, just as the designers of an intelligent machine would be considering it in isolation. Under this approach to intelligence there would be no pausing to consider the relevance of factors external to the brain, such as the body, social ties, family ties, or companionship. When people spoke about the mystery of the human mind, McCarthy would scoff. Could a machine have “free will”? A colleague recalled his answer, which sought to remove the sanctity of the term: “According to McCarthy, free will involves considering different courses of action and having the ability to choose among them. He summarized his position about human free will by quoting his daughter, Sarah, who said at age four, ‘I can, but I won’t.’”31

In the spring of 1959, four years after he finally learned to program properly, McCarthy taught the first freshman class in computers at MIT, turning on a generation of aspiring programmers who followed him to the artificial intelligence lab.32 The curriculum was brand-new, and the term computer science had only just surfaced. One of its first uses was in a report commissioned by Stanford around 1956 as it considered whether the study of computers could be an academic discipline. Stanford’s provost, Frederick Terman, was always thinking of ways to raise the university’s profile and add to its resources, and there was a tempting offer from IBM that it would donate state-of-the-art equipment, including its pathbreaking 650 mainframe, to any university that offered classes on scientific and business computing. Stanford asked a local computer expert, Louis Fein, to study the topic and propose an academically rigorous curriculum in the “science” of computers.

The university’s mathematics faculty was skeptical. One professor told Fein that he thought computing was like plumbing, a useful tool to be applied to other projects. “We don’t have plumbers studying within the university,” he told Fein. “What does computing have to do with intellect?” But Fein was inclined to flip the question around in his head. “Why is it that economics and geology is in a university,” he’d ask himself, “and why is it that plumbing isn’t?”33 For his report, “The Role of the University in Computers, Data Processing, and Related Fields,”34 Fein interviewed the nation’s computer experts—a total in the “tens” at the time, he recalled—including McCarthy, when he was still at Dartmouth, and Weizenbaum, when he was a researcher at Wayne State University. Fein recommended that Stanford move forward in creating a separate department for computer science, and the university, as a first step, introduced a division within the mathematics department.

Stanford’s administrators ignored an unstated conclusion in the report: namely, that Fein should be brought in to lead Stanford’s new computer science department. Terman’s deputy made it clear to Fein that Stanford would only be recruiting big names—“what we need is to get a von Neumann out here and then things will go well,” he was told.35 McCarthy might fit the bill, however. In 1962, Stanford offered him a big promotion—immediate tenure as a full professor in mathematics—as a prelude to joining a newly minted computer science department. Yet the quick return and advancement of McCarthy at Stanford after initial failure would give ammunition to those mathematicians who argued that computer work must not be rigorous. Wasn’t this big shot McCarthy only just passed over as a junior professor?

Intelligence was the coin of the realm in those years in a preview of how Silicon Valley would operate—that is, everyone was intent on identifying who was most brilliant and rewarding him, all the while looking for the magic formula to mass-produce intelligence in the computer lab. Not even McCarthy was above reproach. No one ever is, really. In 2014, the Web site Quora began a discussion, “How did Mark Zuckerberg become a programming prodigy?” Coders were eager to burst that bubble and explain that Zuckerberg, despite his success, was no prodigy. “The application he wrote was not unique, and not all that well-made,” one Quora contributor explained.36

Any doubts about McCarthy’s brilliance, based on his early failure as a mathematics professor, weren’t fair, of course. McCarthy had found his academic purpose within computer science. While at MIT, he had invented a powerful programming language called Lisp, which allowed complicated instructions to be given to the computer with relatively simple commands and is still used today. Paul Graham, the cofounder of the company Y Combinator, which encourages and invests in start-ups, considers McCarthy a hero programmer. “In 1958 there seem to have been two ways of thinking about programming. Some people thought of it as math . . . others thought of it as a way to get things done, and designed languages all too influenced by the technology of the day. McCarthy alone bridged the gap. He designed a language that was math. But designed is not really the word; discovered is more like it,” Graham writes in appreciation.37 Another academic achievement of McCarthy’s at MIT was the AI lab itself, which was filling up with eager young programmers who lived and breathed computers.

McCarthy had recognized early on that for the lab to hum with breakthroughs—and for such breakthroughs to build on each other—there had to be more computer time to spread around. The old system had been too limiting and too forbidding: hulking IBM mainframes guarded by specially trained operators who behaved as priests at the ancient temple. Students and the staff skirmished frequently as the hackers tried to trick the IBM guardians into leaving their post.38 The balance of power began to shift toward the hackers by the end of the 1950s with the arrival of the TX-0, a hand-me-down computer from a military electronics laboratory affiliated with MIT. The TX-0 did not need special operators. In 1961, it was joined by a donated prototype of the PDP-1, the first minicomputer from Digital Equipment Corporation, a Boston-area start-up founded by former MIT scientists who had worked on the TX-0. McCarthy proposed “time-sharing”39 as a way of replicating the flexibility of these new computers on the IBM mainframe while removing the bottleneck for computer time: instead of one person using the computer at a time, as many as twenty-four people could connect to it individually; eventually each person would have his own monitor and keyboard.40

Indeed, once the students interacted with a computer directly the seduction was complete. Young people camped out at McCarthy and Minsky’s lab, waiting often into the wee hours of the night for a time slot to open up. When they were dissatisfied with the performance of a vital piece of code that came with the PDP-1, the assembler, they asked the lab for the assignment of writing a better one and were given a weekend to do it. The technology writer Steven Levy recounts the “programming orgy” that ensued: “Six hackers worked around 250 man-hours that weekend, writing code, debugging and washing down take-out Chinese food with massive quantities of Coca-Cola.” Digital asked the hackers for a copy of the new assembler to offer to other owners of the PDP-1. They eagerly complied, Levy writes, and “the question of royalties never came up. . . . Wasn’t software more like a gift to the world, something that was reward in itself?”41

The ecstasy from directly interacting with the computer—not the chance for profits—was the basis of what became the Hacker Ethic, a set of principles these young programmers lived by as they pursued a personal relationship with computing. Its first principle was called the “hands-on imperative,” the belief that there could be no substitute for direct control over a computer, its inner workings, its operating system, its software. The deeper you went, the better—whether that meant manipulating the so-called machine language that the computer used at its core, or opening up the box to solder new pathways in the hardware that constituted the computer’s brain. The access had to be not only direct but casual, too. “What the user wants,” McCarthy wrote at the time, “is a computer that he can have continuously at his beck and call for long periods of time.”42

If this all sounds sort of sexual—or like an old-fashioned marriage—well, you aren’t the first to notice. McCarthy’s lab may as well have had a No Girls sign outside. Levy described this first generation of hackers at MIT as being locked in “bachelor mode,” with hacking replacing romantic relationships: “It was easy to fall into—for one thing, many of the hackers were loners to begin with, socially uncomfortable. It was the predictability and controllability of a computer system—as opposed to the hopelessly random problems in a human relationship—which made hacking particularly attractive.”43 Margaret Hamilton, the MIT graduate student who later led the software team for the Apollo space mission, would occasionally visit the AI lab and clearly had the chops to be one of the “hackers.” She says she had a similar mischievous approach to computing and “understood these kids and their excitement.” Even so, she kept a healthy distance. The age gap may have been small, but Hamilton must have seemed a generation older. In her early twenties, she was already married with a child; her computer talents were focused on a practical problem, helping to interpret research within MIT’s meteorology department. Hamilton remembers wondering how it was that the AI lab could always be filled with undergraduates. She didn’t remember having had so much free time when she was studying math as an undergraduate at Earlham College: “These kids weren’t worried about bad marks and satisfying their professors?”44

Of course, hackers didn’t care about grades. Computers had changed everything. There were other principles to the Hacker Ethic, as explicated by Levy—including “mistrust authority,” “all information should be free,” and “hackers should be judged by their hacking, not bogus criteria such as degrees, age, race, or position”—but each was really a reframing of the hands-on imperative. The goal was to get close to a computer, and anyone or anything that stood in your way was the enemy. In the competition for computer time, for example, the higher up the academic ladder you were, the greater access you had, whether you were adept or not. That isn’t right. Programming skill, not age or academic degrees, is all that should matter in deciding who gets his hands on a computer. Later, there were the businesses that insisted on charging for the programs necessary for you to operate your computer. No way. Information should be free and lack of financial resources shouldn’t be an impediment to programming.

These young men were fanatically devoted to computers, which they valued for being more reliable, helpful, and amusing than people. The hackers dreamed of spreading their joy. “If everyone could interact with computers with the same innocent, productive, creative impulse that hackers did, the Hacker Ethic might spread through society like a benevolent ripple,” Levy wrote in 1984, “and computers would, indeed, change the world for the better.”45

By 1962, the thirty-five-year-old McCarthy had already made a series of important professional contributions, arguably the most important of his career: the Lisp programming language, time-sharing, and an early framework for instructing a computer to play chess—a quintessentially intellectual activity once considered safely beyond machines. Nonetheless, as Stanford quickly became aware, McCarthy was ripe for the picking. He was annoyed that MIT was dragging its feet in implementing his beloved time-sharing idea; an offer of full tenure there would be a few years off, if it came at all. The cold Cambridge winters weren’t helping matters, either. McCarthy accepted Stanford’s offer.

McCarthy was a big get for Stanford, and not just because he was coming from the pinnacle, MIT. McCarthy was an academic rock star, the kind of professor students wanted to work with. He was indulgent of his students; his field, artificial intelligence, was on the cutting edge. By contrast, George Forsythe, the first computer science professor at Stanford and the first chairman of the department, had a wonky specialty, numerical analysis. At least he had a sense of humor about it. “In the past 15 years,” he wrote in 1967, “many numerical analysts have progressed from being queer people in mathematics departments to being queer people in computer science departments!”46 AI research and programming languages and systems, on the other hand, would be where the growth in computer science would occur because they “seem more exciting, important and solvable at this particular stage of computer science.”47 McCarthy was set up in a remote building that had been abandoned by a utility company, where he went to work re-creating the MIT lab as the Stanford Artificial Intelligence Lab (SAIL).

“What you get when you come out to Stanford is a hunting license as far as money is concerned,” McCarthy recalled, and he already had well-established, MIT-based connections to DARPA, the defense department’s research group. “I am not a money raiser,” McCarthy said. “I’m not the sort of person who calls somebody in Washington and makes an appointment to see him and says, look here, look at these great things, why don’t you support it? As long as you could get money by so to speak just sending away for it, I was O.K., but when it took salesmanship and so forth then I was out of it.”48 These government funds came with barely any strings attached, and no supervision to speak of: “I was the only investigator with a perfect record,” he liked to say. “I never handed in a quarterly progress report.”49 Because of this benign neglect, McCarthy was able to use money assigned to artificial intelligence research to support a series of important improvements in how computers worked, including individual display terminals on every desk, computer typesetting and publishing, speech recognition software, music synthesizers, and self-driving cars. By 1971, SAIL had seventy-five staff members, including twenty-seven graduate students pursuing an advanced degree, who represented a range of fields: mathematics, computer science, music, medicine, engineering, and psychiatry.50

The transfer from MIT to Stanford led to some California-esque adjustments. To start, there was a sauna and a volleyball court.51 And while still members of a boys’ club, SAIL’s hackers were more likely to take notice of the opposite sex. A Life magazine profile from 1970 helped establish Stanford’s reputation as the home of a bunch of horny young scientists, so different from asexual MIT. The article quoted an unnamed member of the team programming a computer psychiatrist and “patient” as saying they were expanding the range of possible reactions a computer can experience: “So far, we have not achieved complete orgasm.”52 The lab’s administrators insisted to the student newspaper that the quote was made up, but a reputation was taking shape.53 In one notorious episode a year later, some computer science students shot a pornographic film in the lab for their abnormal psychology class, recruiting the actress through an ad in the student newspaper seeking an “uninhibited girl.”54 The film was about a woman with a sexual attraction to a computer—for these purposes, one of the experimental fingers attached to a computer used in robotics research proved an especially useful prop. The entire lab was able to watch the seduction scene through an in-house camera system connected to the time-sharing terminals dispersed throughout the building.55

At this point in the history of AI, researchers were intrigued by the idea of a computer having sex with a woman, which was central to the plot of Demon Seed, a 1970s horror film. Makes sense: there was still confidence (or fear) that AI researchers would succeed in making independent thinking machines to rival or surpass humans. The male scientists identified with the powerful computer they were bringing to life. When the goals for AI had diminished, and computers would only imitate thought rather than embody it, the computers became feminized, sex toys for men in movies like Weird Science, Ex Machina, and Her.

Despite the sexual hijinks and the California sun, the Stanford lab was otherwise quite similar to the MIT lab in its isolation and inward-looking perspective. Located five miles outside of campus, SAIL provided built-in sleeping quarters in the attic for those hackers who wouldn’t or couldn’t leave; the signs on the doors were written in the elvish script invented for The Lord of the Rings. One visitor from Rolling Stone described the scene in 1972: “Long hallways and cubicles and large windowless rooms, brutal fluorescent light, enormous machines humming and clattering, robots on wheels, scurrying arcane technicians. And, also, posters and announcements against the Vietnam War and Richard Nixon, computer print-out photos of girlfriends, a hallway-long banner SOLVING TODAY’S PROBLEMS TOMORROW.”56

During these rocking, carefree years, McCarthy and his team were forced to recognize just how flawed the original artificial intelligence project was. They were stuck in a research quagmire, as McCarthy freely admitted to a reporter for the student newspaper: “There is no basis for a statement that we will have machines as intelligent as people in 3 years, or 15 years, or 50 years, or any definite time. Fundamental questions have yet to be solved, and even to be formulated. Once this is done—and it might happen quickly or not for a long time—it might be possible to predict a schedule of development.”57 By accepting this new reality, McCarthy freed himself to write a series of far-sighted papers on how computers could improve life without achieving true artificial intelligence.

In one such paper, from 1970, McCarthy laid out a vision of Americans’ accessing powerful central computers using “home information terminals”—the time-sharing model of computer access brought to the general public.58 The paper describes quite accurately the typical American’s networked life to come, with email, texting, blogs, phone service, TV, and books all digitally available through what is the equivalent of the digital “cloud” we rely on today. He was so proud of that paper that he republished it with footnotes in 2000, assessing his predictions. Over and over, the footnote is nothing more than: “All this has happened.” He allowed, however, “there were several ways things happened differently.”59 For example, his interests were plainly more intellectual than most people’s, so while he emphasized how the public would be able to access vast digital libraries, “I didn’t mention games—it wasn’t part of my grand vision, so to speak.”60

McCarthy’s grand vision for domestic computing was notable for being anti-commercial. He predicted that greater access to information would promote intellectual competition, while “advertising, in the sense of something that can force itself on the attention of a reader, will disappear because it will be too easy to read via a program that screens out undesirable material.” With such low entry costs for publishing, “Even a high school student could compete with the New Yorker if he could write well enough and if word of mouth and mention by reviewers brought him to public attention.” The only threat McCarthy could see to the beautiful system he was conjuring came from monopolists, who would try to control access to the network, the material available, and the programs that ran there. McCarthy suspected that the ability of any individual programmer to create a new service would be a check on the concentration of digital power, but he agreed, “One can worry that the system might develop commercially in some way that would prevent that.”61 As, indeed, it has.

Just as AI research was on the wane, McCarthy’s lab became a target of radical students, who cited SAIL’s reliance on defense department funds to link the work there, however indirectly, to the war in Vietnam. In 1970, a protester threw an improvised bomb into the lab’s building—fortunately, it landed in an empty room and was quickly doused by sprinklers. Lab members briefly set up a patrol system to protect their building, while McCarthy responded by taking the fight to the enemy.62 When anti-war protestors interrupted a Stanford faculty meeting in 1972 and wouldn’t leave, the university president adjourned the meeting. McCarthy remained, however, telling the protestors, “The majority of the university takes our position, so go to hell.” When they responded by accusing him and his lab of helping carry out genocide in Vietnam, McCarthy responded: “We are not involved in genocide. It is people like you who start genocide.”63 Six years later, McCarthy debated, and had to be physically separated from, the popular Stanford biology professor Paul Ehrlich, who warned that humanity was destroying the environment through overpopulation.64 McCarthy’s disdain for Ehrlich could be summarized in his observation that “doomsaying is popular and wins prizes regardless of how wrong its prophecies turn out to be.”65

In Joseph Weizenbaum, however, McCarthy found a more persistent and formidable critic, one who spoke the same technical language. The two wrote stinging reviews of each other’s work: McCarthy would berate Weizenbaum for foggy thinking that paved the way for authoritarian control of science; Weizenbaum, the AI apostate, insisted on bringing morality into the equation. Weizenbaum also questioned the self-proclaimed brilliance of his peers. He, for one, chose to study mathematics because, “Of all the things that one could study, mathematics seemed by far the easiest. Mathematics is a game. It is entirely abstract. Hidden behind that recognition that mathematics is the easiest is the corresponding recognition that real life is the hardest.”66

Weizenbaum was born in Berlin in 1923, the second son of a furrier, Jechiel, and his wife, Henrietta. After the anti-Semitic laws of 1935, Weizenbaum’s family made their way to the United States by way of Bremen. His studies were interrupted by service in the U.S. Army Air Corps during World War II,67 but by the early 1950s, he was a graduate student in mathematics at Wayne State University, where a professor had decided to build a computer from scratch. Weizenbaum was part of the group that assembled and programmed that computer, even soldering the components. Based on that experience, Weizenbaum in 1952 was asked to join the team that produced the huge check-processing computer for Bank of America, the first for any bank.68

All that mattered to Weizenbaum at the time were the technical challenges that lay ahead—how to teach a computer to read checks that were crumpled, or of different sizes. “It was a whale of a lot of fun attacking those hard problems, and it never occurred to me at the time that I was cooperating in a technological venture which had certain social side effects which I might come to regret,” he said. “I was totally wrapped up in my identity as a professional, and besides, it was just too much fun.”69 In other words, when Weizenbaum, in Computer Power and Human Reason, described the intoxication that comes from being a compulsive programmer he was sharing firsthand experiences. “The computer programmer,” he wrote, “is a creator of universes for which he alone is the lawgiver. . . . No playwright, no stage director, no emperor, however powerful, has ever exercised such absolute authority to arrange a stage or a field of battle and to command such unswervingly dutiful actors or troops.”70

After Weizenbaum arrived at MIT in 1963 as an associate professor in electrical engineering, his familiarity with computers led him toward artificial intelligence and an intriguing project he called Eliza (named for Eliza Doolittle of Pygmalion). Eliza was programmed to ask a user probing, if vague, questions in the manner of a therapist. A sample conversation between a “patient,” played by Weizenbaum, and Eliza, circa 1966, is quite revealing, showing how adept Eliza was at pushing all the right (human) buttons.71

PATIENT. Men are all alike.

ELIZA. In what way?

PATIENT. They’re always bugging us about something or other.

ELIZA. Can you think of a specific example?

PATIENT. Well, my boyfriend made me come here.

ELIZA. Your boyfriend made you come here?

PATIENT. He says I’m depressed much of the time.

ELIZA. I am sorry to hear you are depressed.

PATIENT. It’s true. I am unhappy.

ELIZA. Do you think coming here will help you not to be unhappy?

PATIENT. I need some help, that much seems certain.

ELIZA. What would it mean to you if you got some help?

PATIENT. Perhaps I could learn to get along with my mother.

ELIZA. Tell me more about your family.

PATIENT. My mother takes care of me.

ELIZA. Who else in your family takes care of you?

PATIENT. My father.

ELIZA. Your father.

PATIENT. You are like my father in some ways.

ELIZA. What resemblance do you see?

PATIENT. You are not very aggressive but I think you don’t want me to notice that.

ELIZA. What makes you think I am not very aggressive?

PATIENT. You don’t argue with me.

ELIZA. Why do you think I don’t argue with you?

PATIENT. You are afraid of me.

ELIZA. Does it please you to believe I am afraid of you?
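
Weizenbaum’s original implementation is not reproduced here, but the character of the exchange above can be approximated with nothing more than a short table of keyword patterns and pronoun swaps—and part of what later unsettled Weizenbaum was precisely how simple such coding could be. The sketch below is a loose, modern approximation in Python; its rules and phrasings are invented for illustration and are not a reconstruction of Eliza itself.

```python
import re

# Swap first- and second-person words so a fragment can be echoed back.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# A few keyword rules: a pattern to look for and a response template.
# These rules are invented for illustration, not taken from Weizenbaum's script.
RULES = [
    (re.compile(r"\bi am (.*)", re.I), "I am sorry to hear you are {0}."),
    (re.compile(r"\b(?:mother|father|family)\b", re.I), "Tell me more about your family."),
    (re.compile(r"\byou are (.*)", re.I), "What makes you think I am {0}?"),
    (re.compile(r"\bi need (.*)", re.I), "What would it mean to you if you got {0}?"),
    (re.compile(r"\balways\b", re.I), "Can you think of a specific example?"),
]

def reflect(fragment):
    """Turn a phrase like 'afraid of me' into 'afraid of you'."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(statement):
    """Apply the first matching rule; otherwise fall back to a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            pieces = [reflect(piece.strip(" .!?")) for piece in match.groups()]
            return template.format(*pieces)
    return "Please go on."

for line in ["They're always bugging us about something or other.",
             "I am unhappy.", "You are afraid of me."]:
    print("PATIENT:", line)
    print("ELIZA:  ", respond(line))
```

A longer conversation quickly exposes the trick—the program has no memory and no understanding—which is exactly the gap between appearing to listen and actually listening that Weizenbaum came to find so troubling.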

Eliza was meant to explore an obscure concept, “knowledge representation,” but became a turning point for Weizenbaum’s career; he and his software were a media phenomenon. TV cameras arrived at the lab. In 1968, the New York Times headlined a report on Eliza, “Computer Is Being Taught to Understand English.” The Times reported that Weizenbaum’s secretary apparently felt so connected to Eliza that she was offended when he casually presumed he could eavesdrop on her conversation. After typing a few sentences, she turned to him to say, “Would you mind leaving the room, please?” Weizenbaum was taken aback.72

When confronted with his power to manipulate people with relatively simple coding, and with his access to their most personal thoughts, Weizenbaum retreated in horror. He began to ask hard questions of himself and colleagues. The artificial intelligence project, he concluded, was a fraud that played on the trusting instincts of people. Weizenbaum took language quite seriously, informed in part by how the Nazis had abused it, and in that light he concluded that Eliza was lying and misleading the very people it was supposed to be helping. A comment like “I am sorry to hear you are depressed” wasn’t true and should appall anyone who heard it: a computer can’t feel, and Eliza wasn’t “sorry” about anything a patient said.

As we’ll see in the pages that follow, the Know-It-Alls have moved boldly ahead where Weizenbaum retreated, eager to wield the almost hypnotic power computers have over their users. Facebook and Google don’t flinch when their users unthinkingly reveal so much about themselves. Instead, they embrace the new reality, applying programming power to do amazing—and amazingly profitable—things with all the information they collect. If anything, they study how to make their users comfortable with sharing even more with the computer. Weizenbaum becomes an example of the path not taken in the development of computers, similar, as we’ll see, to Tim Berners-Lee, the inventor of the World Wide Web. Weizenbaum and Berners-Lee each advocated stepping back from what was possible for moral reasons. And each was swept away by the potent combination of the hacker’s arrogance and the entrepreneur’s greed.

Weizenbaum kept up the fight, however, spending the remainder of his life trying to police the boundary between humans and machines, challenging the belief, so central to AI, that people themselves were nothing more than glorified computers and “all human knowledge is reducible to computable form.”73 This denial of what is special about being human was the great sin of the “artificial intelligentsia,” according to Weizenbaum. In later debates with AI theorists, he was accused of being prejudiced in favor of living creatures, of being “a carbon-based chauvinist.”

One bizarre manifestation of this charge that Weizenbaum favored “life” over thinking machines (should any arrive) came during a discussion with Daniel Dennett and Marvin Minsky at Harvard University. Weizenbaum recalled that “Dennett pointed out to me in just so many words: ‘If someone said the things that you say about life about the white race in your presence, you would accuse him of being a racist. Don’t you see that you are a kind of racist?’”74 Weizenbaum couldn’t help but see this accusation as an abdication of being “part of the human family,” even if he suspected that the drive behind artificial intelligence researchers like McCarthy, almost all of whom are men, was all too human. “Women are said to have a penis envy, but we are discovering that men have a uterus envy: there is something very creative that women can do and men cannot, that is, give birth to new life,” he told an interviewer. “We now see evidence that men are striving to do that. They want to create not only new life but better life, with fewer imperfections. Furthermore, given that it is digital and so on, it will be immortal so they are even better than women.”75

Weizenbaum never could have imagined how the Know-It-Alls, beginning in the 1990s, managed to amass real-world wealth and power from the imaginary worlds on their screens and the submerged urge to be the bestower of life. His later years were spent in Germany, far removed from the events in Silicon Valley.76 However, in 2008, a few months before he died in his Berlin apartment at age eighty-five, Weizenbaum did share the stage at the Davos World Economic Forum with Reid Hoffman, the billionaire founder of LinkedIn, and Philip Rosedale, the chief executive of the pathbreaking virtual reality site Second Life. Hoffman explained why LinkedIn was such an important example of “social software”: “What happens is this is my expression of myself, and part of what makes it social is it is me establishing my identity. It’s me interacting with other people.” Weizenbaum shook his head in disbelief. People were misleading each other about their “identities” via computers much the way his Eliza program misled the people who communicated with it. Speaking in German, which was simultaneously translated, Weizenbaum tried and failed to rouse the crowd. “Nonsense is being spouted. Dangerous nonsenses. . . . You’ve already said twice, ‘it’s happening and it will continue’—as if technological progress has become autonomous. As if it weren’t created by human beings. Or that people are doing it for all sorts of intentions. The audience is just sitting here, and no one is afraid, or reacting. Things are just happening.”77

As the 1970s ended, so, too, did McCarthy’s independent laboratory. By 1979, DARPA’s funding was largely eliminated, and SAIL merged with the Stanford Computer Science Department and relocated to campus. McCarthy the scientist was already in recess, but McCarthy the polemicist had one last great success: he led the fight against censorship of the Internet, and arguably we are still dealing with its consequences today.

The World Wide Web wasn’t up and running in early 1989, but computers at universities like Stanford were already connected to each other via the Internet. People would publish, argue, and comment on Usenet “newsgroups”—conversations organized around particular subjects—which were accessible through computers connected to the university’s network. The newsgroups reflected the interests of computer users at the time, so if you think today’s Internet is skewed toward young men obsessed with science fiction and video games, just imagine what the Internet was like then. Some newsgroups were moderated, but frequently they were open to all; send a message and anyone could read it and write a response. At the time, Stanford’s electronic bulletin board system hosted roughly five hundred newsgroups with topics ranging from recipes to computer languages to sexual practices.

The controversy began with a dumb joke about a cheap Jew and a cheap Scotsman on the newsgroup rec.humor.funny.78 The joke was older than dirt and not very funny: “A Jew and a Scotsman have dinner. At the end of the dinner the Scotsman is heard to say, ‘I’ll pay.’ The newspaper headline next morning says, ‘Jewish ventriloquist found dead in alley.’” But it landed at an awkward time. Stanford in the late 1980s was consumed by the ferment of identity politics. The university was slowly becoming more diverse, which led the administration to replace required courses on Western civilization with a more inclusive curriculum featuring texts by women and people of color. Similarly, there were demonstrations demanding that Stanford offer greater protection to minorities and women on campus who didn’t feel fully welcome.79 These changes brought a backlash as well, leading Peter Thiel and another undergraduate at the time to start the conservative magazine Stanford Review to fight the move toward “multiculturalism,” which, Thiel later said, “caused Stanford to resemble less a great university than a Third World country.”80

With these charged events in the background, Stanford administrators decided to block the rec.humor.funny newsgroup, which included a range of offensive humor, from appearing on the university’s computers. As McCarthy never tired of pointing out, no one at Stanford had ever complained about the joke. An MIT graduate student had been the first to object, contacting the Canadian university that hosted the newsgroup. The university cut its ties, but the man who ran the newsgroup found a new host and wouldn’t take down the joke. Stanford administrators learned of the controversy in the winter of 1989 and directed a computer science graduate student to block the entire newsgroup. The administrators, for once, were trying to get ahead of events, but they were also presuming to get between the university’s hackers and their computers. The graduate student whose expertise was needed to carry out the online censorship immediately informed McCarthy.81

McCarthy, the computer visionary, saw this seemingly trivial act as a threat to our networked future. “Newsgroups are a new communication medium just as printed books were in the 15th century,” he wrote. “I believe they are one step towards universal access through everyone’s computer terminal to the whole of world literature.” The psychological ease of deleting or blocking controversial material risked making censorship routine. No book need be taken off the shelves and thrown out, or, god forbid, burned. Would the public even recognize that “setting up an index of prohibited newsgroups is in the same tradition as the Pope’s 15th century Index Liber Prohibitorum”?82 He rallied his peers in the computer science department to fight for a censorship-free Internet.

Throughout this campaign, McCarthy barely acknowledged the racial tensions that had so clearly influenced the university’s actions. He once discussed what he saw as the hypersensitivity of minority groups with a professor who approved of the censorship, and he was amazed to learn that the professor believed a minority student might not object to a racist joke because of “internalized oppression.” McCarthy was suspicious of this appeal to a force that was seemingly beyond an individual’s control.83 The question of racism finally managed to intrude on the internal discussions of the computer science department through William Augustus Brown Jr., an African American medical student at Stanford who was also studying the use of artificial intelligence in treating patients.

Brown was the lone voice among his fellow computer scientists to say he was glad that “For once the university acted with some modicum of maturity.”84 Drawing from his personal experience, Brown described the issue quite differently than McCarthy and the overwhelmingly white male members of the computer science department had. “Whether disguised as free speech or simply stated as racism or sexism, such humor IS hurtful,” he wrote to the bulletin board. “It is a University’s right and RESPONSIBILITY to minimize such inflammatory correspondence in PUBLIC telecommunications.” He saw what was at stake very differently, too. “This is not an issue of free speech; this is an issue of the social responsibility of a University. The University has never proposed that rec.humor’s production be halted—it has simply cancelled its subscription to this sometimes offensive service. It obviously does NOT have to cater to every news service, as anyone who tries to find a Black magazine on campus will readily discover.”85

McCarthy never responded directly, or even indirectly, to Brown, but others in his lab did, offering an early glimpse of how alternative opinions would be shouted down or patronized online. Today, these responses from the Stanford computer science department might collectively be called “whitesplaining.” One graduate student responded to Brown, “I am a white male, and I have never been offended by white male jokes. Either they are so off-base that they are meaningless, or, by having some basis in fact (but being highly exaggerated) they are quite funny. I feel that the ability to laugh at oneself is part of being a mature, comfortable human being.”86 Others suggested that Brown didn’t understand his own best interests. “The problem is that censorship costs more than the disease you’re trying to cure. If you really believe in the conspiracy, I’m surprised that you want to give ‘them’ tools to implement their goals,” a graduate student wrote.87

The reactions against him were so critical that Brown chose a different tack in reply. He opened up to his fellow students about his struggles at Stanford as a black man. “Having received most of my pre-professional training in the Black American educational system, I have a different outlook than most students,” Brown wrote. “I certainly didn’t expect the kind of close, warm relationships I developed at Hampton University, but I was not prepared for the antagonism. I don’t quite know if it’s California, or just Stanford, but . . . I don’t know how many times I have had the most pompous questions asked of me; how many times a secretary has gone out of her way to make my day miserable. I sincerely doubt any of my instructors even know my name, although I am in the most difficult program in the medical center. Even my colleagues in my lab waited until last month to get the courage to include me in a casual conversation for the first time, although I have been working there five months.” He continued: “I don’t really mind the isolation—I can still deal, and it gives me PLENTY of time to study. But I really don’t like the cruel humor. Once you come down from the high-flying ideals, it boils down to someone insisting on his right to be cruel to someone. That is a right he/she has, but NOT in ALL media.”88

Needless to say, such displays of raw emotion were not typical of the communication on the computer science department’s bulletin board. No one responded directly to what Brown had shared about his struggles as an African American medical student and computer scientist at Stanford; his peers simply continued to mock his ideas as poorly thought out and self-defeating. The closest thing to a defense of Brown came from one graduate student who said he didn’t agree with the censorship but worried that many of his peers believed that “minority groups complain so much really because they like the attention they get in the media. People rarely consider the complaints and try to understand the complaints from the minority point of view.” He ended his email by asking, “Do people feel that the environment at Stanford has improved for minority students? Worsened? Who cares?” Based on the lack of response, “Who cares?” carried the day.

The twenty-five years since haven’t eased the pain for Brown, who left Stanford for Howard University Medical School and today is head of vascular surgery at the Naval Medical Center in Portsmouth, Virginia. The attitude at Stanford, he recalled, was elitist and entitled: “If you came from a refined enough background you could say whatever you wanted. Somehow the First Amendment was unlimited and there was no accountability. Any time you wanted to hold anyone accountable it was un-American. But those comments are neither American nor respectful.” The lack of engagement from his peers was “very typical,” he said. “It was isolationist there, almost hostile. Hostile in a refined way toward anyone who was different. Dismissive. That’s my experience. Unfortunately, I see that attitude today, it doesn’t matter whether it’s Stanford or the alt-right.”

What was particularly painful for Brown was that, but for his skin color, this was his tribe—he was a hacker, too, who had taught himself how to manipulate phone tones, who could discuss the elements in the blood of the Vulcans on Star Trek. Yet, he recalled, “As a minority, you are in the circle and not in the circle.” Still, he never felt comfortable retreating from society and going so deep into computers. The AI movement, he said, was based on the idea that “I’ll be a great person because I will have created something better than me. But that misses the whole point of life—the compassion for humanity. That has nothing to do with how you do in school, whether you can program in seven different languages. Compassion for others, that is the most complex problem humanity has ever faced and it is not addressed with technology or science.”89

No personal testimony posted to the Stanford bulletin board, no matter how gripping, would ever persuade McCarthy to see the issue of censorship as a matter of empathy for the targets of hate speech. To his mind, there was no such thing as inappropriate science or inappropriate speech. Others might disagree, he allowed. “Stanford has a legal right to do what its administration pleases, just as it has a legal right to purge the library or fire tenured faculty for their opinions,” he wrote in an email to the computer science department. But he predicted the university would pay a price in lost respect within the academic world if the authorities were given control over the Internet. McCarthy’s hackers didn’t respect authority for its own sake, and he was no different—letting the administrators responsible for information technology at the university decide what could be read on the computers there, he contended, was like giving janitors at the library the right to pick the books.90

McCarthy’s colleagues in computer science instinctively shared his perspective; the department unanimously opposed removing the rec.humor.funny newsgroup from university computers. The computer science students overwhelmingly backed McCarthy as well, voting in a confidential email poll, 128 to 4.91 McCarthy was able to win over the entire university by enlisting a powerful metaphor for the digital age. Removing a newsgroup, he explained to those who might be unfamiliar with newsgroups, was like removing a book from the library system because it was offensive. Since Mein Kampf was still on the shelves, it was hard to imagine how an anti-Semitic, anti-Scottish joke could justify removal. Either you accepted offensive speech online or you were in favor of burning books. There would be no middle ground permitted, and thus no opportunity to introduce reasonable regulations to ensure civility online, which is our predicament today.

The newsgroup and the dumb joke were restored in a great victory for McCarthy, which took on greater meaning in the years that followed, when the Web brought the Internet to even more parts of the university. Stanford had agreed that “a faculty member or student Web page was his own property, as it were, and not the property of the university,” McCarthy told a crowd gathered for the fortieth anniversary of the Stanford Computer Science Department in 2006.92

This achievement represented McCarthy’s final act in support of the hackers he had helped introduce to the world. He had ensured that their individualistic, anti-authoritarian ideas would be enshrined at Stanford and later spread across the globe, becoming half of what we know as Silicon Valley values. Only half, because McCarthy had no role in the other half of Silicon Valley values—the belief that hackers’ ideas are best spread through the marketplace. McCarthy was no entrepreneur, and periodically he felt compelled to explain himself. That same fortieth-anniversary celebration featured a panel of fabulously wealthy entrepreneurs who had studied computer science at Stanford (and often hadn’t graduated). McCarthy took exception to the idea that “somehow, the essence of a computer science faculty was starting companies, or at least that that was very important.” How could this be true, he asked the audience, since he himself hadn’t started any companies? “It’s my opinion that there’s considerable competition between doing research, doing basic research and running a company,” he said, adding grumpily, “I don’t expect to convince anybody because things have gone differently from that.”93 To understand why things have gone so differently, we must look elsewhere on the Stanford campus.