Two

THE GOOGLE THEORY OF HISTORY

THE NETWORK WANTS ITS TENDRILS to cradle the world, wrapping itself around everyone and everything. In the summer of 2015, Google renamed itself Alphabet, which was a statement about the company’s place in history. A search engine called Google remained, but the company had become so much more than that. It is a commercial bazaar, a backbone of Internet infrastructure, a software company, a hardware company, a phone company, an advertising agency, a home appliance company, a life sciences company, a machine learning company, an automobile company, a social media company, and a TV network. One of its subsidiaries claims to counter political extremism; another launches balloons to transmit the Internet to far corners of the globe. The alphabet was one of humanity’s greatest innovations, the sort of everlasting achievement that the company intends to replicate again and again.

Bluster pours forth from the tech elite, and much of the world tends to look at their lengthy inventory of grandiose projects as vanity. If Jeff Bezos wants to launch rockets into space, then Elon Musk will do him one better and colonize Mars. But Silicon Valley is hardly distinguished by the outsized egos of its leaders; finance and media have plenty of those. What makes Big Tech different is that it pursues these projects with a theological sense of conviction—which makes its efforts both wondrous and dangerous.

At the center of Google’s bulging portfolio is one master project: The company wants to create machines that replicate the human brain, and then advance beyond it. This is the essence of its attempts to build an unabridged database of global knowledge and its efforts to train algorithms to become adept at finding patterns, teaching them to discern images and understand language. Taking on this grandiose assignment, Google stands to transform life on the planet, precisely as it boasted it would. The laws of man are a mere nuisance that can only slow down such work. Institutions and traditions are rusty scrap for the heap. The company rushes forward, with little regard for what it tramples, on its way toward the New Jerusalem.

•   •   •

LARRY PAGE’S FAITH in this mission was his patrimony. His dad wasn’t like the others. His appearance was different; that was for sure. Polio, contracted on a childhood vacation to Tennessee, had stunted the growth of one leg. His gait was uneven; at times, he struggled to breathe. When he felt good, Carl Page, Sr., was a bundle of magical enthusiasms. He would scurry down the corridors of the computer science department, summoning colleagues to his office to announce one of his many big ideas. He could be an enchanted seer. In the eighties, years before Tim Berners-Lee’s invention of the Web, he would riff about the potential of hyperlinks. The students at Michigan State found Carl’s passions to be both inspiring and a bit overwhelming. His faith in their skills occasionally stretched beyond the reality of their expertise. There was the time, for instance, when he assigned students to write code that would enable a robot to plug itself into electrical outlets.

Carl Page focused pedagogic attention on Larry and his older brother, Carl Jr. He wanted them to grow up in the future, a place where his own mind tended to reside. Under his supervision, the family’s ranch house in the Pine Crest section of East Lansing was, by the eight-track standards of the era, transformed into an electronic wonderland.

When Larry was six, his dad brought home an Exidy Sorcerer computer—a cult favorite of European programmers—a machine so exotic that Carl Jr. had to compose its operating system from scratch. “I think I was the first kid in my elementary school to turn in a word-processed document,” Larry would later recount. The house was strewn with copies of Popular Science, their Technicolor covers like movie posters, with images of robot-armed submarines and stealth jets. The magazine’s celebration of tinkering perfectly expressed the spirit of the household, and all that inventiveness filtered down to the youngest son. Larry once gathered power tools from all corners of the house so that he could disassemble them and examine their innards. Even if this activity didn’t have official parental sanction—and even if he didn’t quite put things back together—Larry escaped reprimand. Mischief in the quest for technological knowledge was no vice. By the time he left for college in 1991, he had amassed sufficient prowess to convert Legos into an ink-jet printer.

If computers were rare in the Midwest of the late seventies, computer scientists were downright alien. Page’s parents had migrated westward from their spiritual home in Ann Arbor, where they had earned their degrees, but not far enough. Carl took a job at Michigan State, which was hardly Stanford. He would help build a computing outpost on the periphery of the digital world. East Lansing didn’t quite swing like the San Francisco Mid-peninsula, either. Carl stood somewhat apart from his Ward-and-June neighbors. His politics tilted a bit further left. He inherited those leanings from his father, a line worker at the Chevrolet plant in Flint who carried a homemade iron bludgeon to stave off goons during the long strike of 1936–37. Carl even managed a hint of California groovy in his new environs. He would take Larry to Grateful Dead concerts.

Unconventionality wasn’t just a personal style; it was a career necessity. Carl had chosen to pursue an audacious new specialty, a branch of computer science devoted to building machines that could simulate human thought. This subgenre of science fiction turned academic discipline goes by the name artificial intelligence (AI).

It was easy to see why this field would appeal to someone with Carl’s streak of intellectual adventurousness. Yes, the pursuit of artificial intelligence required computational acumen and a knack for algorithmic thinking. But if you wanted to replicate the working of the human brain, you had to intimately understand your model. AI, in other words, required psychology. The engineers read Freud, just like the literary critics—and reinterpreted him for their own purposes. They debated Chomsky about the nature of the mind.

The AI pioneers formulated their own intoxicating theory of the human mind. They believed that the brain is itself a computer—a device controlled by programs. This metaphor provided a fairly neat description of their own task: They were building a mechanical machine to imitate an organic one. But the human mind is a mysterious thing. So creating algorithms that replicate the inner workings of such an inscrutable mass of tissue was a complicated and controversial task. Carl Page had his own idea about how to go about it. He posited that procedures contained in Robert’s Rules of Order, a late-nineteenth-century manual for running effective meetings, could provide the basis for building AI.

There weren’t very many scientists working on artificial intelligence in those years. They made for a fascinating little subculture. That’s how the sociologist Sherry Turkle studied them in her classic tome, The Second Self. Because she was perched at MIT herself, she had a fairly unimpeded view of her subjects. The portrait she constructed was so piercingly apt that they may not have been able to recognize themselves in it. Artificial intelligence, she concluded, wasn’t just a lofty engineering goal; it was an ideology. She compared AI, with its theory about the programmable mind, to psychoanalysis and Marxism—as “a new way of understanding almost everything.”

In each case a central concept restructures understanding on a large scale: for the Freudian, the unconscious; for the Marxist, the relationship to the means of production. . . . [F]or the AI researcher, the idea of program has a transcendent value: it is taken as the key, the until now missing term for unlocking intellectual mysteries.

Carl Page was a rationalist. Yet some biographical accounts of Larry’s childhood note that his father had instructed him with religious intensity. Over the dinner table, Carl would share the good news about AI that was arriving from the booming laboratories on the coasts. This wasn’t simply a matter of filling conversation. It was instruction. His curriculum included field trips to various AI confabs. When the organizers of the International Joint Conference on Artificial Intelligence wouldn’t allow Larry, a sixteen-year-old, into their convention hall, Carl broke from his jovial form and reamed the obstructionists.

It is a testament to Carl Page’s teaching that his son went on to found the most successful, most ambitious AI company in history. Although we don’t think of Google that way, AI is precisely the source of the company’s greatness. Google uses algorithms trained to think just like you. To accomplish this daunting task, Google must understand the intentions behind your query: When you typed “rock,” did you mean the geological object, the musical genre, or the wrestler-turned-actor? Google’s AI is so proficient that it can even supply the results for your query before you’ve finished typing it.
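
To make that last trick concrete, here is a minimal sketch in Python of query completion ranked by raw popularity. The query log is invented for illustration, and Google’s production system infers intent with machine learning rather than simple prefix matching.

```python
# A toy autocomplete: given the prefix a user has typed, return the
# most popular logged queries that begin with it. The log is invented;
# a real system weighs far richer signals than raw counts.
QUERY_LOG = {
    "rock music": 950,          # the musical genre
    "rock formation": 400,      # the geological object
    "rock the wrestler": 700,   # the wrestler-turned-actor
    "rocket launch": 300,
}

def complete(prefix: str, k: int = 3) -> list[str]:
    """Return up to k logged queries starting with prefix, most popular first."""
    matches = [(count, query) for query, count in QUERY_LOG.items()
               if query.startswith(prefix)]
    return [query for _, query in sorted(matches, reverse=True)[:k]]

print(complete("rock"))  # results arrive before the query is finished
```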

But, an heir to the great tradition of AI, Larry Page considers this accomplishment an insignificant step on the route to a much more profound mission—a mission in both the scientific and religious senses of the word. He has built his company so that it can achieve what is called “AI complete,” the creation of machines with the ability to equal and eventually exceed human intelligence. A few years after he launched Google, he returned to give a talk at Stanford, where he and Sergey Brin had birthed their search engine. He told a group of students, “Well, I would say the mission I laid out for you will take us a little while since it’s AI complete. It means it’s artificial intelligence. . . . If you solve search that means you can answer any question, which means you can do basically anything.” The audacity of this claim made the audience laugh, a bit uncomfortably. But their discomfort only stirred Page to push forward with his point. “If we solve the problem I outlined, then we’re doing everything.”

In moments of candor, Page and Brin admit that they imagine going even further than that—it’s not just about creating an artificial brain but welding it to the human. As Brin once told the journalist Steven Levy, “Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.” Or as he added on a separate occasion, “Perhaps in the future, we can attach a little version of Google that you just plug into your brain.”

Google may or may not ever achieve these grandiose goals, but that’s how the company views its role. When Page describes Google reshaping the future of humanity, this isn’t simply a description of the convenience it provides; what it aims to redirect is the course of evolution, in the Darwinian sense of the word. It’s no exaggeration to claim that they are attempting to create a superior species, a species that transcends our natural form.

•   •   •

PAGE AND BRIN ARE CREATING a brain unhindered by human bias, uninfluenced by irrational desires and dubious sensory instructions that emanate from the body. In pursuing this goal, they are attempting to complete a mission that began long before the invention of the computer. Google is trying to solve a problem that first emerged several centuries ago, amid the blazing battle between the entrenched church and the emerging science. It’s a project that originated with modern philosophy itself and the figure of René Descartes.

A trace of Larry Page’s vision could be spotted on a small ship on the North Sea in the early years of the seventeenth century. Belowdecks, Descartes slept. He often made such journeys. Over his life, he never quite settled. He could be proud and quarrelsome, doggedly private and deliberately enigmatic. Even with centuries of hindsight, we can’t say what fueled his many unsettled years of travel, all the time he spent shuttling from one abode to the next like a fugitive.

Of his many destinations, Protestant Holland felt most like home, an ease that was perhaps unexpected given his thoroughly Jesuit training. That’s where he stayed longest and where he laid the tracks for his philosophy. Historians note that it is, by the available evidence, also where he lost his virginity, to an Amsterdam housemaid. He recorded the fact of that event with scientific detachment on the flyleaf of a book, as if collecting results for an experiment. The daughter from that encounter was called Francine, and he made plans for her to study in France. But her life was crushingly short. She died of scarlet fever, not yet six.

Descartes enjoyed his sleep; profound revelations came to him in dreams. Entire mornings were often spent in bed. But that wasn’t possible on this voyage. The ship’s captain had been eyeing the philosopher with suspicion. He was especially keen to determine the contents of the trunk that sat beside Descartes’s bed. In the middle of the night, he stormed the cabin and pried open the container. What he found inside was a startlingly lifelike machine—a robot made of springs, an automaton. According to some reports, the machine closely resembled Francine, which was in fact what Descartes called it. Horrified by his discovery, the captain dragged Descartes’s creation above deck and hurled it into the sea.

This story has been told and retold, especially by Descartes’s detractors. It is certainly false, a manufactured smear. As one of his biographers points out, the tale carries a whiff of disturbing sexual innuendo. But his enemies contrived this fiction for a compelling reason: Descartes was indeed obsessed with automata, even if he didn’t always keep one by his bed. During his life, the machine age was arriving in Europe, part of the great scientific revolution. In the gardens of the royal palaces, inventors unveiled incredible, intricately engineered creations—hydraulically powered statues, figurines that played music, clockwork characters that whirled and gestured. Descartes daydreamed about building his own contraptions from springs and magnets. More important, automata were to play a central role in his effort to resolve the wars—between religions, between science and religion—tearing apart Europe.

The messy, war-torn seventeenth century touched Descartes directly. He served in both Catholic and Protestant armies in the Thirty Years’ War—an intramural fight over the religious future of Germany that ensnared the major European powers. Everything in Europe, during those years, seemed raw and unsettled. Despite Holland’s relative tolerance, Descartes lived in mortal fear that the Inquisition might target him. To avoid Galileo’s fate, he left manuscripts unpublished for years.

There’s disagreement about the extent to which Descartes remained a devout Catholic, or a believer at all. (One could argue that his proofs of God’s existence are so contorted that they must have been deliberately conceived to highlight the absurdity of his project.) However fervently he clung to his faith, his training and his travel had perfectly prepared him to broker a cease-fire in the conflict that pitted religion against science.

At the center of his theory were automata. The bodies of living creatures, even humans, were nothing more than machines. The human form—“an extended, non-thinking thing”—moved mindlessly in response to stimuli, as if it were composed of springs and levers. Our bodies could be described by scientific laws, just like the movement of the planets. If Descartes had stopped there, his theory would have infuriated the church. Catholic doctrine insisted that humans are the highest form of life, above all other beasts. But Descartes didn’t stop there. He asserted that the human casing contains a divine instrument that elevates humankind above the animal kingdom. Inside our mortal hardware, the “prison of the body,” as Descartes called it, resides the software of mind. In his theory, the mind was the place to find both the intellect and the immortal soul, the capacity for reason and man’s most godlike qualities.

This was a gorgeous squaring of the circle. Descartes had somehow managed to use skepticism in service of orthodoxy; he preserved crucial shards of church doctrine—the immortal soul, for starters—while buying intellectual space for the physical sciences to continue the march toward knowledge.

In solving one problem, however, Descartes created many others, questions that have bedeviled philosophers and theologians ever since. “I am a thinking thing that can exist without a body,” Descartes wrote. If that was true, then why not liberate the mind from the body’s prison? Descartes tried his darndest. He conceived a philosophical method that sounded a bit like a self-help regimen. He set about writing rules to achieve a state of what he called “pure understanding” or “pure intellection.” He would purge his mind of bodily urges to make way for the ideas that God had intended to occupy the mind. As Descartes instructed himself, “I shall now close my eyes, I shall stop my ears, I shall call away my senses, I shall efface even from my thoughts all images of corporeal things.” This wasn’t just a gambit to unleash his own mind, but a method intended to elevate humanity. The intellectual historian David Noble describes Descartes’s project: “He believed that his philosophical method might help mankind overcome the epistemological handicaps of its fallen state and regain control of some of its innate godly powers.”

Descartes’s obsession became philosophy’s obsession. Over the centuries, mathematicians and logicians—Gottfried Leibniz, George Boole, Alfred North Whitehead—aspired to create a new system that would express thought in its purest (and therefore most divine) form. But for all the genius of these new systems, the prison of the body remained. Philosophy couldn’t emancipate the mind, but technology just might. Google has set out to succeed where Descartes failed, except that it has jettisoned all the philosophical questions that rattled around in his head. Where Descartes emphasized skepticism and doubt, Google is never plagued by second-guessing. It has turned the liberation of the brain into an engineering challenge—an exercise that often fails to ask basic questions about the human implications of the project. This is a moral failing that afflicts Google and has haunted computer science from the start.

•   •   •

ALAN TURING WAS AN ATHEIST and a loner. He relished being an outsider. When his mother dispatched him at age thirteen to suffer the cold-shower, hard-bed plight of English boarding school, he bicycled alone to campus, sixty miles in two days. He could be shy and strange. To combat the hay fever that arrived every June, he would don a gas mask. His own mother wrote, “The seclusion of a medieval monastery would have suited him very well.” Exacerbating his innate sense of alienation, he was gay in a society that criminalized and hounded homosexuals.

Descartes had celebrated the sort of isolation that was often Turing’s fate. And indeed, his quiet moments yielded epiphany. In the words of the British philosopher Stuart Hampshire, Turing had “the gift for solitary thinking.” He was capable of intense concentration that blocked received wisdom and the orthodoxies of his colleagues from infiltrating his thoughts. On a summer run in 1935, Turing lay down amid apple trees and conceived of something he called the Logical Computing Machine. His vision, recorded on paper, became the blueprint for the digital revolution.

Engineering is considered the paragon of rationality—a profession devoted to systems and planning, the enemy of spontaneity and instinct. Turing certainly enjoyed playing the role of scientific scold, gleefully mocking all those who nervously fretted over the implications of new inventions. “One day ladies will take their computers for walks in the park and tell each other ‘My little computer said such a funny thing this morning!’” he quipped.

This posture was a bit rich. In his most influential essays, Turing wasn’t simply reporting the evidence—or carefully deploying inductive reasoning. Once you cut through his arch wit and logical bravura, you could see he was thinking spiritually. The mathematicians and engineers may have disavowed the existence of God, but they placed themselves in a celestial role of giving life to a pile of inorganic material. And it changed them.

Turing believed that the computer wasn’t just a machine; it was also a child, a being capable of learning. At times, he described his invention as if it were an English public-school boy, making progress thanks only to a healthy dose of punishment and the occasional reward. Yet he never doubted its potential to achieve: “We may hope that machines will eventually compete with men in all purely intellectual fields.” He wrote those words in 1950, when computers were relatively impotent, very large boxes that could do a little bit of math. At that moment, there was little evidence to justify the belief that these machines would ever acquire the capabilities of the human brain. Still, Turing had faith. He imagined a test of the computer’s intelligence in which a person would send written questions to a human and a machine in another room. Receiving two sets of answers, the interrogator would have to guess which answers came from the human. Turing predicted that within fifty years the machine would routinely fool the questioner.
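
The protocol Turing imagined is simple enough to sketch in code. What follows is a minimal, hypothetical rendering in Python; the canned machine_respond function is a placeholder rather than a real AI, and the entire test turns on how convincing that one function can be made.

```python
import random

def machine_respond(question: str) -> str:
    # Hypothetical stand-in; a real contender would generate answers.
    return "That is an interesting question; let me think."

def human_respond(question: str) -> str:
    return input(f"[human] {question} > ")

def imitation_game(questions: list[str]) -> bool:
    """Run one round of Turing's test; True if the machine fooled the judge."""
    machine_label = random.choice(["A", "B"])   # hide the identities
    respond = {machine_label: machine_respond,
               "B" if machine_label == "A" else "A": human_respond}
    for q in questions:
        for label in ("A", "B"):                # the judge sees only text
            print(f"{label}: {respond[label](q)}")
    guess = input("Which respondent is the machine, A or B? ").strip().upper()
    return guess != machine_label               # did it fool the questioner?

# Example: imitation_game(["What is your favorite poem?", "What is 23 x 61?"])
```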

•   •   •

THIS PREDICTION SET THE TERMS for the computer age. Ever since, engineers have futilely attempted to build machines capable of passing Turing’s test. For many of those seeking to invent AI, their job is just a heap of mathematics, a thrilling intellectual challenge. But for a significant chunk of others, it’s a theological pursuit. They are at the center of a transformative project that will culminate in the dawning of a new age. The high priest of this religion is a rhetorically gifted, canny popularizer called Ray Kurzweil.

His ecstatic vision for the future was born in the greatest catastrophe of the past. The shadow of the Holocaust hangs over him. His parents, Viennese Jews, fled on the eve of the Anschluss. The accretion of so many difficult years took its toll on his father, a classical conductor and intellectual. He died of a heart attack at the age of fifty-eight, a loss that never seems far from Kurzweil’s mind. Like many children of parents who have seen the worst, he counteracted the grimness of history with his own willful, supercharged optimism. From the youngest age, he was seized with the spirit of invention. As a seventeen-year-old, he made an appearance on Steve Allen’s game show, I’ve Got a Secret. He played the piano with virtuosity; Allen then asked the panel to guess his concealed truth. Under questioning by the show’s panelists, Kurzweil finally revealed that the music he played had been composed by a computer. The audience was gobsmacked by that, but even more by the fact that a scrawny teen from Queens had built the machine himself. He proudly walked Allen around a noisy, hulking pile of wires, flashing lights, and relays, the work of a savant.

Kurzweil was the perfect engineer, confident that he could work out any puzzle put in front of him. As a newly minted graduate of MIT, he proclaimed to a friend that he wanted “to invent things so that the blind could see, and the deaf could hear, and the lame could walk.” At the age of twenty-seven, he created a machine that could read to the blind. To describe the invention hardly captures its audacity. The blind could place a book on a scanner, which poured the text into a computer, which then spoke the words aloud—and before Kurzweil’s machine, the flatbed scanner didn’t exist.

This machine made him something of a hero to the blind, whose lives he had transformed. Stevie Wonder, for one, genuflected in Kurzweil’s direction. They became friends. For the sake of his new pal, Kurzweil created a new electronic keyboard, which purportedly matched the quality of the grand pianos in the world’s supreme concert halls.

For all his optimism, however, Kurzweil couldn’t escape his fears—or more precisely, he couldn’t escape the biggest fear of them all. His mind frequently wandered to death, such a “profoundly sad, lonely feeling that I really can’t bear it.” But this, too, he vowed, was a problem that engineering could solve. To prolong his own life, he began manically swallowing pills—vitamins, supplements, enzymes. One hundred fifty or so of these capsules go down his gullet daily. (He also receives a regular injection that he believes will help insulate him from the inevitable.) In a hagiographic documentary about him, we watch as he glides through a cocktail party, a glass of red wine in hand. He pops pills as if they were Chex Mix, while making small talk with strangers. We later learn that his ingestion is something of a product placement—he started a company, Ray and Terry’s Longevity Products, that manufactures many of the tablets and elixirs that he consumes.

But pharmaceuticals are just a sideline for Kurzweil. His main business is prophecy. Kurzweil believes fervently in AI, which he studied at MIT with its earliest pioneers, and yearns for the heaven on earth it will create. This paradise has a name—it’s called the singularity. Kurzweil borrowed the term from the mathematician-cum-science-fiction-writer Vernor Vinge, who, in turn, filched it from astrophysics. The singularity refers to a rupture in the space-time continuum—it describes the moment when the finite becomes infinite. In Kurzweil’s telling, the singularity is when artificial intelligence becomes all-powerful, when computers are capable of designing and building other computers. This superintelligence will, of course, create a superintelligence even more powerful than itself—and so on, down the posthuman generations. At that point, all bets are off—“strong AI and nanotechnology can create any product, any situation, any environment that we can imagine at will.”

As a scientist, Kurzweil believes in precision. When he makes predictions, he doesn’t chuck darts; he extrapolates data. In fact, he’s loaded everything we know about the history of human technology onto his computer and run the numbers. Technological progress, he has concluded, isn’t a matter of linear growth; it’s a never-ending exponential explosion. “Each epoch of evolution has progressed more rapidly by building on the products of the previous stage,” he writes. Kurzweil has named this observation the Law of Accelerating Returns. And in his telling, humanity is about to place a lead foot on its technological accelerator—we’re on the threshold of massive leaps in genetics, nanotechnology, and robotics. These developments will allow us to finally shed our “frail” and “limited” human bodies and brains, what he calls our “version 1.0 biological bodies.” We will fully merge with machines; our existence will become virtual; our brains will be uploaded. Thanks to this reading of the data, he can tell you that the singularity will dawn in the year 2045.
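
The style of extrapolation is easy to reproduce. Below is a minimal sketch in Python with invented data points: fitting a straight line to the logarithm of a measure of computing power is the same as assuming exponential growth, and a forecast simply reads a value off the extended line. Only the curve-fitting habit resembles Kurzweil’s; the numbers are hypothetical.

```python
# Fit exponential growth (a straight line in log space) to invented
# observations of computations per dollar, then extrapolate.
observations = [(1970, 2.0), (1980, 4.1), (1990, 6.0), (2000, 8.2)]  # (year, log10 value)

n = len(observations)
mean_x = sum(x for x, _ in observations) / n
mean_y = sum(y for _, y in observations) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in observations)
         / sum((x - mean_x) ** 2 for x, _ in observations))

def project(year: int) -> float:
    """Least-squares projection of log10(computations per dollar)."""
    return mean_y + slope * (year - mean_x)

# Exponential growth makes distant forecasts look deceptively precise:
print(f"Projected level in 2045: 10^{project(2045):.1f}")
```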

Humanity will finally fulfill Descartes’s dreams of liberating the mind from the prison of the body. As Kurzweil puts it, “We will be software, not hardware,” and able to inhabit whatever hardware we like best. There will not be any difference between us and robots. “What, after all, is the difference between a human who has upgraded her body and brain using new nanotechnology, and computational technologies, and a robot who has gained an intelligence and sensuality surpassing her human creators?”

The world will then change quickly: Computers will complete every basic human task, which will permit lives of leisure; pain will disappear, as will death; technology will solve the basic condition of scarcity that has always haunted life on the planet. Even life under the sheets will be better: “Virtual sex will provide sensations that are more intense and pleasurable than conventional sex.” Humans can pretend that they have the power to alter this course, but they are fooling themselves. Peter Diamandis, one of Silicon Valley’s most celebrated thinkers, puts it quite starkly: “Anybody who is going to be resisting this progress forward is going to be resisting evolution. And fundamentally they will die out.”

Kurzweil is aware of the metaphysical implications of his theory. He called one of his treatises The Age of Spiritual Machines. His descriptions of life after the singularity are nothing short of rapturous. “Our civilization will then expand outward, turning all the dumb matter and energy we encounter into sublimely intelligent—transcendent—matter and energy. So in a sense, we can say that the Singularity will ultimately infuse the universe with spirit.” Kurzweil even maintains a storage unit where he has stockpiled his father’s papers, down to his financial ledgers, in anticipation of the day he can resurrect him. When the anthropologist of religion Robert Geraci studied Kurzweil and other singularitarians, he noticed how precisely their belief seemed to echo Christian apocalyptic texts. “Apocalyptic AI is the legitimate heir to these religious promises, not a bastardized version of them,” he concluded. “In Apocalyptic AI, technological research and religious categories come together in a stirringly well-integrated unit.”

The singularity is hardly the state religion of Silicon Valley. In some neighborhoods of techland, Kurzweil is subjected to haughty dismissal. John McCarthy, the godfather of AI, once said that he wanted to live to 102, so that he could laugh at Kurzweil when the singularity failed to arrive at its appointed hour. Still, Kurzweil’s devotees include members of the tech A-list. Bill Gates, for one, calls him “the best person I know at predicting the future of artificial intelligence.” The New York Times’s John Markoff, our most important chronicler of the technologists, says that Kurzweil “represents a community of many of Silicon Valley’s best and brightest,” ranks that include the finest minds at Google.

•   •   •

LARRY PAGE LIKES TO IMAGINE that he never escaped academia. Google, after all, began as a doctoral dissertation—and the inspiration for the search engine came from his connoisseurship of academic papers. As the son of a professor, he knew how researchers judge their own work. They look at the number of times it gets cited by other papers. His eureka moment arrived when he saw how the Web mimicked the professoriate. Links were just like citations—both were, in their way, a form of recommendation. The utility of a Web page could be judged by tabulating the number of links it received on other pages. When he captured this insight in an algorithm, he punningly named it for himself: PageRank.
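
The insight fits in a few lines of code. Here is a minimal sketch of the random-surfer formulation of PageRank on an invented three-page link graph; the 0.85 damping factor appears in Page and Brin’s original paper, but everything else is illustrative rather than Google’s implementation.

```python
# A minimal PageRank: a page's importance is the probability that a
# "random surfer," who follows links and occasionally jumps to a random
# page, ends up there. Links confer rank the way citations confer credit.
DAMPING = 0.85  # damping factor from the original Page/Brin paper

links = {"a": ["b", "c"],  # hypothetical link graph: page -> outbound links
         "b": ["c"],
         "c": ["a"]}

def pagerank(links: dict, iterations: int = 50) -> dict:
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - DAMPING) / len(pages) for p in pages}
        for page, outbound in links.items():
            for target in outbound:
                # Each page divides its rank among the pages it links to.
                new_rank[target] += DAMPING * rank[page] / len(outbound)
        rank = new_rank
    return rank

print(pagerank(links))  # "c", linked to twice, outranks "a" and "b"
```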

Research is a pursuit Page cherishes, and in which Google invests vast sums—last year it spent nearly $12.5 billion on R&D and on projects that it won’t foreseeably monetize. The company has built a revolving front door through which superstar professors regularly cycle, joining the company’s most audacious ventures. If there’s tension between profit and the pursuit of scientific purity, Page will make a big show of choosing the path of purity. That is, of course, a source of Google’s success over the years. Where other search engines sold higher placement in their rankings, Google never took that blatantly transactional path. It could plausibly claim that its search results were scientifically derived.

This idealism is partly for show, but mostly it originates in the company’s marrow. “Google is not a conventional company. We do not intend to become one,” Page and Brin proclaimed in a letter that they sent to the Securities and Exchange Commission, attached to the company’s initial public offering in 2004. This statement could be read as empty rhetoric, but it gave Wall Street a case of heartburn. Close observers of the company understood that Google abhorred MBA types. It stubbornly resisted the creation of a marketing department. Page prided himself on hiring engineers for business-minded jobs that would traditionally go to someone trained in, say, finance. Even as Google came to employ tens of thousands of workers, Larry Page personally reviewed a file on each potential hire to make sure that the company didn’t veer too far from its engineering roots.

The best expression of the company’s idealism was its oft-mocked motto, “Don’t be evil.” That slogan becomes easier to understand, and a more potent expression of values, when you learn that Google never intended the phrase for public consumption. The company meant to focus employees on the beneficent, ambitious mission of the company—a Post-it note to the corporate self, reminding Google not to behave as selfishly and narrow-mindedly as Microsoft, the king of tech it intended to dethrone. The aphorism became widely known only after the company’s CEO, Eric Schmidt, inadvertently mentioned it in an interview with Wired, an act of blabbing that frustrated many in the company, who understood how the motto would make Google a slow-moving target for ridicule. (Google eventually retired the motto.) When Larry Page issues his pronouncements, they are unusually earnest. And the talking points that he repeats often are a good measure of his true, supersized intentions. He has a talent for sentences that are at once self-effacing and impossibly grandiose: “We’re at maybe 1% of what is possible. Despite the faster change, we’re still moving slow relative to the opportunities we have.”

To understand Page’s intentions, it’s necessary to examine the varieties of artificial intelligence. The field can be roughly divided in two. There’s a school of incrementalists, who cherish everything that has been accomplished to date—victories like the PageRank algorithm or the software that allows ATMs to read the scrawled writing on checks. This school holds out little to no hope that computers will ever acquire anything approximating human consciousness. Then there are the revolutionaries, who gravitate toward Kurzweil and the singularitarian view. They aim to build computers with what is called “artificial general intelligence,” or “strong AI.”

For most of Google’s history, it trained its efforts on incremental improvements. During that earlier era, the company was run by Eric Schmidt—an older, experienced manager, whom Google’s investors forced Page and Brin to accept as their “adult” supervisor. That’s not to say that Schmidt was timid. Those years witnessed Google’s plot to upload every book on the planet and the creation of products that are now commonplace utilities, like Gmail, Google Docs, and Google Maps.

But those ambitions never stretched quite far enough to satisfy Larry Page. In 2011, Page shifted himself back into the corner office, the CEO job he held at Google’s birth. And he redirected the company toward singularitarian goals. Over the years, he had befriended Kurzweil and worked with him on assorted projects. After he returned to his old job, Page hired Kurzweil and anointed him Google’s director of engineering. He assigned him the task of teaching computers to read—the sort of exponential breakthrough that would hasten the arrival of the superintelligence that Kurzweil celebrates. “This is the culmination of literally 50 years of my focus on artificial intelligence,” Kurzweil said upon signing up with Google.

When you listen to Page talk to his employees, he returns time and again to the metaphor of the moonshot. The company has an Apollo-like program for reaching artificial general intelligence: a project called Google Brain, a moniker with creepy implications. (“The Google policy on a lot of things is to get right up to the creepy line and not cross it,” Eric Schmidt has quipped.) Google has spearheaded the revival of a concept first explored in the sixties, one that had failed until recently: neural networks, which involve computing modeled on the workings of the human brain. Algorithms replicate the brain’s information processing and its methods for learning. Google has hired the British-born professor Geoff Hinton, who has made the greatest progress in this direction. It also acquired a London-based company called DeepMind, which created neural networks that taught themselves, without human instruction, to play video games. Because DeepMind feared the dangers of a single company possessing such powerful algorithms, it insisted that Google never permit its work to be militarized or sold to intelligence services.
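
To give a sense of what “modeled on the workings of the human brain” means in code, here is a minimal neural network sketch in Python with NumPy. It learns XOR, a toy problem that the single-layer networks of the sixties famously could not solve; the architecture and numbers are illustrative, not anything Google Brain actually runs.

```python
import numpy as np

# A tiny two-layer neural network that learns XOR. "Learning" here is
# backpropagation: nudge connection weights to shrink the output error.
rng = np.random.default_rng(1)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    hidden = sigmoid(X @ W1 + b1)        # forward pass
    out = sigmoid(hidden @ W2 + b2)
    # Backward pass: propagate the error through each layer.
    d_out = (out - y) * out * (1 - out)
    d_hidden = d_out @ W2.T * hidden * (1 - hidden)
    W2 -= hidden.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_hidden;   b1 -= d_hidden.sum(axis=0)

print(out.round(2).ravel())  # typically converges to ~[0, 1, 1, 0]
```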

How deeply does Google believe in the singularity? Not everyone in the company shares Kurzweil’s vision. One of the company’s most accomplished engineers, Peter Norvig, has argued against the Law of Accelerating Returns. And Larry Page has never publicly commented on Kurzweil. Yet there’s an undeniable pattern. In 2008 Google helped bankroll the creation of Singularity University, housed on a NASA campus in Silicon Valley—a ten-week “graduate” program cofounded by Kurzweil to promote his ideas. Google has donated millions so that students can attend SU on a free ride. “If I were a student, this is where I would like to be,” Page has said. The company has indulged a slew of singularitarian obsessions. It has, for instance, invested heavily in Calico, a start-up that wants to solve the problem of death, as opposed to tackling comparatively trivial issues like cancer. “One of the things I thought was amazing is that if you solve cancer, you’d add about three years to people’s average life expectancy,” Page said in an interview with Time. “We think of solving cancer as this huge thing that’ll totally change the world. But when you really take a step back and look at it, yeah, there are many, many tragic cases of cancer, and it’s very, very sad, but in the aggregate, it’s not as big an advance as you might think.” Google will likely achieve very few of its goals—moonshot will prove scattershot. Still, these projects reveal a worldview, a stunningly coherent set of values and beliefs.

The singularity isn’t just a vision of the future. It implies a view of the present. According to Larry Page’s Panglossian theory of life on planet Earth, we’re getting achingly close to a world devoid of scarcity and brimming with wonders—the stakes are such that we would be foolish, unfeeling even, not to hasten the arrival of this new day. Some are blind to the possibilities, out of Luddism or narrowness of imagination. But that’s the nature of scientific revolutions; they are propelled by heretics and rule-breakers. This intense mission is driven by arrogance and a rather shocking carelessness. In its pursuit of the future, Google often finds itself pondering and developing technologies that will significantly alter long-standing human practices. Its approach is to barrel forward with alacrity, confident in its own goodness.

When the company decided to digitize every book in existence, it considered copyright law a trivial annoyance, hardly worth a moment’s hesitation. Of course, Google must have had an inkling of how its project would be perceived. That’s why it went about its mission quietly, to avoid scrutiny. “There was a cloak-and-dagger element to the procedure, soured by a clandestine taint,” Steven Levy recounts of the effort, “like ducking out of a 1950s nightclub to smoke weed.” Google’s trucks would pull up to libraries and quietly cart away boxes of books to be quickly scanned and returned. “If you don’t have a reason to talk about it, why talk about it?” Larry Page would argue, when confronted with pleas to publicly announce the existence of the program. The company’s lead lawyer on the project bluntly described his colleagues’ roughshod attitude: “Google’s leadership doesn’t care terribly much about precedent or law.” In this case the precedent was the centuries-old protection of intellectual property, and the consequences were the potential devastation of the publishing industry and all the writers who depend on it. In other words, Google had plotted an intellectual heist of historic proportions.

What motivated Google in its pursuit? On one level, the answer is clear: To maintain dominance, Google’s search engine must be definitive. Here was a massive store of human knowledge waiting to be stockpiled and searched. But there were less obvious motives, too: When the historian of technology George Dyson visited the Googleplex to give a talk, an engineer casually admitted, “We are not scanning all those books to be read by people. We are scanning them to be read by an AI.” If that’s true, then it’s easier to understand Google’s secrecy. The world’s greatest collection of knowledge was mere grist to train machines, a sacrifice for the singularity.

Google is a company without clear boundaries, or rather, a company with ever-expanding boundaries. That’s why it’s chilling to hear Larry Page denounce competition as a wasteful concept and to hear him celebrate cooperation as the way forward. “Being negative is not how we make progress and most important things are not zero sum,” he says. “How exciting is it to come to work if the best you can do is trounce some other company that does roughly the same thing?” And it’s even more chilling to hear him contemplate how Google will someday employ more than one million people, a company twenty times larger than it is now. That’s not just a boast about dominating an industry where he faces no true rivals; it’s a boast about dominating something far vaster, a statement of Google’s intent to impose its values and theological convictions on the world.