WHEN ROBOTS WEEP,
WHO WILL COMFORT THEM?

It’s an Anthropocene magic trick, this extension of our digital selves over the Internet, far enough to reach other people, animals, plants, interplanetary crews, extraterrestrial visitors, the planet’s Google-mapped landscapes, and our habitats and possessions. If we can revive extinct life forms, create analog worlds, and weave new webs of communication—what about new webs of life? Why not synthetic life forms that can sense, feel, remember, and go through Darwinian evolution?

HOD LIPSON IS the only man I know whose first name means “splendor” in Hebrew and a V-shaped wooden trough for carrying bricks over one shoulder in English. The paradox suits him physically and mentally. He looks strong and solid enough to carry a hod full of bricks, but he would be the first to suggest that the bricks might not resemble any you’ve ever known. They might even saunter, reinvent themselves, refuse to be stacked, devise their own mortar, fight back, explore, breed more of their kind, and boast a nimble curiosity about the world. Splendor can be bricklike, if graced by complexity.

His lab building at Cornell University is home to many a skunkworks project in computer science or engineering, including some of DARPA’s famous design competitions (agile robots to clean up toxic disasters, superhero exoskeletons for soldiers, etc.). Nearby, two futuristic DARPA Challenge cars have been left like play-worn toys a few steps from a display case of antique engineering marvels and an elevator as old and slow as a butter churn.

On the second floor, a black spider-monkey-like robot clings to the top left corner of Lipson’s office door, intriguing but inscrutable, except to the inner circle for whom it’s a wry symbol and tradesman’s sign of the sort colonial shopkeepers used to hang out to identify their business: the apothecary’s mortar and pestle, the chandler’s candles, the cabinetmaker’s hickory-spindled armchair, the roboticist’s apprentice. Though in its prime the leggy bot drew the keen gaze of students, students come and go, as do the smart-bots they work on, which, coincidentally, seem to have a life span of about 3.5 years—how long it takes a student to finish a dissertation and graduate.

A man with curly hair, chestnut-brown eyes, and a dimpled chin, Hod welcomes me into his cheerful office: tall windows, a work desk, a Dell computer with a triptych of screens, window boxes for homegrown tomatoes in summer, and a wall of bookshelves, atop which sits an array of student design projects. To me they look unfamiliar but strangely beautiful and compelling, like the merchandise in an extraterrestrial bazaar. A surprisingly tall white table and its chairs invite one to climb aboard and romp with ideas. At Lipson’s round table, if you’re under six feet tall, your feet will automatically leave the planet, which is good, I think, because even this limited levitation aids the imagination, untying gravity just enough to make magic carpet rides, wing-walkers, and spaceships humble as old rope. There’s a reason we cling to such elevating turns of phrase as “I was walking on air,” “That was uplifting,” “heightened awareness,” “surmounting obstacles,” or “My feet never touched the ground.” The mental mischief of creativity—which thrives on such fare as deep play, risk, a superfluity of ideas, the useful application of obsession, and willingly backtracking or hitting dead ends without losing heart—is also fueled by subtle changes in perception. So why not cast off mental moorings and hover a while each day?

What’s the next hack for a rambunctious species full of whiz kids with digital dreams? Lipson is fascinated by a different branch of the robotic evolutionary tree than the tireless servant, army of skilled hands, or savant of finicky incisions with which we have become familiar. Over ten million Roomba vacuum cleaners have already been sold to homeowners (who sometimes find them being ridden as child or cat chariots). We watch with fascination as robotic sea scouts explore the deep abysses (or sunken ships), and NOAA’s robots glide underwater to monitor the strength of hurricanes. Google’s robotics division owns a medley of firms, including some minting life-size humanoids—because, in public spaces, we’re more likely to ask a cherub-faced robot for info than a touchscreen. Both Apple and Amazon are diving into advanced robotics as well. The military has invested heavily in robots as spies, bionic gear, drones, pack animals, and bomb disposers. Robots already work for us with dedicated precision in factory assembly lines and operating rooms. In cross-cultural studies, the elderly will happily adopt robotic pets and even robot babies, though they aren’t keen on robot caregivers at the moment.

All of that, to Lipson, is child’s play. His focus is on a self-aware species, Robot sapiens. Our own lineage branched off many times from our apelike ancestors, and so will the flowering, subdividing lineage of robots, which perhaps needs its own Linnaean classification system. The first branch in robot evolution could split between AI and AL—artificial intelligence and artificial life. Lipson stands right at that fork in the road, whose path he’s famous for helping to divine and explore in one of the great digital adventures of our age. It’s the ultimate challenge, in terms of engineering, in terms of creation.

“At the end of the day,” he says with a nearly illegible smile, “I’m trying to recreate life in a synthetic environment—not necessarily something that will look human. I’m not trying to create a person who will walk out the door and say ‘Hello!’ with all sorts of anthropomorphic features, but rather features that are truly alive given the principles of life—traits and behaviors they have evolved on their own. I don’t want to build something, turn it on, and suddenly it will be alive. I don’t want to program it.”

A lot of robotics today, and a lot of science fiction, is about a human who schemes at a workbench in a dingy basement, digitally darning scraps, and then figuring out how to command his scarecrow to do his bidding. Or a mastermind who builds the perfect robots that eventually go haywire in barely discernible stages and start to massacre us, sometimes on Earth, often in space. It assumes an infinite power that humans have (and so can lose) over the machine.

Engineering’s orphans, Lipson’s brainchildren would be the first generation of truly self-reliant machines, gifted with free will by their soft, easily damaged creators. These synthetic souls would fend for themselves, learn, and grow—mentally, socially, physically—in a body not designed by us or by nature, but by fellow computers.

That may sound sci-fi, but Lipson is someone who relishes not only pushing the envelope but tinkering with its dimensions, fabric, inertia, and character. For instance, bothered by a question that nags sci-fi buffs, engineers, and harried parents alike—Where are all the robots we were told would be working for us by now?—he decided to go about robotics in a new way. And also in the most ancient of ways, by summoning the “mother of all designers, Evolution,” and asking a primordial soup of robotic bits and pieces to zing through millions of generations of fluky mutations, goaded by natural selection. Of course, natural evolution is a slapdash and glacially slow mother, yielding countless bottlenecks for every success story. But computers can be programmed to “evolve” at great speed with digital finesse, and adapt to all the rigors of their environment.
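
The recipe behind that “mother of all designers” is simple enough to sketch. What follows is my own toy illustration in Python, not Lipson’s code: the genome of joint parameters and the distance_traveled fitness function are invented stand-ins for a real physics simulation, but the mutate-select-repeat loop is the heart of evolutionary robotics.

```python
# A minimal sketch of the evolutionary loop described above, assuming a toy
# "robot" that is just a vector of joint parameters and a made-up fitness
# function (distance_traveled) standing in for a physics simulation.
import random

POP_SIZE, GENOME_LEN, GENERATIONS, MUTATION_RATE = 50, 8, 200, 0.1

def distance_traveled(genome):
    # Placeholder fitness: a real system would simulate the morphology
    # and measure how far it locomotes before settling.
    return sum(g * (i % 3 - 1) for i, g in enumerate(genome)) - max(genome)

def mutate(genome):
    # Fluky mutations: each gene has a small chance of a random nudge.
    return [g + random.gauss(0, 0.5) if random.random() < MUTATION_RATE else g
            for g in genome]

population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Natural selection: keep the fitter half, refill with mutated offspring.
    population.sort(key=distance_traveled, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=distance_traveled)
print("best genome after evolution:", [round(g, 2) for g in best])
```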

Would they be able to taste and smell? I wonder, realizing at once how outmoded the very question is. Taste buds rise like flaky volcanoes on different regions of the tongue, with bitter at the back, lest we swallow poisons. How hard would it be to evolve a suite of specialized “taste buds” that bear no resemblance to flesh? Flavor engineers at Nestlé in Switzerland have already created an electronic “taster” of espresso, which analyzes the gas different pulls of ristretto give off when heated, translating each bouquet of ions into such human-friendly, visceral descriptions as “roasted,” “flowery,” “woody,” “toffee,” and “acidy.”

However innovative, Lipson’s entities are still primitive when compared to a college sophomore or a bombardier beetle. But they’re the essential groundwork for a culture, maybe a hundred years from now, in which some robots will do our bidding, and others will share our world as a parallel species, one that’s creative and curious, moody and humorous, quick-witted, multitalented, and 100 percent synthetic. Will we regard them as life, as a part of nature, if they’re not carbon-based—as are all of Earth’s plants and animals? Can they be hot-blooded without blood? How about worried, petulant, sly, envious, downright cussed? The future promises fleets of sovereign silicants and, ultimately, self-governing, self-reliant robotic angels and varmints, sages and stooges. To be able to ponder such possibilities is a testament to the infinite agility of matter and its great untapped potential.

Whenever Lipson talks of robots being truly alive, gently stressing the word, I don’t hear Dr. Frankenstein speaking, at one in the morning, as the rain patters dismally against the panes,

when, by the glimmer of the half-extinguished light, I saw the dull yellow eye of the creature open; it breathed hard, and a convulsive motion agitated its limbs. How can I describe my emotions at this catastrophe, or how delineate the wretch whom with such infinite pains and care I had endeavoured to form?

As in the book’s epigraph, lines from Milton’s Paradise Lost: “Did I request thee, Maker, from my clay / To mould Me man?” Mary Shelley suggests that the parent of a monster is ultimately responsible for all the suffering and evil he has unleashed. From the age of seventeen to twenty-one, Shelley was herself consumed by physical creation and literally sparking life, becoming pregnant and giving birth repeatedly, only to have three of her four children die soon after birth. She was continually pregnant, nursing, or mourning—creating and being depleted by her own creations. That complex visceral state fed her delicately horrifying tale.

In her day, scientists were doing experiments in which they animated corpses with electricity, fleetingly bringing them back to life, or so it seemed. Whatever the image of Frankenstein’s monster may have meant to Shelley, it has seized the imagination of people ever since, symbolizing something unnatural, Promethean, monstrous that we’ve created by playing God, or from evil motives or through simple neglect (Dr. Frankenstein’s sin wasn’t in creating the monster but in abandoning it). Something we’ve created that, in the end, will extinguish us. And that’s certainly been a key theme in science-fiction novels and films about robots, androids, golems, zombies, and homicidal puppets. Such ethical implications aren’t Lipson’s concern; that’s mainly for seminars and summits in a future he won’t inhabit. But such discussions are already beginning on some campuses. We’ve entered the age of such college disciplines as “robo-ethics” and Lipson’s specialty, “evolutionary robotics.”

Has it come to this, I wonder, creating novel life forms to prove we can, because a restless mind, left to its own devices and given enough time, is bound to create equally restless devices, just to see what happens? It’s a new threshold of creators creating creative beings.

“Creating life is certainly a tall pinnacle to surmount. Is it also a bit like having children?” I ask Lipson.

“In a different way. . . . Having children isn’t so much an intellectual challenge, but other kinds of challenges.” His eyebrows lift slightly to underline the understatement, and a memory seems to flit across his eyes.

“Yes, but you set them in motion and they don’t remake themselves exactly, but . . .”

“You have very little control. You can’t program a child . . .”

“But you can shape its brain, change the wiring.”

“Maybe you can shape some of the child’s experiences, but there are others you can’t control, and a lot of the personality is in the genes: nature, not nurture. Certainly in the next couple of decades we won’t be programming machines, but . . . like children, exactly . . . we’ll shape their experiences a little bit, and they’ll grow on their own and do what they do.”

“And they’ll simply adjust to whatever job is required?”

“Exactly. Adaptation and adjustment, and with that will come other issues, and a lot of problems.” He smiles the smile of someone who has seen dust-ups on a playground. “Emotions will be a big part of that.”

“You think we’ll get to the point where machines have deep emotions?”

“They will have deep emotions,” Hod says, certain as the tides. “But they won’t necessarily be human emotions. And also machines will not always do what we want them to do. This is already happening. Programming something is the ultimate control. You get to make it do exactly what you want when you want it. This is how robots in factories are programmed to work today. But the more we give away some of our control over how the machine learns . . .”

As a cool gust of October air wafts through the screenless window, carrying a faint scent of crumbling magnolia leaves and damp earth, it trails gooseflesh across my wrist.

“Let me close the window.” Hod slides gingerly off the tall chair as if from a soda fountain seat and closes the gaping mouth of the window.

We were making eye contact; how did he notice my gooseflesh? Stare at something and only the center of your vision is in focus; the periphery blurs. Is his visual compass wider than most people’s, or is he just being a thoughtful host and, sensing a breeze himself, reasoning that since I’m sitting closer to the window I might be feeling chillier? As we talk, his astonishingly engineered biological brain—with its flexible, self-repairing, self-assembling, regenerating components that won’t leave toxic metals when they decompose—is working hard on several fronts: picturing what he wants to say in all of its complexity; rummaging through a sea of raw and thought-rinsed ideas; gauging my level of knowledge—very low in his field; choosing the best way to translate his thoughts into words for this newly met and unfamiliar listener; reading my unconscious cues; rethinking some of his words when they’re barely uttered; revising them right as they’re leaving his mouth, in barely perceptible changes to a word’s opening sound; choosing the ones most accurate on several levels (literally, professionally, emotionally, intellectually) whose meaning I may nonetheless give subtle signs of not really understanding—signs visible to him though unconscious to me, as they surface from a dim warehouse of my previous thoughts and experiences and a vocabulary in which each word carries its own unique emotional valence—while at the same time he’s also forming impressions of me, and gauging the impression I might be forming of him . . .

This is called a “conversation,” the spoken exchange of thoughts, opinions, and feelings. It’s hard to imagine robots doing the same on as many planes of meaning, layered emotions, and spring-loaded memories.

Beyond the windows with their magenta-colored accordion blinds, and the narrow Zen roof garden of rounded stones, twenty yards across the courtyard and street, behind a flimsy orange plastic fence, giant earth-diggers and men in hard hats are tearing up rock and soil with the help of machines wielding fierce toothy jaws. Such brutish dinosaurs will one day give way to rational machines that can transform themselves into whatever the specific task requires—perhaps the sudden repair of an unknown water pipe—without a boss telling them what to do. By then the din of jackhammers will also be antiquated, though I’m sure our hackles will still twitch at the scrape of clawlike metal talons on rock.

“When a machine learns from experience, there are few guarantees about whether or not it will learn what you want,” Lipson continues as he remounts his chair. “And it might learn something that you didn’t want it to learn, and yet it can’t forget. This is just the beginning.”

I shudder at the thought of traumatized robots.

He continues, “It’s the unspoken Holy Grail of a lot of roboticists—to create just this kind of self-awareness, to create consciousness.”

What do roboticists like Lipson mean when they speak of “conscious” robots? Neuroscientists and philosophers are still squabbling over how to define consciousness in humans and animals. On July 7, 2012, a group of neuroscientists met at the University of Cambridge to declare officially that nonhuman animals “including all mammals and birds, and many other creatures, including octopuses” are conscious. To formalize their position, they signed a document entitled “The Cambridge Declaration on Consciousness.”

But beyond being conscious, humans are quintessentially self-aware. Some other animals—orangutans and other cousins of ours, dolphins and octopuses, and some birds—are also self-aware. A wily jay might choose to cache a seed more quietly because other jays are nearby and it doesn’t want the treasure stolen; an octopus might take the lid off its habitat at night to go for a stroll and then replace the lid when it returns lest its keepers find out. They possess a theory of mind, and can intuit what a rival might do in a given situation and act accordingly. They exhibit deceit, compassion, the ability to see themselves through another’s eyes. Chimpanzees feel deeply, strategize, plan, think abstractly to a surprising degree, mourn, empathize some, deceive, seduce, and are all too conscious of life’s pressures, if not its chastening illusions. They’re blessed and burdened, as we are, by strong family ties and quirky personalities, from madcap to martinet. They jubilate when happy, mope when sad.

I don’t think they fret and reason endlessly about mental states, as we do. They simply dream a different dream, probably much like the one we used to dream, before we crocheted into our neural circuitry the ability to have ideas about everything. Other animals may know you know something, but they don’t know you know they know. Other mammals may think, but we think about having thoughts. Taxonomists categorize us as the subspecies Homo sapiens sapiens, adding the extra sapiens because we don’t just know, we know that we know. Our infants respond to their surroundings and other people, and start evolving a sense of self during their first year. Like orangutans, elephants, and even European magpies, they can identify themselves in a mirror, and they gather that others have a personal point of view that differs from their own.

So when people talk about robots being conscious and self-aware, they mean a range of knowing. Some robots may be smarter than humans, more rational, more skillful in designing objects, and better at anything that requires memory and computational skills. I reckon they can be deeply curious (though not exactly the way we are), and will grow even more so. They can already do an equivalent of what we think of as ruminating and obsessing, though in fewer dimensions. Engineers are designing robots with the ability to attach basic feelings to sensory experience, just as we do, by interacting with the world, filing the memory, and using it later to predict the safety of a situation or the actions of others.

Lipson wants his robots to make assumptions and deductions based on past experiences, a skill underlying our much-prized autobiographical memory, and an essential component of learning. Robots will learn through experience not to burn a hand on a hot stove, and to look both ways when crossing the street. There are also subtle, interpersonal clues to decipher. For instance, Lipson uses the British “learnt” instead of the American “learned,” but the American “while” instead of the British “whilst.” So, from past experience, I deduce that he learned English as a child from a British speaker, and assume he has lived in the United States just long enough to rinse away most of the British traces.

Yet however many senses robots may come to possess—and there’s no reason why they shouldn’t have many more than we, including sharper eyesight and the ability to see in the dark—they’ll never be embodied exactly like us, with a thick imperfect sediment of memories, and maybe a handful of diaphanous dreams. Who can say what unconscious obbligato prompts a composer to choose this rhythm or that—an irregular pounding heart, tinnitus in the ears, a lover who speaks a foreign language, fond memories evoked by the crackle of ice in winter, or an all too human twist of fate? There would be no Speak, Memory from Nabokov, or The Gulag Archipelago from Solzhenitsyn, without the sentimental longings of exile. I don’t know if robots will be able to do the sort of elaborate thought experiments that led Einstein to discoveries and Dostoevsky to fiction.

Yet robots may well create art, from who knows what motive, and enjoy it based on their own brand of aesthetics, satire (if they enjoy satire), or humor. We might enjoy it, too, especially if it’s evocative of work by human artists, if it appeals to our senses. Would we judge it differently? For one of its gallery shows, Yale’s art museum accepted paintings inspired by Robert Motherwell, only to change its mind when it learned they’d been painted by a robot in Lipson’s Creative Machines Lab. It would be fun to discover robots’ talents and sensibility. Futurologists like Ray Kurzweil believe, as Lipson does, that a race of conscious robots, far smarter than we, will inhabit Earth’s near-future days, taking over everything from industry, education, and transportation to engineering, medicine, and sales. They already have a foot in the door.

At the 2013 Living Machines Conference, in London, the European RobotCub Consortium introduced their iCub, a robot that has naturally evolved a theory of mind, an important milestone that children reach at around the age of three or four. Standing about three feet tall, with a bulbous head and pearly white face, programmed to walk and crawl like a child, it engages the world with humanlike limbs and joints, sensitive fingertips, stereo vision, sharp ears, and an autobiographical memory that’s split like ours into the episodic memory of, say, skating on a frozen pond as a child and the semantic memory of how to tilt the skate blades on edge for a skidding stop. Through countless interactions between body and world it codifies knowledge about both. None of that is new. Nor is being able to distinguish between self and other, and intuit the other’s mental state. Engineers like Lipson have programmed that discernment into robots before. But this was the first time a robot evolved the ability all by itself. iCub is just teething on consciousness, to be sure, but it’s intriguing that the bedrock of empathy, deception, and other traits that we regard as conscious can accidentally emerge during a robot’s self-propelled Darwinian evolution.

It happened like this. iCub was created with a double sense of self. If it wanted to lift a cup, its first self told its arm what to do, while predicting the outcome and adjusting its knowledge based on whatever happened. Its second—we can call it “interior”—self received exactly the same feedback, but, instead of acting on the instructions, it could only try to predict what would happen in the future. If the real outcome differed from a prediction, the interior self updated its cavernous memory. That gave iCub two versions of itself, an active one and an interior “mental” one. When the researchers exposed iCub’s mental self to another robot’s actions, iCub began intuiting what the other robot might do, based on personal experience. It saw the world through another’s eyes.
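
In outline, that arrangement can be sketched in a few lines of Python. This is a cartoon of the idea, not the RobotCub software: the one-parameter world model, the learning rate, and the world() dynamics are all invented for illustration.

```python
# A schematic sketch of the two-self arrangement described above: an acting
# self issues motor commands, while an interior self only predicts outcomes
# and learns from its prediction errors.
import random

class InteriorSelf:
    """Receives the same feedback as the acting self, but only predicts."""
    def __init__(self):
        self.weight = 0.0          # crude one-parameter world model

    def predict(self, command):
        return self.weight * command

    def update(self, command, actual_outcome):
        # Learn only when prediction and reality disagree.
        error = actual_outcome - self.predict(command)
        self.weight += 0.1 * error * command

def world(command):
    # Hidden dynamics the robot must discover (with sensor noise).
    return 2.0 * command + random.gauss(0, 0.05)

mental_self = InteriorSelf()
for _ in range(500):                      # the acting self practices...
    cmd = random.uniform(-1, 1)
    mental_self.update(cmd, world(cmd))   # ...the interior self just watches

# Once trained, the same interior model can be pointed at *another*
# agent's actions to guess what that agent will do next.
observed_cmd = 0.5
print("predicted outcome for the other agent:", mental_self.predict(observed_cmd))
```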

As for our much-prized feats of scientific reasoning and insight, Lipson’s lab has created a Eureqa machine, a computerized scientist able to make a hypothesis, design an experiment, contemplate the results, and derive laws of nature from them. Plumbing the bottomless depths of chaos, it divines meaning. Assigned a problem in Newtonian physics (how a double pendulum works), “the machine took only a couple of hours to come up with the basic laws of motion,” Lipson says, “a task that occupied Newton for years after he was inspired by an apple falling from a tree.”
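
Eureqa’s actual machinery is far more sophisticated—it evolves whole populations of candidate equations—but the kernel of the idea, propose formulas, score them against the data, keep the law that fits, can be sketched in miniature. The free-fall “experiment” and the four candidate laws below are my own invented example.

```python
# A toy sketch of symbolic regression, the idea underlying Eureqa-style law
# discovery: propose candidate formulas, score them against observations,
# and keep the law that fits best.
import math

# Synthetic observations: distance fallen (m) at time t (s), d = 0.5 * g * t^2
DATA = [(t, 0.5 * 9.81 * t ** 2) for t in (0.5, 1.0, 1.5, 2.0, 2.5)]

# A small hypothesis space of candidate laws d(t), each with one constant c.
CANDIDATES = {
    "d = c * t":        lambda c, t: c * t,
    "d = c * t^2":      lambda c, t: c * t ** 2,
    "d = c * sqrt(t)":  lambda c, t: c * math.sqrt(t),
    "d = c * exp(t)":   lambda c, t: c * math.exp(t),
}

def fit_error(f):
    # Grid-search the constant, return the best (error, constant) pair.
    return min(((sum((f(c, t) - d) ** 2 for t, d in DATA), c)
                for c in (x / 100 for x in range(1, 2001))),
               key=lambda pair: pair[0])

for name, f in sorted(CANDIDATES.items(), key=lambda kv: fit_error(kv[1])[0]):
    err, c = fit_error(f)
    print(f"{name:18s} best c = {c:5.2f}   squared error = {err:10.3f}")
# The quadratic law wins, with c ≈ 4.9 — i.e., d = (g/2) t^2 rediscovered.
```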

Eureqa takes its name from a legendary moment in the annals of science, more than two thousand years ago, when Archimedes—already a renowned mathematician and inventor with formidable mastery in his field—was soaking in his bathtub, his senses temporarily numbed by warm water and weightlessness, and the solution to a problem came to him in a flash of insight. Leaping from the tub, he supposedly ran naked through the streets of Syracuse yelling, “Eureka!” (“I have found it!”)

For two thousand years, that’s how traditional science has run: solid learning and mastery, then the kindling of observation and a spark of insight. The Eureqa machine marks a turning point in the future of how science is done. Once upon a time, Galileo studied the movement of the heavenly bodies, Newton watched an apple fall in his garden. Today science is no longer that simple because we wade through oceans of information, generate vast amounts of additional data, and analyze it on an unprecedented scale. Virtuoso number-crunchers, our computers can extract data without bias, boredom, vanity, selfishness, or greed, quickly doing the work that used to take one human a lifetime.

In 1972, when I was writing my first book, The Planets: A Cosmic Pastoral, a suite of scientifically accurate poems based on the planets, I used to hang out in the Space Sciences Building at Cornell. The astronomer Carl Sagan was on my doctoral committee, and he kindly gave me access to NASA photographs and reports. At that time, it was possible in months to learn nearly everything humans knew about the other planets, and the best NASA photos of the outermost planets were only arrows pointing to balls of light. Over the decades, I attended flybys at the Jet Propulsion Laboratory in Pasadena, California, and watched the first exhilarating images roll in from distant worlds as Viking and Voyager reached Mars, Jupiter, Saturn, Neptune, and an entourage of moons. In the 1980s, it was still possible for an amateur to learn everything humans knew about the planets. Today that’s no longer so. The Alps of raw data would take more than one lifetime to summit, passing countless PhD dissertations at campsites along the trail.

But all that changes with a tribe of Eureqa-like machines. A team of scientists at Aberystwyth University, led by Professor Ross King, has revealed the first machine able to deduce new scientific knowledge about nature on its own. Named Adam, the two-armed robot designed and performed experiments to investigate the genetics of baker’s yeast. Carrying out every stage of the scientific process by itself without human intervention, it can perform a thousand experiments a day and make discoveries.

More efficient science will solve modern society’s problems faster, King believes, and automation is the key. He points out that “automation was the driving force behind much of the nineteenth- and twentieth-century progress.” In that spirit, King’s second-generation laboratory robot, named Eve, is even faster and nimbler than Adam. It’s easy to become mesmerized watching a webcam of Eve testing drugs, her automated arms and stout squarish body shuffling trays, potions, and tubes with tireless precision, as she peers through ageless nonblinking eyes, while saving the sanity of countless graduate students, spared sleepless nights in the lab tending repetitive experiments.

How extraordinary that we’ve created peripheral brains to discover the truths about nature that we seek. We’re teaching them how to work together calmly as a society, share data at lightning speed, and cooperate so much better than we do, rubbing brains together in the invisible drawing room we sometimes call the “cloud.” Undaunted, despite our physical and mental limitations, we design robots to continue the quest we began long ago: making sense of nature. Some call it Science, but it’s so much larger than one discipline, method, or perspective.

I find it touchingly poetic to think that as our technology grows more advanced, we may grow more human. When labor, science, manufacturing, sales, transportation, and powerful new technologies are mainly handled by savvy machines, humans really won’t be able to compete in those sectors of the economy. Instead we may dominate an economy of interpersonal or imaginative services, in which our human skills shine.

Smart robots are being nurtured and carefully schooled in laboratories all over the world. Thus far, Lipson’s lab has programmed machines to learn things unassisted, teaching themselves the basic skills of how to walk, eat, metabolize, repair wounds, grow, and design others of their kind. At the moment, no one robot can do everything; each pursues its own special destiny. But one day, all the lab machines will merge into a single stouthearted . . . being—what else would we call it?

One of Lipson’s robots knows the difference between self and other, the shape of its physique, and whether it can fit into odd spaces. If it loses a limb, it revises its self-image. It senses, recollects, keeps updating its data, just as we do, so that it can predict future scenarios. That’s a simple form of self-awareness. He’s also created a machine that can picture itself in various situations—very basic thought experiments—and plan what to do next. It’s starting to think about thinking.
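
That self-correcting loop can be caricatured in code. The sketch below is my own illustration, not the lab’s software: a robot that believes it has four legs, notices that its predictions no longer match its senses, and revises its self-image accordingly. The tilt model and the numbers are invented.

```python
# A cartoon of the self-modeling loop described above: the robot keeps a
# guess about its own body ("how many legs do I have?") and revises that
# self-image whenever predicted and sensed behavior stop matching.
def predicted_tilt(assumed_legs, commanded_force):
    # Internal self-model: more legs -> more support -> less body tilt.
    return commanded_force / assumed_legs

def sensed_tilt(actual_legs, commanded_force):
    # What the real (possibly damaged) body reports back.
    return commanded_force / actual_legs

self_image = 4          # the robot believes it has four legs
actual_body = 3         # ...but one limb has just broken off

for force in (1.0, 2.0, 3.0):
    expected = predicted_tilt(self_image, force)
    measured = sensed_tilt(actual_body, force)
    if abs(expected - measured) > 0.01:
        # Mismatch: search for the body hypothesis that explains the data.
        self_image = min(range(1, 9),
                         key=lambda legs: abs(predicted_tilt(legs, force) - measured))
        print(f"self-image revised: I seem to have {self_image} legs now")
```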

“Can I meet it?” I ask.

His eyes say: If only.

Leading me across the hall, into his lab, he stops in front of a humdrum-looking computer on a desk, one of many scattered around the lab.

“All I can show you is this ordinary-looking computer,” he says. “I know it doesn’t look exciting because the drama is unfolding in the software inside the machine. There’s another robot,” he says, gesturing to a laptop, “that can look at a second robot and try to infer what that other robot is thinking, what the other robot is going to do, what the other robot might do in a new situation, based on what it did in a previous situation. It’s learning another’s personality. These are very simple steps, but they’re essential tools as we develop this technology. And with this will come emotions, because emotions, at the end of the day, have to do with the ability to project yourself into different situations—fear, various needs—and anticipate the rewards and pain in many future dramas. I hope that, as the machines learn, eventually they’ll produce the same level of emotions as in humans. They might not be the same type of emotions, but they will be as complex and rich as in humans. But it will be different, it will be alien.”

I’m fascinated by the notion of “other types of emotions.” What would a synthetic species be like without all the lavish commotion of sexual ardor, wooing, jealousy, longing, affectionate bonds, shared experiences? Just as I long to know about the inner (and outer) lives of life forms on distant planets, I long to know about the obsessions, introspections, and emotional muscles that future species of robots might wrestle with. A powerful source of existential grief comes from accepting that I won’t live long enough to find out.

“Emotional robots . . . I’ve got a hunch this isn’t going to happen in my lifetime.” I’m a bit crestfallen.

“Well, it will probably take a century, but that’s a blip in human history, right?” he says in a reassuring tone. “What’s a century? It’s nothing. If you look at the curve of humans on Earth,” he says, curving one hand a few inches off the table, “we’re right there. That’s a hundred years.”

“So much has happened in just the last two hundred years,” I say, shaking my head. “It’s been quite an express ride.”

“Exactly. And the field is accelerating. But there’s good and bad, right? If you say ‘emotions,’ then you have depression, you have deception, you have creativity and curiosity—creativity and curiosity we’re already seeing here in various machines.

“My lab is called the Creative Machines Lab because I want to make machines which are creative, and that’s a very very controversial topic in engineering, because most engineers—close the door, speak quietly—are stuck in the Intelligent Design way of thinking, where the engineer is the intelligent person and the machines are being created, they just do the menial stuff. There’s a very clear division. The idea that a machine can create things—possibly more creatively than the engineer that designed the machine—well, it’s very troubling to some people, it questions a lot of fundamentals.”

Will they grow attached to others, play games, feel empathy, crave mental rest, evolve an aesthetics, value fairness, seek diversion, have fickle palates and restless minds? We humans are so far beyond the Greek myth of Icarus, and its warning about overambition (father-and-son inventors and wax wings suddenly melting in the sun). We’re now strangers in a strange world of our own devising, where becoming a creator, even the Creator, of other species is the ultimate intellectual challenge. Will our future robots also design new species, bionts whose form and mental outlook we can’t yet imagine?

“What’s this?” I ask, momentarily distracted by a wad of plastic nestled on a shelf.

He hands me the strange entanglement of limbs and joints, a small robot with eight stiff black legs that end in white ball feet. The body is filamental, like a child’s game of cat’s cradle gone terribly wrong, and it has no head or tentacles, no bulging eyes, no seedlike brain. It wasn’t designed as an insect. Or designed by humans, for that matter.

Way back in our own evolution, we came from fish that left the ocean and flopped from one puddle to another. In time they evolved legs, a much better way to get around on land. When Lipson’s team asked a computer to invent something that could get from point A to point B—without programming it how to walk—at first it created robots reminiscent of that fish, with multihinged legs, flopping forward awkwardly. A video, posted on YouTube, records its first steps, with Lipson looking on like a proud parent, one who appreciates how remarkable such untutored trials really are. Bits of plastic were urged to find a way to combine, think as one, and move themselves, and they did.

In another video, a critter trembles and skitters, rocks and slides. But gradually it learns to coordinate its legs and steady its torso, inching forward like a worm, and then walking insectlike—except that it wasn’t told to model an insect. It dreamt up that design by itself, as a more fluent way forward. Awkward, but effective. Baby steps were fine. Lipson didn’t expect grace. He could make a spider robot that would run faster, look better, and be more reliable, but that’s not the point. Other robots are bending, flexing, and running, using replica tendons and muscles. DARPA’s “cheetah” was recently clocked at a tireless 30 mph sprint. But that cheetah was programmed; it would be a four-legged junkpile without a human telling it what to do. Lipson wants the robot to do everything on its own, eclipsing what any human can design, unfettered by the paltry ideas of its programmers.

It’s a touching goal. Surpassing human limits is so human a quest, maybe the most ancient one of all, from an age when dreams were omens dipped in moonlight, and godlike voices raged inside one’s head. A time of potent magic in the landscape. Mountains attracted rain clouds and hid sacred herbs, malevolent spirits spat earthquakes or drought, tyrants ruled certain trees or brooks, offended waterholes could ankle off in the night, and most animals parleyed with at least one god or demon. What was human agency compared to that?