[FOUR]
TO INFINITY AND BEYOND: THE POWER OF EXPONENTIAL TRENDS
The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.
—ISAAC ASIMOV
 
 
 
“I decided I would be an inventor when I was five. Other kids were wondering what they would be, but I always had this conceit. And I was very sure of it and I’ve never really deviated from it.”
Ray Kurzweil stuck to his dreams. Growing up in Queens, New York, he wrote his first computer program at the age of twelve. When he was seventeen, he appeared on the game show I’ve Got a Secret. His “secret” was a song composed by a computer that he had built.
Soon after, Kurzweil created such inventions as an automated college application program, the first print-to-speech reading machine for the blind (considered the biggest advancement for the visually impaired since the Braille writing system in 1829), the first computer flatbed scanner, and the first large-vocabulary speech recognition system. The musician Stevie Wonder, who used one of Kurzweil’s reading machines, then urged him to invent an electronic music synthesizer that could re-create the sounds of pianos and other orchestral instruments. So Kurzweil did. As his inventions piled up, Forbes magazine called him “the Ultimate Thinking Machine” and “rightful heir to Thomas Edison.” Three different U.S. presidents have honored him, and in 2002 he was inducted into the National Inventors Hall of Fame.
Kurzweil has found that the challenge isn’t just inventing something new, but doing so at just the right moment that both technology and the marketplace are ready to support it. “About thirty years ago, I realized that timing was the key to success.... Most inventions and predictions tend to fail because the timing is wrong.”
Kurzweil has now founded a business that centers on figuring out this timing issue. Guessing the future seems a task for a psychic, not a prolific inventor. But Kurzweil comes with a pretty good batting average. In the early 1980s, he made the seemingly absurd forecast that a little-known project called the Arpanet would become a worldwide communications network, linking together humanity in a way previously impossible. Around the same time, he made the equally ridiculous claim that the cold war, which had just heated up with the Soviet invasion of Afghanistan, was going to end in just a few years. The Internet and the fall of the Berlin Wall made Kurzweil look like a clairvoyant.
“You’ll often hear people say that the future is inherently unpredictable and then they will put up some stupid prediction that never came to bear. But actually the parameters of it are highly predictable,” says Kurzweil. He isn’t arguing that he can see into the future exactly and his business plan isn’t to pick lottery numbers. Rather, he argues that the overall flow of the future can be predicted, especially when it comes to technologic change, even if the individual components cannot. He makes a comparison to thermodynamics. Imagine a kettle of water being put on a stove. What each individual molecule of water does as it heats up is inherently unpredictable. But the overall system is predictable; even if we don’t know which water molecule will turn to steam first, we know the kettle will ultimately whistle.
An example of how Kurzweil’s business makes money predicting the future happened in 2002. His research group looked at all the various technology trends and predicted that a pocket-sized reading device would be possible within four years. Such a prediction seemed a bad investment, as the technology wasn’t even invented yet. But they positioned a project to be ready to deliver in 2006, just as the advancing technology made it workable. As he describes, “We use predictions to catch the moving train of technology at the right time.”
To have such a business model, you have to have an immense faith in science. For Kurzweil, this even covers how he plans to extend his own life. Each day, the sixty-year-old takes a mix of some 250 dietary supplements. “I’ve slowed down aging to a crawl,” he says. “By most measures my biological age is about forty, and I have some hormone and nutrient levels of a person in his thirties.” With life spans advancing and technologic breakthroughs happening every day, Kurzweil believes that if he can just hold out long enough, he may even be able to live forever. It sounds crazy, but then again, this is a guy whom Bill Gates described as a “visionary thinker” and to whom thirteen universities have given honorary degrees.
Kurzweil gets a reported $25,000 for every speech he gives on the future of technology. At many of these speeches, he shares the stage with “Ramona.” She is an AI programmed to be his alter ego (that is, if he looked like a twenty-five-year-old female rock star) and projected onto a screen behind him. The presentations, with both him and Ramona interacting with the audience, are considered so revolutionary and creative that they even inspired the 2002 movie S1m0ne, in which Al Pacino played a filmmaker who creates a Ramona-like AI to be the perfect actress.
With such a backstory, Kurzweil can sound a bit like the twenty-first-century technology version of “Professor” Harold Hill from The Music Man. But some serious folks are counting on his understanding of how the future is unfolding. Wall Street investors are pouring money into his FatKat (Financial Accelerating Transactions from Kurzweil Adaptive Technologies), the first hedge fund to make investment decisions using AI predictions. He is also one of five members of the U.S. Army’s Science Board and has thrice given the keynote speech at the army’s annual conference on the future of war.
Kurzweil describes the robots we now see in Iraq and Afghanistan, like the Predator or PackBot, as “only an early harbinger” of greater trends. Just around the curve, he believes, is a moment when robotics and AI will “create qualitative change and social, political, and technological change, changing what human life is like and how we value it.” He expounds, “In just 20 years the boundary between fantasy and reality will be rent asunder.”
Kurzweil recalls that it was in 2002 when he first shared such visions with the army on the future of technology and war. His discussion of AI and robotics becoming the norm in war “was seen as amusing, even entertaining.” Now his predictions of the future are “very much at the mainstream.”

EXPONENTIAL POWER

Kurzweil doesn’t just pull his vision of the future from a crystal ball, but rather from a historic analysis of technology and how it changes the world around us. As opposed to gaining in a linear fashion, he argues that “the pace of change of our human-created technology is accelerating and that its powers are expanding at an exponential pace.”
When something is moving at an “exponential” pace, it grows faster and faster each time it gets bigger. A familiar example is the idea of compound interest. Imagine a genie offers you the choice of either $1 million today or a magic penny that doubles in value every day for one month. The obvious choice would seem to be to take the $1 million. But that would actually be the sucker’s play. Because of the exponential growth, the penny would be worth $10 million at the end of that month.
The challenge of exponential change is that it can be deceptive, as things often start out at a seemingly slow pace. Halfway into the month, the penny would be worth only about $300. It’s only as the curve bends upward that the change truly accelerates.
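For readers who want to check the genie’s math, a rough sketch (in Python, and assuming a thirty-day month with one doubling per day) runs as follows:

```python
# The genie's offer: a penny that doubles in value every day for a
# thirty-day month, versus a flat $1 million up front.
penny = 0.01
for day in range(1, 31):
    penny *= 2
    if day == 15:
        # Halfway through the month the penny still looks like the sucker's choice.
        print(f"Day {day}: ${penny:,.2f}")    # about $328
print(f"Day 30: ${penny:,.2f}")               # about $10.7 million
```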
Kurzweil’s favorite example to illustrate how understanding exponentials can prove tricky even for scientists is the Human Genome Project. When it started in 1990, it had a fifteen-year goal of sequencing the more than three billion nucleotides that go into our complete DNA. The problem was that at the start of the project, only one ten-thousandth of the genome was mapped. “Skeptics said there’s no way you’re gonna do this by the turn of the century.” Indeed, by year seven, close to the planned halfway point of the project, only 1 percent was complete. Kurzweil says, “People laughed and thought it would take another 693 years to complete. But they didn’t account for the exponential.” By that point the project was doubling its pace every year. “If you double from 1 percent every year over seven years, you get 100 percent. It was right on schedule.” Here again, Kurzweil proved right and the project was completed in time.
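The same arithmetic explains why the skeptics’ 693-year estimate missed the mark. A quick check (again only a sketch) of how many doublings it takes to get from 1 percent complete to 100 percent:

```python
import math

# Starting at 1 percent complete and doubling every year, how many
# doublings does it take to pass 100 percent?
doublings_needed = math.ceil(math.log2(100 / 1))
print(doublings_needed)    # 7 -- year seven plus seven more doublings lands on schedule

# Spelled out: 1% -> 2% -> 4% -> 8% -> 16% -> 32% -> 64% -> 128%
```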
Exponential change is perhaps most evident in technology products. A quick look at your cell phone should be persuasive enough. The first commercial mobile phone was the Motorola DynaTAC, which came out in 1983. It cost $3,500 and weighed two and a half pounds; it was nicknamed “The Brick.” By 1996, Motorola was selling the StarTAC, which cost $500. Today, cell phones fit in your pocket and have gone from a luxury item to commonplace; some two billion people around the world have them and many are even thrown in for free when you purchase long-distance plans.
When it comes to computer technology, exponential progress is encapsulated in “Moore’s law.” In 1965, Gordon Moore, who would go on to cofound Intel, noticed that the number of transistors on a microchip was roughly doubling every two years. This realization was more exciting than it sounds: each time the number of transistors on a chip doubled, the transistors also got smaller and closer together, so electric signals had less distance to travel between them. As companies crammed more and more transistors onto each chip, year after year, Moore foresaw that the chips themselves would get faster and faster. He predicted that this simple doubling factor would spur everything from more powerful computers to automated cars.
Moore’s prediction of microchip transistor doubling has held true in the four decades since, and has even sped up, now doubling every eighteen months. Showing how far we have come, Tradic, the first computer using transistors, built in 1955, had just eight hundred. Just over fifty years later, Moore’s old company Intel released the Montecito, which has 1.72 billion transistors on just one chip. Computers powered by these microchips have gotten more and more capable, again in an exponential way rather than an additive one. For example, the circa 2005 Dell computer I typed this book out on is already antiquated, but it has roughly the same capacity and power as all the computers that the entire Pentagon had in the mid-1960s combined.
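A back-of-the-envelope check, using only the figures above, shows how those doublings stack up (a rough sketch, not an exact industry history):

```python
import math

# How many doublings separate Tradic's roughly 800 transistors (1955)
# from Montecito's roughly 1.72 billion on a single chip?
doublings = math.log2(1.72e9 / 800)
print(round(doublings))    # about 21 doublings

# Compounding works the other way too: doubling every two years for
# four decades multiplies the transistor count about a millionfold.
print(2 ** 20)             # 1,048,576
```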
But personal computers tell only part of the story of where Moore’s law has taken us. An average PC today works at the scale of megaflops, able to do millions of calculations per second. That is pretty impressive-sounding. But a present-day supercomputer, such as Purple, which simulates tests of nuclear weapons at Lawrence Livermore National Laboratory, runs at 100 teraflops, or 100 million million calculations per second. Calculations that Purple can finish in six weeks would have taken the supercomputers of ten years ago, like the ones that first beat the human chessmasters, over five thousand years. But today’s supercomputer is tomorrow’s Commodore 64. The Department of Energy has already contracted IBM to build a next-generation supercomputer able to do 1,000 trillion calculations per second, or one petaflop, equivalent to the power of ten Purples.
The corollary to Moore’s law is not just that microchips, and the computers powered by them, are getting more and more powerful, but that they are also getting cheaper. When Moore first wrote on the phenomenon in 1965, a single transistor cost roughly five dollars. By 2005, five dollars bought five million transistors. As costs fall exponentially, demand grows exponentially. In 2003, Intel made its one billionth microchip after thirty-five years of continuous production. Only four years later, it had made its next one billion chips. The same change has happened with the ability to store data. The cost of saving anything from the military’s Predator drone footage of Iraqi insurgents to your old Depeche Mode songs is going down by 50 percent roughly every fifteen months.
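Those price figures imply their own doubling clock. The sketch below (rough numbers only) backs it out and shows what a fifteen-month halving in storage costs compounds to over a decade:

```python
import math

# Five dollars bought one transistor in 1965 and about five million in
# 2005: how often did transistors-per-dollar double over those 40 years?
doublings = math.log2(5_000_000 / 1)
print(round(40 / doublings, 1))      # about 1.8 years per doubling

# Storage costs halving every 15 months compound to roughly a
# 256-fold price drop over a single decade.
print(0.5 ** (120 / 15))             # 0.00390625, i.e., about 1/256th of the old price
```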
Moore’s law explains how and why we have entered a world in which refrigerator magnets that play Christmas jingles have more computing power than the entire NORAD nuclear defense system had in 1965. Exponential change builds upon exponential change: advances in one field feed advances in others, and falling prices in one area spur new development in the next. A good example is how advancements in microchips made portable electronics accessible to consumers. As more and more people bought items such as video and then digital cameras, the cost of equipping robots with the same kind of cameras (their electronic vision systems) dropped by as much as 75 percent. This lowered the barriers to entry for robots across the marketplace, further dropping costs for robots as a whole, as more people could buy them. Rodney Brooks at iRobot calls this kind of cross-transfer “riding someone else’s exponentials.”

AN EXPONENTIAL WORLD

Historic data shows exponential patterns beyond Moore’s law, which referred only to semiconductor complexity. For example, the annual number of “important discoveries” as determined by the Patent Office has doubled every twenty years since 1750. Kurzweil calls this pattern of exponential change in our world “The Law of Accelerating Returns.”
This convergence of exponential trends is why technologic change, especially for electronics, comes not only quicker, but in bundles, rather than staying within one category. While microchip performance is now doubling roughly every eighteen months and storage every fifteen months, we are also seeing similar acceleration in categories far and wide. Wireless capacity doubles every nine months. Optical capacity doubles every twelve months. The cost/performance ratio of Internet service providers is doubling every twelve months. Internet bandwidth backbone is doubling roughly every twelve months. The number of human genes mapped per year doubles every eighteen months. The resolution of brain scans (a key to understanding how the brain works, an important part of creating strong AI) doubles every twelve months. And, as a by-product, the number of personal and service robots has so far doubled every nine months.
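Doubling periods like these are easier to compare when converted into annual growth factors. A small sketch (using the periods cited above, and assuming smooth compounding) makes the point:

```python
# If a capability doubles every d months, it grows by a factor of
# 2 ** (12 / d) each year. The periods below are the ones cited above.
doubling_period_months = {
    "microchip performance": 18,
    "data storage": 15,
    "wireless capacity": 9,
    "optical capacity": 12,
    "personal and service robots": 9,
}
for name, months in doubling_period_months.items():
    annual_factor = 2 ** (12 / months)
    print(f"{name}: grows roughly {annual_factor:.1f}x per year")
```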
The darker side of these trends has been exponential change in our capability not merely to create, but also to destroy. The modern-day bomber jet has roughly half a million times the killing capacity of the Roman legionnaire carrying a sword in hand. Even within the twentieth century, the range and effectiveness of artillery fire increased by a factor of twenty, antitank fire by a factor of sixty.
These changes in capabilities then change the way we fight. For instance, exponentially more lethal weapons helped lead to equivalent exponential “stretching” of the battlefield. In antiquity, when you divided the number of people fighting by the area they would typically cover, on average it would take a Greek hoplite and five hundred of his buddies to cover an area the size of a football field. This is why in movies like Spartacus or 300 you can see the entire army during a battle. By the time of the American Civil War, weapons had gained such power, distance, and lethality that roughly twenty soldiers would fight in that same space of a football field. By World War I, it was just two soldiers in that football field. By World War II, a single soldier occupied roughly five football fields to himself. In Iraq in 2008, the ratio of personnel to territory was roughly 780 football fields per one U.S. soldier.
The same exponential change in how we fight has also gone on in the short time that war has taken place in the air. During World War II, roughly 108 planes were needed to take out a single target. By the time of the airstrikes over Afghanistan in 2001, the ratio had flipped; each plane was destroying 4.07 targets on average per flight.
Connectivity is also expanding at an exponential rate, allowing new technologies to change human society quicker and quicker. For example, the wheel first appeared in Mesopotamia more than five thousand years ago, but it still took centuries for it to come into common use in animal-drawn carts and plows. The agricultural revolution that made possible human cities, and what we now know as “civilization,” played out over several millennia. By the eighteenth century, communication and transportation had sped up to the point that it took just under a century for the steam engine to become similarly widespread, launching the Industrial Age. Today, the spread of knowledge is nearly instantaneous. The Internet took roughly a decade to be widely adopted (and Internet traffic doubles every six months), and now that it is in place, a new invention can be shared across the world almost the moment it appears.
And yet this change happened so quickly that we often forget how new it all is. In less than a decade, over a billion people, whether it was soldiers, terrorists, or grandmothers in Peoria, went from (1) never having heard of the Internet, to (2) having heard of it, but never having used it (I still recall my mother asking, “What is this new ‘Inter-web’ thing?” soon to be followed by her asking about sending an “electronic letter”), to (3) trying it out, such as sending their first e-mail (when I was in college, e-mail was primarily used for sending out “Your momma so fat” joke lists), to (4) using it on a regular basis, to (5) that same soldier, terrorist, or grandmother not being able to professionally or socially succeed without it. And with the rise of three-dimensional “virtual worlds” like Second Life, that massive change is already old news.
When Kurzweil did a historic analysis of overall technologic change (measuring its advancement, complexity, and importance to human society), he found that this convergence of invention, communication, and progress doubled just about every ten years. Individual technologies certainly move in fits and starts, but the aggregate flow of technologic change has clocked in at a fairly steady 7 percent annual rate of growth. This means that for the period up to the Industrial Age, the overall pace of technologic change was so slow that no one would significantly notice it within their lifetime. A Roman legionnaire or a knight of the Middle Ages could go his entire life with maybe one new technology changing the way he lived, communicated, played, or fought. By the late 1800s, change was playing out over decades and then years, fast enough that people began to call it the “Golden Age of Invention.”
But this change period was just the start of an acceleration up that exponential curve. The current rates of doubling mean that we experienced more technologic change in the 1990s than in the entire ninety years beforehand. To think about it another way, technology in 2000 was roughly one thousand times more advanced, more complex, and more integral to our day-to-day lives than the technology of 1900 was to our great-grandparents. More important, where they had decades and then years to digest each new invention, ours come in ever bigger bundles, in ever smaller periods of time.
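The back-of-the-envelope math behind those claims is straightforward (a sketch only, taking the 7 percent figure at face value):

```python
import math

# At a steady 7 percent annual rate, how long does technologic change
# take to double, and how much does it compound over a full century?
annual_rate = 0.07
doubling_years = math.log(2) / math.log(1 + annual_rate)
print(round(doubling_years, 1))            # about 10.2 years per doubling

century_factor = (1 + annual_rate) ** 100
print(round(century_factor))               # about 868, i.e., roughly a thousandfold per century
```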

“THE SINGULARITY IS NEAR”

“We often say things like, ‘No way this will happen in a hundred years!’ But we are talking about a hundred years at the current rate of progress,” Ray Kurzweil points out. “If we are using today’s rate, the twentieth century only had about twenty years of progress.”
If Moore’s law continues to play out, some pretty amazing advancements will happen that will shape the world of robots and war. By 2029, $1,000 computers would do twenty million billion calculations a second, equivalent to what a thousand brains can do. This means that the sum total of human brainpower would be less than 1 percent of all the thinking power on the globe.
Likewise, the trends for storing information are leading in the same direction. Hugo de Garis, the head of the StarBrain AI project, has written an article on this, cheerily entitled “Building Gods or Building Our Potential Exterminators?” In it he writes, “Within a single human generation, it will very probably be possible to store a single bit of information on a single atom.” If this proves true, an object the size of a disc would then be able to hold a trillion trillion (a 1 with twenty-four zeros after it) bits of information. By comparison, the human brain is built from a genome of roughly twenty-three million bits of information. If computers can match this almost incomprehensible processing speed with such amazing memory, the advantage that human brains have of being so massively parallel starts to fall by the wayside. Moreover, many of the latest AI research projects, including StarBrain, are modeled after human brains, so they can build this parallelism into their own programs, nullifying our advantage.
With the ability to think faster and source more data, more and more becomes potentially possible for machines. The question then becomes, will computers ever be able to match the human brain in its thinking ability and then surpass it? Think of it this way: if a computer can process and store information billions or trillions of times faster than a human, what research could it then accomplish? Would it be inconceivable for it to think up things just a thousand times faster than we can, or even better? Could it even become so advanced as to become self-aware? This is the essence of what scientists call “strong AI” or what science fiction writers call “HAL.” If you project the current trends even further, Kurzweil claims, we are on track to experience “about twenty thousand years of progress in the twenty-first century, one thousand times more than we did in the twentieth century.”
At a certain point, things become so complex we just don’t know what is going to happen. The numbers become so mind-boggling that they simply lose their meaning. We hit the “Singularity.”

A SINGULAR SENSATION

In astrophysics, a “singularity” is a state in which things become so radically different that the old rules break down and we know virtually nothing. Stephen Hawking, for example, describes black holes as singularities where “the laws of science and our ability to predict the future would break down.”
The historic parallel to singularities is “paradigm shifts,” when some concept or new technology comes along that wipes out the old way of understanding things. Galileo’s demonstration that the Earth revolves around the sun, and not the other way around, would be an example for astronomy, much as Einstein’s theory of relativity was for physics. The key is that someone living in a time before a paradigm shift would be unable to understand the world that follows.
An example that many scientists cite: imagine asking monks living in 1439 to predict the advances of the future. They might predict such slight changes as better quills or ink for their illuminated manuscripts, or how a new well might be built. But they would likely not be able to conceive of how a rickety contraption made that year by Johannes Gutenberg, a German goldsmith, would become what Time magazine called “the most important invention of the millennium.” Before the creation of the printing press and the singular break it created for society, it would have been simply impossible for those monks to imagine such things as mass literacy, the Reformation, or the Sports Illustrated swimsuit issue.
The idea of a singularity in relation to computer technology first came from Vernor Vinge. Vinge is a noted mathematician and computer scientist, as well as an award-winning science fiction writer. His most recent novel, Rainbows End: A Novel with One Foot in the Future, is set in 2025. He describes a world in which people “Google all the time, everywhere, using wearable computers, and omnipresent sensors.” Vinge doesn’t dedicate the book to his wife or parents or cat. Instead, perhaps sucking up to our future owners, he dedicates it to “the Internet-based cognitive tools that are changing our lives—Wikipedia, Google, eBay, and the others of their kind, now and in the future.”
In 1993, Vinge authored a seminal essay. The title he chose, “The Coming Technological Singularity: How to Survive in the Post-Human Era,” pretty much says it all. Vinge described the ongoing explosion in computing power and projected that “within thirty years, we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era will be ended.” Once superhuman intelligence gets involved, argued Vinge, the pace of technological development would accelerate even further than the doubling we have gone through for the last generations. There would be a constant feedback loop of artificial intelligence always getting better by improving itself, but with humankind now outside the equation. This would be the “point where our old models must be discarded and a new reality rules.”
Vinge presented the paper at a NASA colloquium, arguing, “We are on the edge of change comparable to the rise of human life on Earth.... Developments that before were thought might only happen ‘in a million years’ (if ever) will likely happen in the next century.” His idea became incredibly influential in both science and science fiction (writers such as Neal Stephenson and William Gibson wrestled with what it would mean for humanity, and such movies as The Matrix are set in a post-Singularity future).
Vinge’s concept underlies how Kurzweil and other futurists envision the coming decades. If the present trends in technology continue, the current exponential growth will pick up so much steam that we hit a paradigm shift. The old models of understanding the world and what is and isn’t possible will no longer hold true. “It’s a future period,” writes Kurzweil, “during which the pace of technological change will be so rapid, its impact so deep that human life will be irreversibly transformed.” When we look at supercomputers and robots, we may well be like those monks seeing Gutenberg’s printing press for the first time, trying to wrap our heads around what such a metal contraption really signifies.
But it is called the Singularity for a reason, as proponents of the idea see the change with AI and robotics as different from all the other paradigm shifts that have come before. Robert Epstein, a psychologist who has also worked on AI, explains, “It’s not merely a technology that will change how we act, but it is a technology that is akin to a new species. It will change everything. Indeed, more than we can imagine because the new entity will be doing the imagining.”
Vinge was ambivalent about whether this Singularity with an uppercase S was a good or bad outcome. He thought it could play out in a way that “fits many of our happiest dreams: a time unending, where we can truly know one another and understand the deepest mysteries.” Or it could lead to the “physical extinction of the human race.” You win some, you lose some.
By contrast, Kurzweil is the ultimate optimist. “At the onset of the twenty-first century, humanity stands on the verge of the most transforming and the most thrilling period in its history. It will be an era in which the very nature of what it means to be human will be both enriched and challenged, as our species breaks the shackles of its genetic legacy and achieves inconceivable heights of intelligence, material progress, and longevity.”
Whether and when this all happens is an issue of debate. Kurzweil thinks the Singularity will become possible in the 2020s, but projects some lag time might be built in. Even then, the potential changes that he projects will occur soon (before most of us pay off the mortgages on our houses) sound pretty stunning. If the current rates of change hold up, by 2045, he writes, “the non-biological intelligence created in that year will be one billion times more powerful than the sum of all human intelligence today.” Another way of thinking about it is that Kurzweil and others are arguing that my generation will be the last generation of humans to be the smartest thing on the planet. “Generation X” takes on a whole new meaning.

QUESTIONING THE RAPTURE

Of course, not everyone buys such projections, or even the idea of the Singularity. Some argue it isn’t possible, and others just mock it. The most stinging may be those who call the Singularity “The Rapture for Nerds.”
That said, an amazing array of people have begun to weigh in on the side of the Singularity. Bill Joy, the cofounder of Sun Microsystems, and thus one of the Internet’s godfathers, is very much a believer. “By 2030 we are likely to be able to build machines a million times as powerful as the personal computers of today.” He then projects that “once an intelligent robot exists, it is only a small step to a robot species—to an intelligent robot that can make evolved copies of itself.” In turn, while doing research for this book, I interviewed a U.S. special operations forces officer, just back from hunting the terrorist Abu Musab al-Zarqawi in Iraq. Our discussion was supposed to be on how his team uses unmanned systems, but at the end of the discussion, he added, “By the way, Joy’s thesis is spot-on.”
The economist Jeremy Rifkin, named by the National Journal as one of the 150 most influential people in shaping U.S. government policy, agrees as well. “Never before in history has humanity been so unprepared for the new technological and economic opportunities, challenges, and risks that lie on the horizon. Our way of life is likely to be more fundamentally transformed in the next several decades than in the previous 1,000 years. By the year 2025, we and our children may be living in a world utterly different from anything human beings have ever experienced in the past.” The Singularity was even the subject of a 2007 U.S. Congress study by the Joint Economic Committee, entitled “The Future Is Coming Sooner Than You Think.”
Rodney Brooks at iRobot acknowledges that the idea of the Singularity seems too futuristic to be true, but then describes a pattern he has noticed again and again. Incredibly bright people often draw a line in the sand on what computers will “never be able to do.” But then technology continually forces them to erase that line and draw a new one. He likes to cite the story of Hubert Dreyfus as instructive for those who doubt the potential of technology.
Dreyfus is a noted philosopher at the University of California-Berkeley, located in the heart of Singularity fandom. In 1967, he famously predicted that no computer would ever beat him at chess. It turns out he wasn’t the greatest of players and lost to a computer in his first and only match soon after. Dreyfus, who went on to author the 1972 book What Computers Can’t Do, was undeterred. He revised his prediction to say that a computer would never be able to beat a skilled chess player, a nationally ranked player. A computer soon did. When that happened, he revised his prediction again (as well as his book title, which in 1992 was reissued as What Computers Still Can’t Do), claiming that while computers may be able to beat most humans, they would never be able to beat the very best, such as the world champion chessmaster. Of course, this then happened in 1997 with IBM’s Deep Blue.
Psychologist and AI expert Robert Epstein, a Singularity proponent who administers the Turing test program, acknowledges that “some people, smart people, say I am full of crap. My response is that someday you are going to be having that argument with a computer. As soon as you open your mouth, you’ve lost. In that context, you can’t win. The only person able to deny the changes occurring around us is the one who hides, the one who has their head in the sand.”

THE MILITARY AND THE SINGULARITY

Whether and when the Singularity will come depends on whether the same sort of exponential growth that happened in the past continues in the years ahead. Does an exponential past necessarily mean an exponential future?
Between now and the Singularity (or not), all sorts of things could happen, from an asteroid hitting the Earth to World War III (then again, wars tend to spur technologic change to go even faster). More pertinently, it would seem that Moore’s law can’t stay true forever. At a certain point, around 2020 in the projections, the transistors packed onto a microchip shrink down toward the atomic level; that is, their features approach the size of the atoms themselves, leaving no room to make them smaller. Overheating is another problem at this density, as the electric currents have to run through ever more tightly packed transistors.
Yet, again, technology may well leap over and around the problem. In 2007, IBM and Intel found a way to use hafnium (the same element whose nuclear isomer has been studied for novel UAV power systems) to build a next generation of microchips with circuits as small as 45 nanometers, about one two-thousandth the width of a human hair. Other breakthroughs have been made in spin-based, or “spintronic,” circuitry. Instead of switching an electric current on and off to create the 0s and 1s that make up binary language, these designs take the current largely out of the equation and use magnetism to control the direction in which electrons spin. Not only is there far less overheating, but it also means the chip can hold its state for as long as it keeps its magnetization. That is, while an electric charge needs to be linked to some power source, a magnet keeps its field even after you pull the plug. Here again, the credit goes to the military, with DARPA pouring more than $200 million into such research.
For this reason people like Microsoft founder Bill Gates are uniformly optimistic that each of the various hurdles to robotics will be knocked out in the coming years. “The challenges facing the robotics industry are similar to those we tackled in computing three decades ago.” Or as military robotics developer Robert Finkelstein puts it, finding the solutions needed to take robots to the next level and beyond “doesn’t require us to try to discover new laws of physics, antimatter, or cold fusion. It’s just a matter of proper funding and dedication.” Which brings us back to the military.
Some believe the military is an integral part of bringing the Singularity into being, because of the massive investments it has made in R&D for things like artificial intelligence and sensors, as well as the immense marketplace it has created for hardware. I asked an executive at one defense contractor whether he agreed with the crazy ideas being bandied about on singularities and robots becoming as smart as humans. He replied, “If this war keeps going on a few more years, then yes.”
Robert Epstein sees the military’s role as more than simply funding the Singularity. It is the most likely integrator needed to bring it all together. He describes how there are all sorts of research programs and companies around the globe, working on various technologies, from pattern recognition software and robotic sensors to artificial intelligence and subatomic microchips. “When you marry all that up with the strategic planning that the military brings to the table, you will end up with a qualitative advance like no other. At that point prediction of what comes next becomes difficult. . . . That’s when you hit the Singularity, where all the rules change, in part because we are no longer making the rules.”
In the end, we don’t yet know whether computer, AI, and robotics development will reach a singularity or the Singularity. Indeed, this could be the one prediction that Kurzweil and his cohort simply get wrong. We do know, however, that major shifts are already going on in computing power and machine intelligence. And if the trends for the future hold true even at the most minimal level, then things are going to get real interesting in the not too distant future.