Craig Venter is sixty-five years old, of average height, with a thick frame, a full beard, and a wide smile. His dress is casual; his eyes are not. They are blue and deep set, and when coupled to the slash of gray running through his right eyebrow and the mild arch to his left, he has the appearance of a modern-day wizard—like Gandalf with a solid stock portfolio and a pair of flip-flops.
Today, besides the flip-flops, Venter is also sporting a bright Hawaiian shirt and faded jeans. This is his tour guide attire, as today he’s touring me around his namesake: the J. Craig Venter Institute (JCVI for short). Located in San Diego’s “biology alley,” JCVI’s West Coast arm is a modest two-story research facility, housing sixty scientists and one miniature poodle. The poodle’s name is Darwin, and he’s a few steps ahead of us, now darting through the building’s main entrance hall. He stops at the bottom of a flight of stairs, directly beside an architectural model of a four-tiered building. A plaque beside the model reads: “The first carbon-neutral, green laboratory facility.” This is JCVI 2.0, Craig’s vision for his future institute.
“If I can get it funded,” says Venter, “that’s what I want to build.”
The price tag on this dream runs north of $40 million, but he’ll get it funded. Venter is to biology what Steve Jobs was to computers. Genius with repeat success.
In 1990 the US Department of Energy (DOE) and the National Institutes of Health (NIH) jointly launched the Human Genome Project, a fifteen-year program with the goal of sequencing the three billion base pairs making up the human genome. Some thought the project impossible; others predicted that it would take a half century to complete. Everyone agreed it would be expensive. A budget of $3 billion was set aside, but many felt it wasn’t enough. They might still be feeling this way too, except that in 1998 Venter decided to get into the race.
It wasn’t even much of a race. Building on work that had come before, Venter and his company, Celera, delivered a fully sequenced human genome in less than one year (tying the government’s ten-year effort) for just under $100 million (while the government spent $1.5 billion). Commemorating the occasion, President Bill Clinton said, “Today we are learning the language with which God created life.”
As an encore, in May 2010 Venter announced his next success: the creation of a synthetic life form. He described it as “the first self-replicating species we’ve had on the planet whose parent is a computer.” In less than ten years, Venter both unlocked the human genome and created the world’s first synthetic life form—genius with repeat success.
To pull off this second feat, Venter strung together over a million base pairs, creating the largest piece of manmade genetic code to date. After engineering this code, he sent it to Blue Heron Biotechnology, a company that specializes in synthesizing DNA. (You can literally email Blue Heron a long string of As, Ts, Cs, and Gs—the four letters of the genetic alphabet—and they will return a vial filled with copies of that exact strand of DNA.) Venter then took the Blue Heron strand and inserted it into a host bacterial cell. The host cell “booted up” the synthetic program and began generating proteins specified by the new DNA. As replication proceeded, each new cell carried only the synthetic instructions, a fact that Venter authenticated by embedding a watermark into the sequence. The watermark, a coded sequence of Ts, Cs, Gs, and As, contains instructions for translating DNA code into English letters (with punctuation) and an accompanying coded message. When translated, this message spells the names of the forty-six people who worked on the project; quotations from novelist James Joyce, as well as physicists Richard Feynman and Robert Oppenheimer; and a URL for a website that anyone who deciphers the code can email.
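The watermark idea is easy to sketch in code. The scheme below is a toy of my own devising, not JCVI’s actual cipher: each character becomes a four-base codon, since four bases drawn from {A, C, G, T} give 4^4 = 256 values, enough for one byte.

```python
# Toy DNA "watermark" codec: one byte of text per four bases.
# Illustrative only -- NOT the actual JCVI watermark cipher.
BASES = "ACGT"

def encode(text: str) -> str:
    """Turn ASCII text into a strand of As, Cs, Gs, and Ts."""
    strand = []
    for byte in text.encode("ascii"):
        codon = ""
        for _ in range(4):                 # write the byte in base 4
            codon = BASES[byte % 4] + codon
            byte //= 4
        strand.append(codon)
    return "".join(strand)

def decode(dna: str) -> str:
    """Recover the hidden message from the strand."""
    chars = []
    for i in range(0, len(dna), 4):
        value = 0
        for base in dna[i:i + 4]:
            value = value * 4 + BASES.index(base)
        chars.append(chr(value))
    return "".join(chars)

strand = encode("HELLO")
assert decode(strand) == "HELLO"
print(strand)
```

Venter’s real watermark used a richer mapping with punctuation; the point is only that arbitrary text, names and quotations included, can ride along inside synthetic DNA.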
But the real objective was neither secret messages nor synthetic life. This project was merely the first step. Venter’s actual goal is the creation of a very specific new kind of synthetic life—the kind that can manufacture ultra-low-cost fuels. Rather than drilling into the Earth to extract oil, Venter is working on a novel strain of algae whose molecular machinery can take carbon dioxide and water and create oil or any other kind of fuel. Interested in pure octane? Aviation gasoline? Diesel? No problem. Give your designer algae the proper DNA instructions and let biology do the rest.
To further this dream, Venter has also spent the past five years sailing his research yacht, Sorcerer II, around the globe, scooping up algae along the way. The algae is then run through a DNA sequencing machine. Using this technique, Venter has built a library of over forty million different genes, which he can now call upon for designing his future biofuels.
And these fuels are only one of his goals. Venter wants to use similar methods to design human vaccines within twenty-four hours rather than the two to three months currently required. He’s thinking about engineering food crops with a fiftyfold production improvement over today’s agriculture. Low-cost fuels, high-performing vaccines, and ultrayield agriculture are just three of the reasons that the exponential growth of biotechnology is critical to creating a world of abundance. In the chapters to come, we’ll examine this in greater depth, but for now, let’s turn to the next category on our list.
It’s fall 2009, and Vint Cerf, chief Internet evangelist for Google, is at Singularity University to talk about the future of networks and sensors. In Silicon Valley, where T-shirts and jeans are the normal uniform, Cerf’s preference for double-breasted suits and bow ties is unusual. But it’s not just his dress that makes him stand out. Nor the fact that he’s won the National Medal of Technology, the Turing Award, and the Presidential Medal of Freedom. Rather, what truly sets Cerf apart is that he’s one of the people most associated with the design, creation, promotion, guidance, and growth of the Internet.
During his graduate student years, Cerf worked in the networking group that connected the first two nodes of the Advanced Research Projects Agency Network (Arpanet). Next he became a program manager for the Defense Advanced Research Projects Agency (DARPA), funding various groups to develop TCP/IP technology. During the late 1980s, when the Internet began its transition to a commercial opportunity, Cerf moved to the long-distance telephone company MCI, where he engineered the first commercial email service. He then joined ICANN (Internet Corporation for Assigned Names and Numbers), the key governance organization for the Internet, and served as its chairman from 2000 to 2007. For all of these reasons, Cerf is considered one of the “fathers of the Internet.”
These days, Father is excited about the future of his creation—that is, the future of networks and sensors. A network is any interconnection of signals and information, of which the Internet is the most significant example. A sensor is a device that detects information—temperature, vibration, radiation, and such—that, when hooked up to a network, can also transmit this information. Taken together, the future of networks and sensors is sometimes called the “Internet of things,” often imagined as a self-configuring, wireless network of sensors interconnecting, well, all things.
In a recent talk on the subject, Mike Wing, IBM’s vice president of strategic communications, describes it this way: “Over the past century but accelerating over the past couple of decades, we have seen the emergence of a kind of global data field. The planet itself—natural systems, human systems, physical objects—has always generated an enormous amount of data, but we weren’t able to hear it, to see it, to capture it. Now we can because all of this stuff is now instrumented. And it’s all interconnected, so now we can actually have access to it. So, in effect, the planet has grown a central nervous system.”
This nervous system is the backbone of the Internet of things. Now imagine its future: trillions of devices—thermometers, cars, light switches, whatever—all connected through a gargantuan network of sensors, each with its own IP address, each accessible through the Internet. Suddenly Google can help you find your car keys. Stolen property becomes a thing of the past. When your house is running out of toilet paper or cleaning products or espresso beans, it can automatically reorder supplies. If prosperity is really saved time, then the Internet of things is a big pot of gold.
As powerful as it will be, the impact the Internet of things will have on our personal lives is dwarfed by its business potential. Soon, companies will be able to perfectly match product demand to raw materials orders, streamlining supply chains and minimizing waste to an extraordinary degree. Efficiency goes through the roof. With critical appliances activated only when needed (lights that flick on as someone approaches a building), the energy-saving potential alone would be world changing. And world saving. A few years ago, Cisco teamed up with NASA to put sensors all over the planet to provide real-time information about climate change.
To take the Internet of things to the level predicted—with a projected planetary population of 9 billion and the average person surrounded by 1,000 to 5,000 objects—we’ll need 45 trillion unique IP addresses (45 × 10^12). Unfortunately, today’s IP version 4 (IPv4), invented by Cerf and his colleagues in 1977, can provide only about 4 billion addresses (and is likely to run out by 2014). “My only defense,” says Cerf, “is that the decision was made at a time when it was uncertain if the Internet would work,” later adding that “even a 128-bit address space seemed excessive back then.”
Fortunately, Cerf has been leading the charge for the next generation of Internet protocols (creatively called IPv6), which has enough room for 3.4 × 10^38 (340 trillion trillion trillion) unique addresses—roughly 50,000 trillion trillion addresses per person. “IPv6 enables the Internet of things,” he says, “which in turn holds the promise for reinventing almost every industry. How we manufacture, how we control our environment, and how we distribute, use, and recycle resources. When the world around us becomes plugged in and effectively self-aware, it will drive efficiencies like never before. It’s a big step toward a world of abundance.”
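The arithmetic behind these figures is easy to verify. Taking the text’s projections of nine billion people and up to five thousand objects each:

```python
# Address-space arithmetic for the Internet of things.
ipv4 = 2 ** 32                 # 32-bit addresses: ~4.3 billion
ipv6 = 2 ** 128                # 128-bit addresses: ~3.4e38

needed = 9 * 10 ** 9 * 5_000   # 9 billion people x 5,000 objects each

assert needed == 45 * 10 ** 12   # the 45 trillion addresses cited above
assert needed > ipv4             # IPv4 falls short by four orders of magnitude
assert ipv6 > 3.4e38             # IPv6 has room to spare
print(f"IPv6 addresses per person: {ipv6 // (9 * 10 ** 9):.1e}")
```

Even divided among nine billion people, IPv6 leaves each of us tens of thousands of trillion trillion addresses, which is why the protocol, not the sensors, is what makes the Internet of things possible.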
It’s a Saturday in July 2010, and Junior is driving me around Stanford University. He’s a smooth operator: staying on his side of the road, making elegant turns, stopping at traffic lights, avoiding pedestrians, dogs, and bicyclists. This may not sound like much, but Junior is not your typical driver. Specifically, he’s not human. Rather, Junior is an artificial intelligence, an AI, embodied in a 2006 Volkswagen Diesel Passat wagon, to be inexact. To be exact, well, that’s a little trickier.
Sure, Junior has all the standard stylings of German engineering, but he also has a Velodyne HD LIDAR system strapped to the roof—which alone costs $80K and generates 1.3 million 3-D data points of information every second. Then there’s a six-camera omnidirectional HD video system; six radar detectors for picking out long-range objects; and one of the most technologically advanced Global Positioning Systems on the planet (worth $150K). Furthermore, Junior’s backseat holds two 22-inch monitors and six-core Intel Xeons, giving him the processing power of a small supercomputer. And he needs all of this, because Junior is an autonomous vehicle, known in hacker slang as a “robocar.”
Junior was built in 2007 at Stanford University by the Stanford Racing Team. He is the second autonomous vehicle built by the team. The first was another VW named Stanley. In 2005 Stanley won DARPA’s Grand Challenge, a $2 million incentive prize competition for the fastest autonomous vehicle to complete a 130-mile off-road course. The competition was organized after the 2001 invasion of Afghanistan, to help design robotic vehicles for troop resupply. Junior is the second iteration, designed for DARPA’s 2007 follow-up, Urban Challenge (a 60-mile race through a cityscape), in which he placed second.
So successful was the Grand Challenge—and so tantalizing is the Department of Defense’s desire for AI-driven vehicles—that almost every major car company now has an autonomous division. And military applications are only part of the picture. In June 2011 Nevada’s governor approved a bill that requires the state to enact regulations allowing autonomous vehicles to operate on public roads. If the experts have their timing right, that should happen around 2020. Sebastian Thrun, previously the director of the Stanford Artificial Intelligence Laboratory and now the head of Google’s autonomous car lab, feels the benefits will be significant. “There are nearly 50 million auto accidents worldwide each year, with over 1.2 million needless deaths. AI applications such as automatic braking or lane guidance will keep drivers from injuring themselves when falling asleep at the wheel. This is where artificial intelligence can help save lives every day.”
Robocar evangelist Brad Templeton feels that saved lives are just the beginning. “Each year, we spend 50 billion hours and $230 billion in accident costs—or 2 percent to 3 percent of the GDP—because of human driver error. Plus, these vehicles make the adoption of alternative fuel technologies considerably easier. Who cares if the nearest hydrogen filling station is twenty-five miles away, if your car can refuel itself while you sleep?” In the fall of 2011, to further this process along, the X PRIZE Foundation announced its intent to design an annual “human versus machine car race” through a dynamic obstacle course to mark the point in time when autonomous drivers begin outperforming the best human race car drivers in the world.
And autonomous cars are but a small slice of a much larger picture. Diagnosing patients, teaching our children, serving as the backbone for a new energy paradigm—the list of ways that AI will reshape our lives in the years ahead goes on and on. The best proof of this, by the way, is the list of ways that AI has already reshaped our lives. Whether it’s the lightning-fast response of the Google search engine or the speech recognition used for directory information calls, we are already AI codependent. While some ignore these “weak AI” applications, waiting instead for the “strong AI” of Arthur C. Clarke’s HAL 9000 computer from 2001: A Space Odyssey, it’s not like we haven’t made progress. “Consider the man-versus-machine chess competition between Garry Kasparov and IBM’s Deep Blue,” says Kurzweil. “In 1992, when the idea that a computer could play against a world chess champion was first proposed, it was dismissed outright. But the constant doubling of computer power every year enabled the Deep Blue supercomputer to defeat Kasparov only five years later. Today you can buy a championship-level Chess AI for your iPhone for less than ten dollars.”
So when will we have true HAL-esque AI? It’s hard to say. But IBM recently unveiled two new chip technologies that move us in this direction. The first integrates electrical and optical devices on the same piece of silicon. These chips communicate with light. Electrical signals require electrons, which generate heat, which limits the amount of work a chip can perform and requires a lot of power for cooling. Light has neither limitation. If IBM’s estimations are correct, over the next eight years, its new chip design will accelerate supercomputer performance a thousandfold, taking us from our current 2.6 petaflops to an exaflop (that’s 10 to the 18th, or a quintillion operations per second)—or one hundred times faster than the human brain.
The second is SyNAPSE, Big Blue’s brain-mimicking silicon chip. Each chip has a grid of 256 parallel wires representing dendrites and a perpendicular set of wires for axons. Where these wires intersect are the synapses, and one chip has 262,144 of them. In preliminary tests, the chips were able to play a game of Pong, control a virtual car on a racecourse, and identify an image drawn on a screen. These are all tasks that computers have accomplished before, but these new chips don’t need specialized programs to complete each task; instead they respond to real-world circumstances and learn from their experience.
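The synapse count is just crossbar geometry: 256 dendrite wires crossed by 1,024 axon wires give 262,144 intersections. The sketch below illustrates the idea with a generic Hebbian update; it is my own illustration, and IBM’s actual learning circuitry differs.

```python
# A toy synaptic crossbar: a weight lives where a dendrite and an axon wire cross.
DENDRITES, AXONS = 256, 1024
assert DENDRITES * AXONS == 262_144        # one potential synapse per crossing

weights = [[0.0] * AXONS for _ in range(DENDRITES)]

def learn(active_dendrites, spiking_axons, rate=0.1):
    """Generic Hebbian rule: strengthen crossings where activity coincides."""
    for i in active_dendrites:
        for j in spiking_axons:
            weights[i][j] += rate

# Two active dendrites paired with three spiking axons strengthen six crossings.
learn(active_dendrites=[0, 7], spiking_axons=[3, 99, 512])
total = sum(map(sum, weights))
print(round(total, 6))
```

The appeal of the design is that learning is local: each crosspoint updates from the activity on its own two wires, with no central program directing it.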
Certainly there’s no guarantee that these things will be enough to create HAL—strong AI may require more than just a brute force solution—but it’s definitely going to rocket us up the abundance pyramid. Just think about what this will mean for the diagnostic potential in personalized medicine; or the educational potential in personalized education. (If you’re having trouble imagining these concepts, just hang on for a few chapters, and I’ll describe them in detail.) Yet as intriguing as all of this might seem, it’s nothing compared to the benefits that AI will provide when combined with our next exponential category: robotics.
Scott Hassan is in his midthirties, medium height, with jet-black hair and large almond-shaped eyes. He is a systems programmer, considered one of the best in the business, but his real passion is for building robots. Not industrial car-building machines, or small, cute Roombas, mind you, but real World’s Fair, I, Robot, help-you-around-the-house type robots.
Certainly we’ve been striving to create such bots for years. Along the way, we’ve learned a number of lessons: first, that these robots are a lot harder to build than expected; second, that they’re also considerably more expensive. But in both categories, Hassan has an advantage.
In 1996, as a computer science student at Stanford, Hassan met Larry Page and Sergey Brin. The duo was then working on a small side project: the search engine predecessor to Google. Hassan helped with the code, and the Google founders issued him shares. He started eGroups, which was later bought by Yahoo! for $412 million. The bottom line is that unlike other wannabe bot builders, Hassan has the capital needed to dent this field.
Furthermore, he’s spent that capital gathering the best and the brightest to his company, Willow Garage (which takes its name from its Willow Road address in Menlo Park). Willow Garage’s main project is a personal robot known by the exotic name PR2 (Personal Robot 2). The PR2 has head-mounted stereo cameras and LIDAR, two large arms, two wide shoulders, a broad and rectangular torso, and a four-wheel base. The whole thing looks sort of human, and sort of like R2D2 on steroids. Sure, this might not sound like much, but Hassan’s invention is literally a whole new breed of bot.
For decades, robotics progress has been hampered because researchers lacked a stable platform for experimentation. Early computer hackers had the Commodore 64 in common, so innovations could be shared by all. This hasn’t been the case with robotics, but that’s where the PR2 comes in. Not designed for consumers, Willow Garage’s robot is a research and development platform, created specifically so that geeks could go to town. And town is where they have gone. A quick tour of YouTube shows the PR2 opening doors, folding laundry, fetching a beer, playing pool, and cleaning house.
But the bigger breakthrough may be the code that runs the PR2. Instead of making his source code proprietary, Hassan has open-sourced the project. “Proprietary systems slow things down,” he says. “We want the best minds around the world working on this problem. Our goal is not to control or own this technology but to accelerate it; put the pedal to the metal to make this happen as soon as possible.”
So what’s going to happen next, and what does it have to do with a world of abundance? Hassan has a list of beneficial applications, including mechanical nurses taking care of the elderly, and mechanized physicians making health care affordable and accessible. But he is most enthralled by economic possibilities. “In 1950 the global world product was roughly four trillion dollars,” he says. “In 2008, fifty-eight years later, it was sixty-one trillion dollars. Where did this fifteenfold increase come from? It came from increased productivity in our factories equipped with automation. About ten years ago, while visiting Japan, I toured a Toyota car manufacturing plant that was able to produce five hundred cars per day with only four hundred employees because of automation. I thought to myself, ‘Imagine if you could take this automation and productivity out of the factory and put it into our everyday lives?’ I believe this will increase our global economy by orders of magnitude in the decades ahead.”
In June 2011 President Obama announced the National Robotics Initiative (NRI), a $70 million multistakeholder effort to “accelerate the development and use of robots in the United States that work beside, or cooperatively with, people.” Just like Willow Garage’s attempt to create a stable platform for development with the PR2, the NRI is structured around “critical enablers”: anchoring technologies that allow manufacturers to standardize processes and products, thus cutting development time and increasing performance. As Helen Greiner, president of the Robotics Technology Consortium, told PCWorld magazine: “Investing in robotics is more than just money for research and development; it is a vehicle to transform American lives and revitalize the American economy. Indeed, we are at a critical juncture where we are seeing robotics transition from the laboratory to generate new businesses, create jobs, and confront the important challenges facing our nation.”
Carl Bass has been making things for the past thirty-five years: buildings, boats, machines, sculpture, software. He’s the CEO of Autodesk, which makes software used by designers, engineers, and artists everywhere. Today he’s touring me around his company’s demonstration gallery in downtown San Francisco. We pass advanced architectural imaging systems powered by Autodesk’s code; screens playing scenes from Avatar created with their tools; and, finally, a motorcycle and an aircraft engine, both manufactured by a 3-D printer running—you guessed it—Autodesk software.
3-D printing is the first step toward Star Trek’s fabled replicators. Today’s machines aren’t powered by dilithium crystals, but they can precisely manufacture extremely intricate three-dimensional objects far cheaper and faster than ever before. 3-D printing is the newest form of digital manufacturing (or digital fabrication), a field that has been around for decades. Traditional digital manufacturers utilize computer-controlled routers, lasers, and other cutting tools to precisely shape a piece of metal, wood, or plastic by a subtractive process: slicing and dicing until the desired form is all that’s left. Today’s 3-D printers do the opposite. They utilize a form of additive manufacturing, where a three-dimensional object is created by laying down successive layers of material.
While early machines were simple and slow, today’s versions are quick and nimble and able to print an exceptionally wide range of materials: plastic, glass, steel, even titanium. Industrial designers use 3-D printers to make everything from lampshades and eyeglasses to custom-fitted prosthetic limbs. Hobbyists are producing functioning robots and flying autonomous aircraft. Biotechnology firms are experimenting with the 3-D printing of organs, while inventor Behrokh Khoshnevis, an engineering professor at the University of Southern California, has developed a large-scale 3-D printer that extrudes concrete for building ultra-low-cost multiroom housing in the developing world. The technology is also poised to leave our world. A Singularity University spin-off, Made in Space, has demonstrated a 3-D printer that works in zero gravity, so astronauts aboard the International Space Station can print spare parts whenever the need arises.
“What gets me most excited,” says Bass, “is the idea that every person will soon have access to one of these 3-D printers, just like we have ink-jet printers today. And once that happens, it will change everything. See something on Amazon you like? Instead of placing an order and waiting twenty-four hours for your FedEx package, just hit print and get it in minutes.”
3-D printers allow anyone anywhere to create physical items from digital blueprints. Right now the emphasis is on novel geometric shapes; soon we’ll be altering the fundamental properties of the materials themselves. “Forget the traditional limitations imposed by conventional manufacturing, in which each part is made of a single material,” explains Cornell University associate professor Hod Lipson in an article for New Scientist. “We are making materials within materials, and embedding and weaving multiple materials into complex patterns. We can print hard and soft materials in patterns that create bizarre and new structural behaviors.”
3-D printing drops manufacturing costs precipitously, as it makes possible an entirely new prototyping process. Previously, invention was a linear game: create something in your head, build it in the real world, see what works, see what fails, start over on the next iteration. This was time consuming, creatively restricting, and prohibitively expensive. 3-D printing changes all of that, enabling “rapid prototyping,” so that inventors can literally print dozens of variations on a design with little additional cost and in a fraction of the time previously required for physical prototyping.
And this process will be vastly amplified when coupled to what Carl Bass calls “infinite computing.” “For most of my life,” he explains, “computing has been treated as a scarce resource. We continue to think about it that way, though it’s no longer necessary. My home computer, including electricity, costs less than two-tenths of a penny per CPU core hour. Computing is not only cheap, it’s getting cheaper, and we can easily extrapolate this trend to where we come to think of computing as virtually free. In fact, today it’s the least expensive resource we can throw at a problem.
“Another dramatic improvement is the scalability now accessible through the cloud. Regardless of the size of the problem, I can deploy hundreds, even thousands of computers to help solve it. While not quite as cheap as computing at home, renting a CPU core hour at Amazon costs less than a nickel.”
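Taking Bass at his word on those two rates, the economics of a sizable job are simple to work out; the job size below is my own example, not his.

```python
# Cost of a compute job at the per-core-hour rates Bass quotes.
HOME_RATE = 0.002        # dollars per core-hour: two-tenths of a penny
CLOUD_RATE = 0.05        # dollars per core-hour: just under a nickel

core_hours = 1_000 * 10  # e.g., a thousand cloud cores running for ten hours

print(f"home:  ${HOME_RATE * core_hours:,.2f}")    # $20.00
print(f"cloud: ${CLOUD_RATE * core_hours:,.2f}")   # $500.00
```

Ten thousand core-hours for the price of a dinner out is what “infinite computing” means in practice: the cost of throwing another thousand machines at a problem has stopped being the limiting factor.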
Perhaps most impressive is the ability of infinite computing to find optimal solutions to complex and abstract questions that were previously unanswerable or too expensive to even consider. Questions such as “How can you design a nuclear power plant able to withstand a Richter 10 earthquake?” or “How can you monitor global disease patterns and detect pandemics in their critical early stages?”—while still not easy—are answerable. Ultimately, though, the most exciting development will be when infinite computing is coupled with 3-D printing. This revolutionary combination thoroughly democratizes design and manufacturing. Suddenly an invention developed in China can be perfected in India, then printed and utilized in Brazil on the same day—giving the developing world a poverty-fighting mechanism unlike anything yet seen.
In 2008 the WHO announced that a lack of trained physicians in Africa would threaten the continent’s future by 2015. In 2006 the Association of American Medical Colleges reported that America’s aging baby boomer population would create a massive shortage of 62,900 doctors by 2015, rising to 91,500 by 2020. The scarcity of nurses could be even worse. And these are just a few of the reasons why our dream of health care abundance cannot come from traditional wellness professionals.
How do we fill this gap? For starters, we are counting on Lab-on-a-Chip (LOC) technologies. Harvard professor George M. Whitesides, a leader in this emerging field, explains why: “We now have drugs to treat many diseases, from AIDS and malaria to tuberculosis. What we desperately need is accurate, low-cost, easy-to-use, point-of-care diagnostics designed specifically for the sixty percent of the developing world that lives beyond the reach of urban hospitals and medical infrastructures. This is what Lab-on-a-Chip technology can deliver.”
Because LOC technology will likely be part of a wireless device, the data it collects for diagnostic purposes can be uploaded to a cloud and analyzed for deeper patterns. “For the first time,” says Dr. Anita Goel, a professor at MIT whose company Nanobiosym is working hard to commercialize LOC technology, “we’ll have the ability to provide real-time, worldwide disease information that can be uploaded to the cloud and used for detecting and combating the early phase of pandemics.”
Now imagine what happens when artificial intelligence gets added to this equation. Sound like a fairy tale? Already, in 2009 the Mayo Clinic used an “artificial neural network” to help physicians rule out the need for invasive procedures by diagnosing patients previously believed to suffer from endocarditis, a dangerous heart condition, with 99 percent accuracy. Similar programs have been used to do everything from reading computed tomography (CT) scans to screening for heart murmurs in children. But combining AI, cloud computing, and LOC technology will offer the greatest benefit. Now your cell-phone-sized device can not only analyze blood and sputum but also have a conversation with you about your symptoms, offering a far more accurate diagnosis than was ever before possible and potentially making up for our coming shortage of doctors and nurses. Since patients will be able to use this technology in their own homes, it will also free up time and space in overcrowded emergency rooms. Epidemiologists will have access to incredibly rich data sets, allowing them to make far more robust predictions. But the real benefit is that medicine will be transformed from reactive and generic to predictive and personalized.
Most historians date nanotechnology—that is, the manipulation of matter at the atomic scale—to physicist Richard Feynman’s 1959 speech “There’s Plenty of Room at the Bottom.” But it was K. Eric Drexler’s 1986 book, Engines of Creation: The Coming Era of Nanotechnology, that really put the idea on the map. The basic notion is simple: build things one atom at a time. What sort of things? Well, for starters, assemblers: little nanomachines that build other nanomachines (or self-replicate). Since these replicators are also programmable, after one has built a billion copies of itself, you can direct those billion to build whatever you want. Even better, because building takes place on an atomic scale, these nanobots, as they are called, can start with whatever materials are on hand—soil, water, air, and so on—pull them apart atom by atom, and use those atoms to construct, well, just about anything you desire.
At first glance, this seems a bit like science fiction, but almost everything we’re asking nanobots to do has already been mastered by the simplest life-forms. Duplicate itself a billion times? No problem, the bacteria in your gut will do that in just ten hours. Extract carbon and oxygen out of the air and turn it into a sugar? The scum on top of any pond has been at it for a billion years. And if Kurzweil’s exponential charts are even close to accurate, then it won’t be long now before our technology surpasses this biology.
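The gut-bacteria figure is just doubling arithmetic. Assuming a division time of roughly twenty minutes, typical of a fast-growing bacterium (the assumption is mine; the text gives only the ten-hour total), a single cell passes a billion copies in thirty generations:

```python
# How long does one bacterium take to reach a billion copies by doubling?
DOUBLING_MINUTES = 20          # assumed division time of a fast-growing bacterium

cells, minutes = 1, 0
while cells < 10 ** 9:
    cells *= 2
    minutes += DOUBLING_MINUTES

print(minutes // 60, "hours")  # 10 hours: 2**30 is the first power of 2 past a billion
```

Thirty doublings at twenty minutes apiece is six hundred minutes, which is where the “just ten hours” in the text comes from.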
Of course, a number of experts feel that once nanotechnology reaches this point, we may lose our ability to properly control it. Drexler himself described a “gray goo” scenario, wherein self-replicating nanobots get free and consume everything in their path. This is not a trivial concern. Nanotechnology is one of a number of exponentially growing fields (also biotechnology, AI, and robotics) with the potential to pose grave dangers. These dangers are not the subject of this book, but it would be a significant oversight not to mention them. Therefore, in our reference section, you’ll find a lengthy appendix discussing all of these issues. Please use this as a launch pad for further reading.
While concerns about nanobots and gray goo are decades away (most likely beyond the time line of this book), nanoscience is already giving us incredible returns. Nanocomposites are now considerably stronger than steel and can be created for a fraction of the cost. Single-walled carbon nanotubes exhibit very high electron mobility and are being used to boost power conversion efficiency in solar cells. And Buckminsterfullerenes (C60), or Buckyballs, are soccer-ball-shaped molecules containing sixty carbon atoms with potential uses ranging from superconductor materials to drug delivery systems. All told, as a recent National Science Foundation report on the subject pointed out, “nanotechnology has the potential to enhance human performance, to bring sustainable development for materials, water, energy, and food, to protect against unknown bacteria and viruses, and even to diminish the reasons for breaking the peace [by creating universal abundance].”
As exciting as these breakthroughs are, there was no place anyone could go to learn about them in a comprehensive manner. It was for this reason I organized the founding conference for Singularity University at the NASA Ames Research Center in September 2008. There were representatives from NASA; academics from Stanford, Berkeley, and other institutions; and industry leaders from Google, Autodesk, Microsoft, Cisco, and Intel. What I remember most clearly from the event was an impromptu speech given by Google’s cofounder Larry Page near the end of the first day. Standing before about one hundred attendees, Page made an impassioned speech that this new university must focus on addressing the world’s biggest problems. “I now have a very simple metric I use: are you working on something that can change the world? Yes or no? The answer for 99.99999 percent of people is ‘no.’ I think we need to be training people on how to change the world. Obviously, technologies are the way to do that. That’s what we’ve seen in the past; that’s what drives all the change.”
And that’s what we built. That founding conference gave way to a unique institution. We run graduate studies programs and executive programs and already have over one thousand graduates. Page’s challenge has become embedded in the university’s DNA. Each year, the graduate students are challenged to develop a company, product, or organization that will positively affect the lives of a billion people within ten years. I call these “ten to the ninth-plus” (or 10^9+) companies. While none of these startups has yet reached its mark (after all, we’re only three years in), great progress is being made.
Because of the exponential growth rate of technology, this progress will continue at a rate unlike anything we’ve ever experienced before. What all this means is that if the hole we’re in isn’t even a hole, the gap between rich and poor is not much of a gap, and the current rate of technological progress is moving more than fast enough to meet the challenges we now face, then the three most common criticisms against abundance should trouble us no more.