APPENDIX
DANGERS OF THE EXPONENTIALS


Why the Future Doesn’t Need Us

One of the first well-constructed examinations of the dangers of exponential technology appeared in April 2000 in Wired, when Bill Joy (then chief scientist at Sun Microsystems) wrote his now famous article “Why the Future Doesn’t Need Us.” Joy’s argument is that the most powerful twenty-first-century technologies—robotics, nanotech, and genetic engineering—all threaten the human species, leaving us only one clear course of action:

The experiences of the atomic scientists clearly show the need to take personal responsibility, the danger that things will move too fast, and the way in which a process can take on a life of its own. We can, as they did, create insurmountable problems in almost no time flat. We must do more thinking up front if we are not to be similarly surprised and shocked by the consequences of our inventions … We are being propelled into this new century with no plan, no control, no brakes.

… The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge.

While I disagree with Joy’s prescription (for reasons we’ll get to), he’s not wrong in his appraisal. Exponential technologies can pose grave dangers. Although those dangers are not the focus of this work, it would be a significant oversight to pass them by without discussion. This, then, is the portion of the text devoted to examining those issues. I will warn you in advance that the discussion of these threats and the potential mitigating factors put forward here is woefully inadequate, given the importance of the subject. My goal is simply to make you aware of the major concerns and challenges, and to provide a macroscopic overview that stimulates your further reading.

Imagining these dangers isn’t hard, as Hollywood has already done much of the heavy lifting. Films like I, Robot, The Terminator, and The Matrix are classic stories of evil, intelligent robots dominating humanity, while Blade Runner, Gattaca, and Jurassic Park focus on the downside of genetic manipulation. Nanotech, it seems, is slightly less cinematic, and shows up only in the 2008 remake of The Day the Earth Stood Still. But the film gives us a fairly accurate version of Eric Drexler’s “grey goo” scenario, wherein self-replicating nanobots get free and consume everything in their path. While it is true that Hollywood has played fast and loose with the facts, it does a pretty fair job in assessing the dangers. Simply put: the wrong technology in the wrong hands leads nowhere good.

Every year at SU, I lead a series of workshops discussing this topic. In these sessions, we try to list and prioritize near-term and medium-term doomsday scenarios. Three near-term concerns consistently rise to the top and are therefore going to be our focus here: the fear of biotechnology in the hands of terrorists; the continued rise of cyber crime; and the loss of jobs resulting from advances in robotics and AI. We’ll take them one at a time.

Bioterrorism

Earlier in this book, I described how high school and college students participating in today’s International Genetically Engineered Machine (iGEM) competition are using genetic engineering to manipulate simple life forms to do useful or interesting things. For example, previous competition winners have built life forms that blink fluorescent green, consume oil spills, or manufacture ulcer-preventing vaccines. But that’s where we are today. Tomorrow is quite a different story.

“There is a new generation of biohackers coming online who will use genetic engineering to start amazing companies,” says Andrew Hessel, cochair of the biotechnology track at Singularity University and an eloquent advocate for today’s DIY-bio movement. “At the same time, however, as the technology becomes easier to use and cheaper to access, biological attacks and hacks are inevitable.”

And the technology is already cheap enough. DNA sequencing and synthesizing machines are available to anyone who can afford a used car. This might be fine, save for the fact that some pretty nasty nucleotide sequences, such as those of the Ebola virus and the 1918 influenza (which killed over fifty million people worldwide), are accessible online. British cosmologist and astronomer royal Lord Martin Rees considers the danger so grave that in 2002 he placed a $1,000 bet with Wired magazine that “by the year 2020, an instance of bio-error or bio-terror will have killed a million people.”

Rees and Hessel have every right to sound the alarm. Dr. Larry Brilliant—who helped lead the WHO team that successfully eradicated smallpox and now runs Jeff Skoll’s Urgent Threats Fund (which focuses, among other things, on pandemics and bioterrorism)—summed up everyone’s fears in a recent article for the Wall Street Journal: “Genetic engineering of viruses is much less complex and far less expensive than sequencing human DNA. Bioterror weapons are cheap and do not need huge labs or government support. They are the poor man’s WMD.”

And terrorists won’t even have to actually create the virus to cause the damage. “The widespread media frenzy around the H1N1 [flu virus] in 2009 panicked the public and saw pharmaceutical companies waste billions to make vaccines that were ultimately ineffective,” explains Hessel. “Fear and ignorance of biological agents can lead to reactive and disruptive societal responses with real-world consequences, even if the agent itself isn’t that harmful.” In effect, just the threat of a biological attack can be severely damaging, producing negative economic, societal, and psychological impact.

One instinctive reaction to this threat has been a call for more regulation on the distribution of technology and reagents, but there’s little proof that such measures will have the desired effect. The first problem is that banning anything tends to create a black market and a criminal workforce dedicated to exploiting that market. In 1919, when America made the manufacture, sale, and transportation of intoxicating liquors illegal, organized crime was the main result. Prison populations soared by 366 percent; total expenditures on penal institutions jumped 1,000 percent; even drunk driving went up by 88 percent. All told, as John D. Rockefeller Jr. (once a vocal proponent of the idea) pointed out: “[D]rinking has generally increased; the speakeasy has replaced the saloon; a vast army of lawbreakers has appeared; many of our best citizens have openly ignored Prohibition; respect for the law has been greatly lessened; and crime has increased to a level never seen before.”

Currently, beyond those drugs that increase athletic performance, there isn’t much of a black market for biologicals. Stricter regulation would change that in a hurry. It would also create a brain drain, as researchers interested in these areas would move to places where the work wasn’t illegal—something we already saw with stem cells. Moreover, there are serious economic considerations. Regulation hurts small businesses most, and it is small businesses that make most economies run. Industrial biotech is a rapidly growing market sector, but that will taper off if we start hamstringing these operations with too many rules—and this decline would hurt more than just our wallets.

“Our greatest resource to combat emerging natural and artificial biological threats is an open and broadly distributed technological capability,” writes synthetic biology pioneer Rob Carlson in a recent overview of the field: “Synthetic Biology 101.” “Regulation that is demonstrably ineffective in improving security could easily end up stifling the technological innovation required to improve security. And make no mistake: we desperately require new technologies to provide for an adequate bio-defense capability.”

Beyond this dark prognosis, a few bright spots are beginning to emerge. For starters, viruses spread only at the speed of human travel—going from infected host to the soon-to-be-infected target. Simulations show that a pandemic, even in a local region, can take months to peak. Meanwhile, warnings and news can spread at the speed of Twitter, Facebook, and CNN. Already systems like Google Flu Trends monitor search data for terms like “flu,” “coughing,” “influenza,” and so on, and can identify early outbreaks. In the near future, Lab-on-a-Chip technologies, which can detect and sequence pathogens and thereby serve as a pandemic early-warning system, will feed data to organizations such as the Centers for Disease Control.
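To make the mechanism concrete, here is a minimal sketch in Python of the kind of signal such systems look for: flag any day whose volume of flu-related queries jumps far above its recent baseline. The numbers, the watch-list framing, and the z-score threshold are assumptions made for illustration; this is not Google Flu Trends’ actual model.

# Toy outbreak detector over daily flu-related search volumes (illustrative
# only; the data and threshold are invented, not Google's actual method).

from statistics import mean, stdev

def is_outbreak_signal(volumes, window=14, z_threshold=3.0):
    """Flag the most recent day if its query volume sits far above the baseline.

    volumes: daily counts of searches for terms like "flu" or "coughing",
    oldest first, with at least window + 1 entries.
    """
    baseline, today = volumes[-(window + 1):-1], volumes[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:  # perfectly flat baseline: treat any increase as a signal
        return today > mu
    return (today - mu) / sigma > z_threshold

# Two quiet weeks of made-up data, then a sudden spike on the final day.
volumes = [100, 97, 105, 99, 102, 98, 101, 103, 96, 104, 100, 99, 102, 101, 180]
print(is_outbreak_signal(volumes))  # True

A real system would also correct for overall search traffic, seasonality, and geography, but the underlying idea is just this: a sharp spike in symptom-related queries is cheap to detect automatically and travels far faster than the virus itself.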

“If regional facilities are put in place to rapidly manufacture and distribute vaccines and antiviral drugs in towns and cities worldwide,” continues Hessel, “then we can imagine providing an effective treatment along the same lines that Norton Antivirus broadcasts an update to protect our computers at home.”

Work on exactly these sorts of facilities is already starting. In May 2011 the UCLA School of Public Health launched a state-of-the-art, $32 million, high-speed, high-volume automated laboratory designed to be the next weapon against bioterrorism and infectious diseases. This global biolab is designed to test high volumes of deadly agents very quickly. “For example,” says UCLA School of Public Health dean Linda Rosenstock, “to find out where an agent came from. Did it originate in Mexico? Did it start in Asia? How’s it changing over time? How might we develop a vaccine to protect against it? Really, the possibilities are endless.”

This is only one piece of what will have to be a much larger puzzle. Larry Brilliant imagines a scenario wherein air filters in major public facilities such as airports and concert halls will be attached to biological monitoring systems. Sneeze in a restroom at Yankee Stadium, and the system will automatically analyze your germs for known and unknown pathogens. Making Brilliant’s idea that much more feasible, in August 2011, researchers at MIT’s Lincoln Laboratory invented a new kind of biosensor that can detect airborne pathogens like anthrax, plague, and smallpox in less than three minutes—a vast improvement over previous efforts.

Despite such progress, a thoroughly robust pathogen-monitoring system will take a few years, maybe even a few decades. In the meantime, another important defense against biological attack may be the telltale electronic droppings that a would-be terrorist generates in his efforts to acquire equipment, supplies, and information. For this reason, the loss of privacy arising from social media and web searches may turn out, ironically, to be a major protector of our freedom and health.

The fact remains that any new technology carries a novel risk. Mostly we live with these trade-offs. The automobile kills about forty thousand Americans a year, while dumping one and a half billion tons of CO2 into the atmosphere, but we have little inclination to ban these machines. The most potent painkillers we’ve developed have both saved lives and ended lives. Even something as straightforward as processed sugar is a double-edged sword, giving us a brilliant array of new foods yet contributing to a bevy of killer diseases. As comic book artist Stan Lee pointed out so many years ago in the first issue of Spider-Man: “With great power there must also come great responsibility.” One thing of which we are certain: biotechnology is a very great power.

Cyber Crime

Marc Goodman is a cyber crime specialist with an impressive résumé. He has worked with the Los Angeles Police Department, Interpol, NATO, and the State Department. He is the chief cyber criminologist at the Cybercrime Research Institute, founder of the Future Crime Institute, and now head of the policy, law, and ethics track at SU. When breaking down this threat, Goodman sees four main categories of concern.

The first issue is personal. “In many nations,” he says, “humanity is fully dependent on the Internet. Attacks against banks could destroy all records. Someone’s life savings could vanish in an instant. Hacking into hospitals could cost hundreds of lives if blood types were changed. And there are already 60,000 implantable medical devices connected to the Internet. As the integration of biology and information technology proceeds, pacemakers, cochlear implants, diabetic pumps, and so on, will all become the target of cyber attacks.”

Equally alarming are threats against physical infrastructures that are now hooked up to the net and vulnerable to hackers (as was recently demonstrated with Iran’s Stuxnet incident), among them bridges, tunnels, air traffic control, and energy pipelines. We are heavily dependent on these systems, but Goodman feels that the technology being employed to manage them is no longer up to date, and the entire network is riddled with security threats.

Robots are the next issue. In the not-too-distant future, these machines will be both commonplace and connected to the Internet. They will have superior strength and speed and may even be armed (as is the case with today’s military robots). But their Internet connection makes them vulnerable to attack, and very few security procedures have been implemented to prevent such incidents.

Goodman’s last area of concern is that technology is constantly coming between us and reality. “We believe what the computer tells us,” says Goodman. “We read our email through computer screens; we speak to friends and family on Facebook; doctors administer medicines based upon what a computer tells them the medical lab results are; traffic tickets are issued based upon what cameras tell us a license plate says; we pay for items at stores based upon a total provided by a computer; we elect governments as a result of electronic voting systems. But the problem with all this intermediated life is that it can be spoofed. It’s really easy to falsify what is seen on our computer screens. The more we disconnect from the physical and drive toward the digital, the more we lose the ability to tell the real from the fake. Ultimately, bad actors (whether criminals, terrorists, or rogue governments) will have the ability to exploit this trust.”

While we have not yet discovered any silver bullet solutions, Goodman does believe there are a few steps that would greatly reduce our peril. The first is better technology and more responsibility. “It’s insane that we allow developers to release crappy software,” he says. “We’re making life hard on consumers and easy on criminals. We have to accept the fact that in today’s world, our lives depend on software, and to allow companies to release products riddled with security flaws in today’s climate doesn’t make any sense.”

The next issue is how we handle the security flaws that still make it through. Right now the responsibility for patching old code is left up to the consumer, but people don’t get around to it as often as they should. Goodman explains: “Ninety-five percent of all hacks exploit old security flaws—flaws for which patches already exist. We need software that automatically updates itself, plugs holes, and thwarts hackers. You have to automate this stuff, put the responsibility on the developer and not the consumer.”
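As a rough illustration of the kind of self-updating behavior Goodman is calling for, here is a minimal, hypothetical sketch of a program that checks for and applies its own security patches instead of waiting on the user. The update URL, manifest format, and apply_patch placeholder are invented for this example and are not drawn from any real product.

# Hypothetical auto-patching sketch: the developer ships the update check with
# the product, so the consumer never has to remember to patch. The endpoint,
# manifest fields, and apply_patch stub are all invented for illustration.

import hashlib
import json
import urllib.request

CURRENT_VERSION = (1, 4, 2)
MANIFEST_URL = "https://updates.example.com/manifest.json"  # placeholder URL

def fetch_manifest(url=MANIFEST_URL):
    """Download the vendor's manifest: latest version, patch URL, SHA-256 hash."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def download_and_verify(manifest):
    """Fetch the patch and refuse it if the checksum doesn't match the manifest."""
    with urllib.request.urlopen(manifest["patch_url"], timeout=60) as resp:
        patch = resp.read()
    if hashlib.sha256(patch).hexdigest() != manifest["sha256"]:
        raise ValueError("checksum mismatch; discarding patch")
    return patch

def apply_patch(patch_bytes):
    """Stub: a real client would install atomically and restart the service."""
    print(f"Applying {len(patch_bytes)}-byte security patch")

def auto_update():
    """Run on a schedule the developer chooses, not at the consumer's discretion."""
    manifest = fetch_manifest()
    if tuple(manifest["version"]) > CURRENT_VERSION:  # e.g., manifest lists [1, 4, 3]
        apply_patch(download_and_verify(manifest))

if __name__ == "__main__":
    auto_update()

The design choice Goodman argues for lives in that last function: the patch cycle is owned by the developer, so the 95 percent of attacks that exploit already-fixed flaws find far fewer unpatched targets.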

Goodman also feels that it’s time to start considering some type of global liability law that covers software security. To this end, on September 9, 2011, Connecticut Democrat Richard Blumenthal introduced the Personal Data Protection and Breach Accountability Act in the Senate. This would enable the US Justice Department to fine companies with more than ten thousand customers $5,000 per day (for a maximum of $20 million in violations) for lax security. Were the bill to pass, standards would be set, and businesses would be required to test their security systems on a regular basis—although who performs the testing and how, and who owns the resulting data, remain thorny concerns.

An international net-based police force able to operate across borders in the same way that the Internet enables criminals to operate across borders is Goodman’s last suggestion. “The Internet has made the world a borderless place,” he says, “but all of our law-enforcement agencies are trapped in the old world—the one where borders still matter a great deal. This makes it almost impossible for law enforcement to deal with cyber criminals. I don’t think we’ll ever completely defeat cyber crime, but if the playing field remains this uneven, we don’t even have a fighting chance.”

Goodman is aware that this proposal makes many uneasy. “Everybody’s main concern is a cop from El Salvador being able to arrest people in Switzerland, but if you made it a net-based policing mechanism (and left arrests up to home country officers), you could sidestep this issue. Certainly there’s still a lot of international law to consider—spewing Nazi propaganda, for example, is free speech in America and illegal in Germany—but we live in a globally connected world. These problems are going to keep on coming up. Isn’t it time we get ahead of the curve?”

Robotics, AI, and the Unemployment Line

There are some curves we might not be able to get ahead of. It won’t be long now before robots make up the majority of the blue-collar workforce. Whether it’s shelf-stocking robots maintaining inventory at Costco or burger-slinging robots serving lunch at McDonald’s, we’re less than a decade away from their arrival. Afterward, humans are going to have a hard time competing. These robots work 24/7, and they don’t get sick, make mistakes, or go on strike. They never get too drunk on Friday night to come to work Saturday morning, and—bad news for the drug-testing industry—have no interest in mind-altering substances. Certainly there will be companies that continue to employ humans out of principle or charity, but it’s hard to envision a scenario where they remain cost competitive for long. So what becomes of these millions of blue-collar workers?

No one is entirely certain, although it’s helpful to remember that this isn’t the first time automation changed the employment landscape. In 1862, 90 percent of our workforce were farmers. By the 1930s, the number was 21 percent. Today it’s less than 2 percent. So what happened to the farm jobs that were displaced by automation? Nothing fancy. The old low-skill jobs were replaced by new higher-skilled jobs, and the workforce was trained to fill them. This is the way of progress. In a world of ever-increasing specialization, we are constantly creating anew. “At a high level,” says Second Life creator Philip Rosedale, “humans have consistently demonstrated an ability to find new things to do that are of greater value when jobs have been outsourced or automated. The industrial revolution, outsourced IT work, China’s low-cost labor force all ultimately created more interesting new jobs than they displaced.”

Vivek Wadhwa, director of research at the Center for Entrepreneurship at Duke University, agrees. “Jobs that can be automated are always at risk. Society’s challenge is to keep moving up the ladder, into higher realms. We need to create new jobs that use human creativity rather than human labor. I admit that it’s difficult to conceive of the jobs of the future because we have no idea what technology will emerge and change the world. I doubt anyone could have predicted two decades ago that countries like India would go from being seen as lands of beggars and snake charmers to an employment threat for the developed world. Americans no longer tell their children to think about starving Indians before wasting the food on their plates; they tell them to study math and science or the Indians will take their white-collar jobs away.”

In addition to training up, others might simply retire. SU AI expert Neil Jacobstein explains, “Exponential technologies may eventually permit people to not need jobs to have a high standard of living. People will have many choices with how they utilize their time and develop a sense of self-esteem—ranging from leisure normally associated with retirement, to art, music, or even restoring the environment. The emphasis will be less on making money and more on making contributions, or at least creating an interesting life.”

This may seem a fairly future-forward opinion, but in a 2011 special report for CNN, media specialist Douglas Rushkoff argued that this transition is already under way:

I understand we all want paychecks—or at least money. We want food, shelter, clothing, and all the things that money buys us. But do we all really want jobs?

We’re living in an economy where productivity is no longer the goal, employment is. That’s because, on a very fundamental level, we have pretty much everything we need. America is productive enough that it could probably shelter, feed, educate, and even provide health care for its entire population with just a fraction of us actually working.

According to the UN Food and Agriculture Organization, there is enough food produced to provide everyone in the world with 2,720 kilocalories per person per day. And that’s even after America disposes of thousands of tons of crop and dairy just to keep market prices high. Meanwhile, American banks overloaded with foreclosed properties are demolishing vacant dwellings to get the empty houses off their books.

Our problem is not that we don’t have enough stuff—it’s that we don’t have enough ways for people to work and prove that they deserve this stuff.

Part of the problem is that most contemporary thinking about money and markets and such has its roots in the scarcity model. In fact, one of the most commonly used definitions of economics is “the study of how people make choices under conditions of scarcity, and the results of those choices for society.” As traditional economics (which believes that markets are equilibrium systems) gets replaced by complexity economics (which both fits the data significantly better and believes that markets are complex, adaptive systems), we may begin to uncover a postscarcity framework for assessment, but there’s no guarantee that such thinking will result in either more jobs or a different resource allocation system.

And this is merely where we are today. The bigger question is what happens once strong AI, ubiquitous robotics, and the Internet of everything—a combination that many feel will be able to handle every job in every market—comes online. Strong AI brings the possibility of computers with intelligence superior to that of humans, meaning that even the creative jobs that remain for us humans may soon be in jeopardy. “When you look at the possibility of us creating beings more intelligent than we are,” says Philip Rosedale, “there is a fear that if we are enslaved by our descendant machines, we will be forced to do things we like less than what we are doing now, but it seems hard to imagine exactly what those things would be. In an age of abundance, where we are increasingly exploiting cheaper and cheaper ways of creating and modeling the world around us (virtual reality or nanotech, for example), is there really anything we could do to help the machines, even if we are left behind as their ancestors? I would suggest that the most likely outcome is that even if we are faced with smarter machines than us being a part of our lives, we may exist on two sides of a sort of digital IQ divide, with our lives being relatively unaffected.”

So what’s left for the humans? I see two clear possibilities. In one future, society takes a turn for the Luddite. We take Bill Joy’s advice, follow the designs of the slow food movement, and begin to backtrack with the Amish. But this option will work only for those willing to forgo the vast benefits afforded by all this technology. This desire for the “good old days” will be tempered by the realities of disease, ignorance, and missed opportunities.

In the second future, the majority of humanity will end up merging with technology, enhancing themselves both physically and cognitively. A lot of people recoil at the sound of this, but this transformation has been going on for eons. The act of writing, for example, is simply the act of using technology to outsource memory. Eyeglasses, contact lenses, artificial body parts (stretching from the wooden peg leg to Scott Summit’s 3-D printed prosthetics), cosmetic implants, cochlear implants, the US Army’s “super soldier” program, and a thousand more examples have only continued this trend. As AI and robotics guru Marvin Minsky writes in Scientific American, “In the past, we have tended to see ourselves as a final product of evolution, but our evolution has not ceased. Indeed, we are now evolving more rapidly, though not in the familiar, slow Darwinian way. It is time that we started to think about our new emerging identities.”

Soon the vast majority of us will be augmented in one way or another, and this will thoroughly change the economic landscape. This newly enhanced self, plugged into the net, working in both virtual and physical worlds, will generate value for society in ways we cannot even imagine today. Right now four thousand people are making a living designing clothing for Second Life avatars, but the day is not far off when a great many of us will be using digital doppelgängers. So while four thousand people doesn’t sound like much of a market, what happens when avatars are representing us at international conferences and major business meetings? How much money are we spending on virtual clothing and accessories then?

Unstoppable

Considering the issues explored in the past few sections, Bill Joy’s suggestion “to limit development of the technologies that are too dangerous” doesn’t sound so bad. But the tools of yesterday are not designed to meet the problems of tomorrow. Given the gravity of these concerns and the continued march of technology, reining in our imaginations seems the worst possible plan for survival. We’re going to need those future tools to solve future problems if we’re serious about future survival.

Moreover, putting the brakes on technology just won’t work. As the Bush administration’s ban on human embryonic stem cell research bore out, attempting to silence technology in one place only drives it elsewhere. In an interview about the impact of that ban, Susan Fisher, a professor at the University of California, San Francisco, recently said, “Science is like a stream of water, because it finds its way. And now it has found its way outside the United States.” All the Bush pronouncement did was outsource what was originally a domestic product to countries like Sweden, Israel, Finland, South Korea, and the United Kingdom. What did the White House ban achieve? Only a reduction in US scientific preeminence.

There are also psychological reasons why it’s nearly impossible to stop the spread of technology—specifically, how do you squelch hope? Ever since we figured out how to make fire, technology has been how humans dream into the future. If 150,000 years of evolution is anything to go by, it’s how we dream up the future. People have a fundamental desire to have a better life for themselves and their families; technology is often how we make that happen. Innovation is woven into the fabric of who we are. We can no more stomp it out than we could shut off our instinct to survive. As Matt Ridley concludes in the final pages of The Rational Optimist, “It will be hard to snuff out the flame of innovation, because it is such an evolutionary, bottom-up phenomenon in such a networked world. So long as human exchange and specialization are allowed to thrive somewhere, then culture evolves whether leaders help it or hinder it, and the result is that prosperity spreads, technology progresses, poverty declines, disease retreats, fecundity falls, happiness increases, violence atrophies, freedom grows, knowledge flourishes, the environment improves, and wilderness expands.”

Sure, there are always going to be a few holdouts (again, the Amish), but the vast majority of us are here for the ride. And, as should be clear by now, it’s going to be quite a ride.