Chapter 11. RAY KURZWEIL

The scenario that I have is that we will send medical nanorobots into our bloodstream. [...] These robots will also go into the brain and provide virtual and augmented reality from within the nervous system rather than from devices attached to the outside of our bodies.

DIRECTOR OF ENGINEERING AT GOOGLE

Ray Kurzweil is one of the world’s leading inventors, thinkers, and futurists. He has received 21 honorary doctorates and honors from three US presidents. He is the recipient of the Lemelson-MIT Prize for innovation, and in 1999 he received the National Medal of Technology, the nation’s highest honor in technology, from President Clinton. Ray is also a prolific writer, having authored five national bestsellers. In 2012, Ray became a Director of Engineering at Google—heading up a team of engineers developing machine intelligence and natural language understanding.

MARTIN FORD: How did you come to start out in AI?

RAY KURZWEIL: I first got involved in AI in 1962, which was only 6 years after the term was coined by Marvin Minsky and John McCarthy at the 1956 Dartmouth Conference in Hanover, New Hampshire.

The field of AI had already bifurcated into two warring camps: the symbolic school and the connectionist school. The symbolic school was definitely in the ascendancy, with Marvin Minsky regarded as its leader. The connectionists were the upstarts, and one such person was Frank Rosenblatt at Cornell University, who had built the perceptron, the first neural net to be widely popularized. I wrote them both letters, and they both invited me to come up, so I first went to visit Minsky, where he spent all day with me and we struck up a rapport that would last for 55 years. We talked about AI, which at the time was a very obscure field that nobody was really paying attention to. He asked who I was going to see next, and when I mentioned Dr. Rosenblatt, he said that I shouldn’t bother.

I then went to see Dr. Rosenblatt, who had this single-layer neural net called the perceptron; it was a hardware device with a camera. I brought some printed letters to the meeting, and his device recognized them perfectly as long as they were in Courier 10.

Other type styles didn’t work as well, and he said, “Don’t worry, I can take the output of the perceptron and feed it as the input to a secondary perceptron, then we can take the output of that and feed it to a third layer, and as we add layers it’ll get smarter and generalize and be able to do all these remarkable things.” I responded by asking, “Have you tried that?”, and he said, “Well, not yet, but it’s high on our research agenda.”

Things didn’t move quite as quickly back in the 1960s as they do today, and sadly he died 9 years later, in 1971, never having tried that idea. The idea was remarkably prescient, however. All of the excitement we see now in neural nets is due to these deep neural networks with many layers. It was a pretty remarkable insight, as it really was not obvious that it would work.

In 1969, Minsky wrote his book, Perceptrons, with his colleague, Seymour Papert. The book basically proved a theorem that a perceptron could not devise answers that required the use of the XOR logical function, nor could it solve the connectedness problem. There are two maze-like images on the cover of that book, and if you look carefully, you can see that one is fully connected and the other is not. Making that classification is called the connectedness problem. The theorem proved that a perceptron could not do that. The book was very successful in killing all funding for connectionism for the next 25 years, which is something Minsky regretted; shortly before he died, he told me that he now appreciated the power of deep neural nets.
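
The XOR limitation is easy to see in code. Below is a minimal sketch (an illustration, not from the book) of the classic perceptron learning rule failing on XOR, because the four XOR points are not linearly separable:

```python
# A single-layer perceptron cannot compute XOR: no line separates the
# points where XOR is 1 from the points where it is 0.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # XOR truth table

w, b = np.zeros(2), 0.0
for epoch in range(1000):  # classic perceptron learning rule
    errors = 0
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)
        update = target - pred
        w += update * xi
        b += update
        errors += abs(update)
    if errors == 0:  # never reached for XOR
        break

preds = [int(w @ xi + b > 0) for xi in X]
print(preds, "vs target", list(y))  # at least one of the four is always wrong
```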

MARTIN FORD: Marvin Minsky did work on early connectionist neural nets back in the ‘50s, though, right?

RAY KURZWEIL: That’s right, but he became disillusioned with them by the 1960s, and he really didn’t appreciate the power of multi-layer neural nets. That power was not apparent until decades later, when 3-layer neural nets were tried and worked somewhat better. There was a problem with going to many more layers, though, because of the exploding gradient and vanishing gradient problems, which is basically where the values being propagated fall outside a usable dynamic range because the numbers get too big or too small.

Geoffrey Hinton and a group of mathematicians solved that problem, and now we can go to any number of layers. Their solution was to recalibrate the information after each level so that it doesn’t outstrip the range of values that can be represented, and these 100-layer neural nets have been very successful. There’s still a problem though, which is summarized by the motto, “Life begins at a billion examples.”
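
Kurzweil’s “recalibrate after each level” description loosely corresponds to what is now called batch or layer normalization. A minimal sketch (my illustration under that reading, not Google’s code) of per-layer renormalization rescuing a deliberately badly scaled 100-layer stack:

```python
# Without renormalization, activations in a badly scaled deep stack shrink
# toward zero (the "vanishing" regime); renormalizing after each layer keeps
# values in a usable dynamic range.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 64))  # a batch of 32 activation vectors

def layer(h, normalize):
    w = rng.normal(scale=0.05, size=(64, 64))  # deliberately under-scaled weights
    h = np.tanh(h @ w)
    if normalize:  # recalibrate: zero mean, unit variance per feature
        h = (h - h.mean(axis=0)) / (h.std(axis=0) + 1e-5)
    return h

for normalize in (False, True):
    h = x
    for _ in range(100):  # a 100-layer stack
        h = layer(h, normalize)
    print(f"normalize={normalize}: activation std after 100 layers = {h.std():.3e}")
# Without normalization the signal collapses toward zero; with it, std stays ~1.
# (Over-scaled weights would instead saturate or blow up.)
```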

One of the reasons I’m here at Google is that we do have a billion examples of some things, like pictures of dogs and cats and other image categories that are annotated, but there are also lots of things we don’t have a billion examples of. We have lots of examples of language, but they’re not annotated with what they mean, and how could we annotate them anyway, using language that we can’t understand in the first place? There’s a certain category of problems where we can work around that, and playing Go is a good example. The DeepMind system was trained on all of the online moves, which is on the order of a million moves. That’s not a billion. That created a fair amateur player, but they need another 999 million examples, so where are they going to get them from?

MARTIN FORD: What you’re getting at is that deep learning right now is very dependent on labeled data and what’s called supervised learning.

RAY KURZWEIL: Right. One way to work around it is that if you can simulate the world you’re working in, then you can create your own training data, and that’s what DeepMind did by having the system play itself. They could annotate the moves with traditional annotation methods. Subsequently, AlphaZero actually trained a neural net to improve on the annotation, so it was able to defeat AlphaGo 100 games to 0, starting with no human training data.
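
As an illustration of the self-play idea, here is a toy sketch (mine, not DeepMind’s system): when you can simulate the environment, you can manufacture unlimited labeled training data by having the system play itself, here with random self-play at tic-tac-toe.

```python
# Generate (position, eventual-outcome) training pairs from pure self-play
# in a simulated game, with no human data required.
import random

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_game():
    board, player, history = [" "] * 9, "X", []
    while winner(board) is None and " " in board:
        move = random.choice([i for i, s in enumerate(board) if s == " "])
        board[move] = player
        history.append(tuple(board))
        player = "O" if player == "X" else "X"
    return history, winner(board) or "draw"

# Every position seen gets labeled with the game's eventual outcome.
dataset = []
for _ in range(10_000):
    history, outcome = self_play_game()
    dataset.extend((pos, outcome) for pos in history)
print(len(dataset), "labeled positions generated from self-play alone")
```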

The question is, in what situations can you do that? For example, another situation where we can do that is math, because we can simulate math. The axioms of number theory are no more complicated than the rules of Go.

Another situation is self-driving cars, even though driving is much more complex than a board game or the axioms of a math system. The way that worked is that Waymo created a pretty good system with a combination of methods and then drove millions of miles with humans at the wheel ready to take over. That generated enough data to create an accurate simulator of the world of driving. They’ve now driven on the order of a billion miles with simulated vehicles in the simulator, which has generated training data for a deep neural net designed to improve the algorithms. This has worked even though the world of driving is much more complex than a board game.

The next exciting area to attempt to simulate is the world of biology and medicine. If we could simulate biology, and it’s not impossible, then we could do clinical trials in hours rather than years, and we could generate our own data just like we’re doing with self-driving cars or board games or math.

That’s not the only approach to the problem of providing sufficient training data. Humans can learn from much less data because we engage in transfer learning, drawing on what we have learned in situations that may be fairly different from the one at hand. I have a different model of learning based on a rough idea of how the human neocortex works. In 1962, I came up with a thesis on how I thought the human brain works, and I’ve been thinking about thinking for the last 50 years. My model is not one big neural net, but rather many small modules, each of which can recognize a pattern. In my book, How to Create a Mind, I describe the neocortex as basically 300 million of those modules, each of which can recognize a sequential pattern and accept a certain amount of variability. The modules are organized in a hierarchy, which is created through their own thinking: the system creates its own hierarchy.

That hierarchical model of the neocortex can learn from much less data. It’s the same with humans. We can learn from a small amount of data because we can generalize information from one domain to another.
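
A highly simplified sketch of the module-hierarchy idea may help. This is a toy illustration of the description above (small modules, each matching one sequential pattern with tolerance for variability, feeding the next level), not Kurzweil’s actual model:

```python
# Many small pattern recognizers, arranged in levels: lower modules recognize
# character sequences (words) with some tolerance; a higher module recognizes
# the sequence of lower-module activations.
from difflib import SequenceMatcher

class PatternModule:
    """Recognizes one sequence, accepting a certain amount of variability."""
    def __init__(self, name, pattern, threshold=0.8):
        self.name, self.pattern, self.threshold = name, pattern, threshold

    def fires(self, sequence):
        return SequenceMatcher(None, self.pattern, sequence).ratio() >= self.threshold

# Level 1: modules that recognize words, tolerating typos.
level1 = [PatternModule("the", "the"), PatternModule("cat", "cat"),
          PatternModule("sat", "sat")]

# Level 2: a module that recognizes a sequence of level-1 activations.
level2 = PatternModule("the-cat-sat", ("the", "cat", "sat"), threshold=1.0)

def recognize(words):
    activations = tuple(m.name for w in words for m in level1 if m.fires(w))
    return level2.fires(activations)

print(recognize(["the", "caat", "sat"]))  # True: "caat" still fires the "cat" module
print(recognize(["the", "dog", "sat"]))   # False: no module fires for "dog"
```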

Larry Page, one of the co-founders of Google, liked my thesis in How to Create a Mind and recruited me to Google to apply those ideas to understanding language.

MARTIN FORD: Do you have any real-world examples of applying those concepts to a Google product?

RAY KURZWEIL: Smart Reply on Gmail (which provides three suggestions to reply to each email) is one application from my team that uses this hierarchical system. We just introduced Talk to Books (https://books.google.com/talktobooks/), where you ask a question in natural language and the system then reads 100,000 books in a half-second—that’s 600 million sentences—and then returns the best answers that it can find from those 600 million sentences. It’s all based on semantic understanding, not keywords.
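
The difference between keyword and semantic search is easy to demonstrate with sentence embeddings. The following is a minimal sketch of the general technique, using the open-source sentence-transformers library and its all-MiniLM-L6-v2 model as stand-ins; Google’s actual Talk to Books models and index are not public:

```python
# Semantic retrieval: embed sentences as vectors once, then answer queries
# by cosine similarity rather than keyword overlap.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "The heart pumps blood through the circulatory system.",
    "Interest rates influence how much it costs to borrow money.",
    "Neurons communicate through electrical and chemical signals.",
]
query = "How does the brain send messages?"

corpus_emb = model.encode(sentences, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_emb, corpus_emb)[0]
best = int(scores.argmax())
print(sentences[best])  # matches the neuron sentence despite sharing no keywords
```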

At Google we’re making progress in natural language, and language was the first invention of the neocortex. Language is hierarchical; we can share the hierarchical ideas we have in our neocortex with each other using the hierarchy of language. I think Alan Turing was prescient in basing the Turing test on language because I think it does require the full range of human thinking and human intelligence to create and understand language at human levels.

MARTIN FORD: Is your ultimate objective to extend this idea to actually build a machine that can pass the Turing test?

RAY KURZWEIL: Not everybody agrees with this, but I think the Turing test, if organized correctly, is actually a very good test of human-level intelligence. The issue is that in the brief paper Turing wrote in 1950, only a couple of paragraphs actually deal with the Turing test, and he left out vital elements. For example, he didn’t describe how to actually go about administering the test. The rules of the test are very complicated when you actually administer it, but if a computer is to pass a valid Turing test, I believe it will need to have the full range of human intelligence. Understanding language at human levels is the ultimate goal. If an AI could do that, it could read all documents and books and learn everything else. We’re getting there a little bit at a time. We can understand enough of the semantics, for example, to enable our Talk to Books application to come up with reasonable answers to questions, but it’s still not at human levels. Mitch Kapor and I have a long-range bet on this for $20,000, with the proceeds to go to the charity of the winner’s choice. I’m saying that an AI will pass the Turing test by 2029, whereas he’s saying no.

MARTIN FORD: Would you agree that for the Turing test to be an effective test of intelligence, there probably shouldn’t be a time limit at all? Just tricking someone for 15 minutes seems like a gimmick.

RAY KURZWEIL: Absolutely, and if you look at the rules that Mitch Kapor and I came up with, we gave a number of hours, and maybe even that’s not enough time. The bottom line is that if an AI is really convincing you that it’s human, then it passes the test. We can debate how long that needs to be—probably several hours if you have a sophisticated judge—but I agree that if the time is too short, then you might get away with simple tricks.

MARTIN FORD: I think it’s easy to imagine an intelligent computer that just isn’t very good at pretending to be human because it would be an alien intelligence. So, it seems likely that you could have a test where everyone agreed that the machine was intelligent, even though it didn’t actually seem to be human. And we would probably want to recognize that as an adequate test as well.

RAY KURZWEIL: Whales and octopi have large brains, and they exhibit intelligent behavior, but they’re obviously not in a position to pass the Turing test. A Chinese person who speaks Mandarin and not English would not pass the English Turing test, so there are lots of ways to be intelligent without passing the test. The key statement is the converse: in order to pass the test, you have to be intelligent.

MARTIN FORD: Do you believe that deep learning, combined with your hierarchical approach, is really the way forward, or do you think there needs to be some other massive paradigm shift in order to get us to AGI/human-level intelligence?

RAY KURZWEIL: No, I think humans use this hierarchical approach. Each of these modules is capable of learning, and I actually make the case in my book that in the brain they’re not doing deep learning in each module, they’re doing something equivalent to a Markov process, but it actually is better to use deep learning.

In our systems at Google, we use deep learning to create vectors that represent the patterns in each module, and then we have a hierarchy that goes beyond the deep learning paradigm. I think that’s sufficient for AGI, though. The hierarchical approach is how the human brain does it, in my view, and there’s a lot of evidence for that now from the brain reverse-engineering projects.

There’s an argument that human brains follow a rule-based system rather than a connectionist one. People point out that humans are capable of having sharp distinctions and we’re capable of doing logic. A key point is that connectionism can emulate a rule-based approach. A connectionist system in a certain situation might be so certain of its judgment that it looks and acts like a rule-based system, but then it’s also able to deal with rare exceptions and the nuances of its apparent rules.

A rule-based system really cannot emulate a connectionist system, so the converse statement is not the case. Doug Lenat’s “Cyc” is an impressive project, but I believe that it proves the limitations of a rule-based system. You reach a complexity ceiling, where the rules get so complex that if you try to fix one thing, you break three other things.

MARTIN FORD: Cyc is the project where people are manually trying to enter logic rules for common sense?

RAY KURZWEIL: Right. I’m not sure of the count, but they have a vast number of rules. They had a mode where it could print out its reasoning for a behavior, and the explanations would go on for a number of pages and were very hard to follow. It’s impressive work, but it does show that this is really not the approach, at least not by itself, and it’s not how humans achieve intelligence. We don’t have cascades of rules that we go through; we have this hierarchical, self-organizing approach.

I think another advantage of a hierarchical but connectionist approach is that it’s better at explaining itself, because you can look at the modules in the hierarchy and see which module influences which decision. When you have these massive 100-layer neural nets, they act like a big black box, and it’s very hard to understand their reasoning, though there have been some attempts to do that. I do think that this hierarchical spin on a connectionist approach is an effective approach, and that’s how humans think.

MARTIN FORD: There are some structures, though, in the human brain, even at birth. For example, babies can recognize faces.

RAY KURZWEIL: We do have some feature generators. For example, in our brains we have this module called the fusiform gyrus that contains specialized circuitry and computes certain ratios, like the ratio of the tip of the nose to the end of the nose, or the distance between the eyes. There is a set of a dozen or so fairly simple features, and experiments have shown that if we generate those features from images and then generate new images that have the same features—the same ratios—then people will immediately recognize them as a picture of that same person, even though other details have changed quite a bit in the image. There are various feature generators like that, including some that work on audio information, computing certain ratios and recognizing partial overtones, and these features then feed into the hierarchical connectionist system. So, it is important to understand these feature generators, and there are some very specific features involved in recognizing faces, and that’s what babies rely on.
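
The ratio idea can be made concrete. Here is an illustrative sketch (with hypothetical landmark names, and not a model of the fusiform gyrus itself) showing why ratios of distances, unlike raw distances, are unchanged when an image is rescaled:

```python
# Ratio features: distances between facial landmarks are reduced to
# scale-invariant ratios, so the same face yields the same features
# regardless of image size.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def face_features(landmarks):
    """landmarks: dict with 'left_eye', 'right_eye', 'nose_tip', 'mouth' points."""
    eye_gap = dist(landmarks["left_eye"], landmarks["right_eye"])
    eye_to_nose = dist(landmarks["left_eye"], landmarks["nose_tip"])
    nose_to_mouth = dist(landmarks["nose_tip"], landmarks["mouth"])
    # Ratios, not raw distances: unchanged when the image is scaled.
    return (eye_to_nose / eye_gap, nose_to_mouth / eye_gap)

face = {"left_eye": (30, 40), "right_eye": (70, 40),
        "nose_tip": (50, 60), "mouth": (50, 80)}
# The same face photographed at twice the size:
scaled = {k: (2 * x, 2 * y) for k, (x, y) in face.items()}

print(face_features(face), face_features(scaled))  # identical ratios at both scales
```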

MARTIN FORD: I’d like to talk about the path and the timing for Artificial General Intelligence (AGI). I’m assuming AGI and human-level AI are equivalent terms.

RAY KURZWEIL: They’re synonyms, and I don’t like the term AGI because I think it’s an implicit criticism of AI. The goal of AI has always been to achieve greater and greater intelligence and ultimately to reach human levels of intelligence. As we’ve progressed, though, we’ve spun off separate fields. For example, once we mastered recognizing characters, it became the separate field of OCR. The same happened with speech recognition and robotics, and it was felt that the overarching field of AI was no longer focusing on general intelligence. My view is always that we’ll get to general intelligence step by step by solving one problem at a time.

Another bit of color on that is that human performance in any type of task covers a very broad range. What is the human performance level in Go? It’s a broad range, from a child who’s playing their first game to the world champion. One thing we’ve seen is that once a computer can achieve human levels, even at the low end of that range, it very quickly soars past human performance. A little over a year ago, computers were playing Go at a low level, and then they quickly soared past that. More recently, AlphaZero soared past AlphaGo and beat it 100 games to 0 after training for a few hours.

Computers are also improving in their language understanding, but not at the same rate, because they don’t yet have sufficient real-world knowledge. Computers currently can’t do multi-chain reasoning very well: taking inferences from multiple statements while at the same time considering real-world knowledge. For example, on a third-grade language understanding test, a computer didn’t understand that if a boy had muddy shoes, he probably got them muddy by walking in the mud outside, and that if he got mud on the kitchen floor, it would make his mother mad. That may all seem obvious to us humans because we may have experienced it, but it’s not obvious to the AI.

I don’t think the move from the average adult-level performance that computers now show on some language tests to superhuman performance will be as quick, because I think there are more fundamental issues to solve first. Nonetheless, human performance is a broad range, as we’ve seen, and once computers get into that range, they can ultimately soar past it to become superhuman. The fact that they’re performing at any kind of adult level in language understanding is very impressive, because I feel that language requires the full range of human intelligence and has the full range of human ambiguity and hierarchical thinking. To sum up, yes, AI is making very rapid progress, and yes, all of this is using connectionist approaches.

I just had a discussion with my team here about what we have to do to pass the Turing test beyond what we’ve already done. We already have some level of language understanding. One key requirement is multi-chain reasoning—being able to consider the inferences and implications of concepts—that’s a high priority. That’s one area where chatbots routinely fail.

If I say I’m worried about my daughter’s performance in nursery school, you wouldn’t want the system to ask, three turns later, “Do you have any children?” Chatbots do that kind of thing because they’re not considering all the inferences of everything that has been said. As I mentioned, there is also the issue of real-world knowledge, but if we could understand all the implications of language, then real-world knowledge could be gained by reading and understanding the many documents available online. I think we have very good ideas on how to do those things, and we have plenty of time to do them.

MARTIN FORD: You’ve been very straightforward for a long time that the year when you think human-level AI is going to arrive is 2029. Is that still the case?

RAY KURZWEIL: Yes. In my book The Age of Intelligent Machines, which came out in 1989, I put a range around 2029, plus or minus a decade or so. In 1999, I published The Age of Spiritual Machines and made the specific prediction of 2029. Stanford University held a conference of AI experts to deal with this apparently startling prediction. At that time, we didn’t have instant polling machines, so we basically had a show of hands. The consensus view then was that it would take hundreds of years, with about a quarter of the group saying it would never happen.

In 2006, there was a conference at Dartmouth College celebrating the 50th anniversary of the 1956 Dartmouth conference, which I mentioned earlier, and there we did have instant polling devices; the consensus was about 50 years. Twelve years later, in 2018, the consensus view is about 20 to 30 years, so anywhere from 2038 to 2048. I’m still more optimistic than the consensus of AI experts, but only slightly. My view and the consensus view of AI experts are getting closer together, but not because I’ve changed my view. There’s a growing group of people who think I’m too conservative.

MARTIN FORD: 2029 is only 11 years away, which is not that far away really. I have an 11-year-old daughter, which really brings it into focus.

RAY KURZWEIL: The progress is exponential; look at the startling progress just in the last year. We’ve made dramatic advances in self-driving cars, language understanding, playing Go, and many other areas. The pace is very rapid, both in hardware and software. In hardware, the exponential progression for deep learning is even faster than for computation generally: we have been doubling the computation available for deep learning every three months over the past few years, compared to a doubling time of one year for computation in general.

MARTIN FORD: Some very smart people with a deep knowledge of AI are still predicting that it will take over 100 years, though. Do you think that is because they are falling into that trap of thinking linearly?

RAY KURZWEIL: A) they are thinking linearly, and B) they are subject to what I call the engineer’s pessimism—being so focused on one problem, and feeling that it’s really hard because they haven’t solved it yet, that they extrapolate as if they alone were going to solve the problem at the pace they’re working at. It’s a whole different discipline to consider the pace of progress in a field and how ideas interact with each other, and to study that as a phenomenon. Some people are just not able to grasp the exponential nature of progress, particularly when it comes to information technology.

Halfway through the human genome project’s 15-year schedule, 1% of the genome had been collected after 7 years, and mainstream critics said, “I told you this wasn’t going to work. 1% in 7 years means it’s going to take 700 years, just like we said.” My reaction was, “We finished one percent—we’re almost done. We’re doubling every year. 1% is only 7 doublings from 100%.” And indeed, it was finished 7 years later.
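
The arithmetic is worth spelling out: exponential growth that has reached only 1% is just seven doublings from completion.

```python
# "1% is only 7 doublings from 100%": growth that looks negligible partway
# through is actually almost finished.
progress = 0.01  # 1% of the genome, after 7 years
years = 0
while progress < 1.0:  # one doubling per year
    progress *= 2
    years += 1
print(years)  # 7 -- consistent with the project finishing 7 years later
```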

A key question is: why do some people readily get this, and other people don’t? It’s definitely not a function of accomplishment or intelligence. Some people who are not in professional fields understand this very readily because they can experience this progress just in their smartphones, while other people who are very accomplished and at the top of their field have this very stubborn linear thinking. So, I really don’t have an answer for that.

MARTIN FORD: You would agree, though, that it’s not just about exponential progress in terms of computing speed or memory capacity? There are clearly some fundamental conceptual breakthroughs that have to happen in terms of teaching computers to learn from real-time, unstructured data the way that human beings do, or in reasoning and imagination.

RAY KURZWEIL: Well, progress in software is also exponential, even though it has that unpredictable aspect that you’re alluding to. There’s a cross-fertilization of ideas that is inherently exponential, and once we have established performance at one level, ideas emerge to get to the next level.

There was a study done by the Obama administration’s scientific advisory board on this question. They examined how hardware and software progress compare. They took a dozen classical engineering and technical problems and looked at the advances quantitatively to see how much was attributable to hardware. Generally, over the previous 10 years from that point, it was about 1,000 to 1 in hardware, which is consistent with price-performance doubling every year. The software, as you might expect, varied, but in every case the gain was greater than for hardware. Advances tend to be exponential: if you make an advance in software, it doesn’t progress linearly; it progresses exponentially. And the overall progress is the product of the progress in hardware and software.

MARTIN FORD: The other date that you’ve given as a projection is 2045 for what you referred to as the singularity. I think most people associate that with an intelligence explosion or the advent of a true superintelligence. Is that the right way to think about it?

RAY KURZWEIL: There are actually two schools of thought on the singularity: a hard-takeoff school and a soft-takeoff school. I’m actually in the soft-takeoff school, which says we will continue to progress exponentially, which is daunting enough. The idea of an intelligence explosion is that there is a magic moment where a computer can access its own design and modify it and create a smarter version of itself, and that it keeps doing that in a very fast iterative loop and just explodes in its intelligence.

I think we’ve actually been doing that for thousands of years, ever since we created technology. We are certainly smarter as a result of our technology. Your smartphone is a brain extender, and it does make us smarter. It’s an exponential process. A thousand years ago, paradigm shifts and advances took centuries, and it looked like nothing was happening: people lived the same lives their grandparents did and expected their grandchildren to do the same. Now, we see changes on an annual basis if not faster. It is exponential, and that results in an acceleration of progress, but it’s not an explosion in that sense.

I think we will achieve a human level of intelligence by 2029, and it’s immediately going to be superhuman. Take, for example, our Talk to Books: you ask it a question, and it reads 600 million sentences, 100,000 books, in half a second. Personally, it takes me hours to read 100,000 books!

Your smartphone right now is able to do searching based on keywords and other methods and search all human knowledge very quickly. Google search already goes beyond keyword search and has some semantic capability. The semantic understanding is not yet at human levels, but it’s a billion times faster than human thinking. And both the software and the hardware will continue to improve at an exponential pace.

MARTIN FORD: You’re also well known for your thoughts on using technology to expand and extend human life. Could you let me know more about that?

RAY KURZWEIL: One thesis of mine is that we’re going to merge with the intelligent technology that we are creating. The scenario that I have is that we will send medical nanorobots into our bloodstream. One application of these medical nanorobots will be to extend our immune systems. That’s what I call the third bridge to radical life extension. The first bridge is what we can do now, and bridge two is the perfecting of biotechnology and the reprogramming of the software of life. Bridge three is these medical nanorobots, which will perfect the immune system. These robots will also go into the brain and provide virtual and augmented reality from within the nervous system rather than from devices attached to the outside of our bodies. The most important application of the medical nanorobots is that we will connect the top layers of our neocortex to synthetic neocortex in the cloud.

MARTIN FORD: Is this something that you’re working on at Google?

RAY KURZWEIL: The projects I have done with my team here at Google use what I would call crude simulations of the neocortex. We don’t have a perfect understanding of the neocortex yet, but we’re approximating it with the knowledge we have now. We are able to do interesting applications with language now, but by the early 2030s we’ll have very good simulations of the neocortex.

Just as your phone makes itself a million times smarter by accessing the cloud, we will do that directly from our brains. It’s something that we already do through our smartphones, even though they’re not inside our bodies and brains, which I think is an arbitrary distinction. We use our fingers and our eyes and ears, but they are nonetheless brain extenders. In the future, we’ll be able to do this directly from our brains: not just to perform tasks like search and language translation, but to actually connect the top layers of our neocortex to synthetic neocortex in the cloud.

Two million years ago, we didn’t have these large foreheads, but as we evolved we got a bigger enclosure to accommodate more neocortex. What did we do with that? We put it at the top of the neocortical hierarchy. We were already doing a very good job at being primates, and now we were able to think at an even more abstract level.

That was the enabling factor for us to invent technology, science, language, and music. Every human culture that we have discovered has music, but no primate culture has music. Now that was a one-shot deal, we couldn’t keep growing the enclosure because birth would have become impossible. This neocortical expansion two million years ago actually made birth pretty difficult as it was.

This new extension in the 2030s to our neocortex will not be a one-shot deal. Even as we speak, the cloud is doubling in power every year. It’s not limited by a fixed enclosure, so the non-biological portion of our thinking will continue to grow. If we do the math, we will multiply our intelligence a billion-fold by 2045, and that’s such a profound transformation that it’s hard to see beyond that event horizon. So, we’ve borrowed this metaphor from physics of the event horizon and the difficulty of seeing beyond it.

Technologies such as Google Search and Talk to Books are at least a billion times faster than humans. They’re not at human levels of intelligence yet, but once we get to that point, AI will take advantage of the enormous speed advantage that already exists and an ongoing exponential increase in capacity and capability. So that’s the meaning of the singularity: it’s a soft takeoff, but exponentials nonetheless become quite daunting. If you double something 30 times, you’re multiplying by a billion.

MARTIN FORD: One of the areas where you’ve talked a lot about the singularity having an impact is in medicine, and especially in the longevity of human life, and this is maybe one area where you’ve been criticized. I heard a presentation you gave at MIT last year where you said that within 10 years most people might be able to achieve what you call “longevity escape velocity,” and you also said that you think you personally might have achieved that already. Do you really believe it could happen that soon?

RAY KURZWEIL: We are now at a tipping point in terms of biotechnology. People look at medicine and assume that it is just going to plod along at the same hit-or-miss pace that they have been used to in the past. Medical research has essentially been hit or miss: drug companies will go through a list of several thousand compounds to find something that has some impact, as opposed to actually understanding and systematically reprogramming the software of life.

It’s not just a metaphor to say that our genetic processes are software. It is a string of data, and it evolved in an era where it was not in the interest of the human species for each individual to live very long, because there were limited resources such as food. We are transforming from an era of scarcity to an era of abundance.

Every aspect of biology as an information process has been doubling in power every year. For example, genetic sequencing has done that: the first genome cost US $1 billion, and now we’re close to $1,000. But our ability to not only collect this raw object code of life but to understand it, to model it, to simulate it, and most importantly to reprogram it, is also doubling in power every year.

We’re now getting clinical applications—it’s a trickle today, but it’ll be a flood over the next decade. There are hundreds of profound interventions in process that are working their way through the regulatory pipeline. We can now fix a broken heart from a heart attack, that is, rejuvenate a heart with a low ejection fraction after a heart attack using reprogrammed adult stem cells. We can grow organs and are installing them successfully in primates. Immunotherapy is basically reprogramming the immune system. On its own, the immune system does not go against cancer because it did not evolve to go after diseases that tend to get us later on in life. We can actually reprogram it and turn it on to recognize cancer and treat it as a pathogen. This is a huge bright spot in cancer treatment, and there are remarkable trials where virtually every person in the trial goes from stage 4 terminal cancer to being in remission.

Medicine is going to be profoundly different in a decade from now. If you’re diligent, I believe you will be able to achieve longevity escape velocity, which means that we’ll be adding more time than is going by, not just to infant life expectancy but to your remaining life expectancy. It’s not a guarantee, because you can still be hit by the proverbial bus tomorrow, and life expectancy is actually a complicated statistical concept, but the sands of time will start running in rather than running out. In another decade further out, we’ll be able to reverse aging processes as well.

MARTIN FORD: I want to talk about the downsides and the risks of AI. I would say that sometimes you are unfairly criticized as being overly optimistic, maybe even a bit Pollyannaish, about all of this. Is there anything we should worry about in terms of these developments?

RAY KURZWEIL: I’ve written more about the downsides than anyone, and I did so decades before Stephen Hawking or Elon Musk began expressing their concerns. There was extensive discussion of the downsides of GNR—Genetics, Nanotechnology, and Robotics (which means AI)—in my book The Age of Spiritual Machines, which came out in 1999, and that discussion led Bill Joy to write his famous Wired cover story in January 2000, titled Why the Future Doesn’t Need Us.

MARTIN FORD: That was based upon a quote from Ted Kaczynski, the Unabomber, wasn’t it?

RAY KURZWEIL: I have a quote from him on one page that sounds like a very level-headed expression of concern, and then you turn the page, and you see that this is from the Unabomber Manifesto. I discussed in quite some detail in that book the existential risk of GNR. In my 2005 book, The Singularity is Near, I go into the topic of GNR risks in a lot of detail. Chapter 8 is titled, “The Deeply Intertwined Promise versus Peril of GNR.”

I’m optimistic that we’ll make it through as a species. We get far more profound benefit than harm from technology, but you don’t have to look very far to see the profound harm that has manifested itself, for example, in all of the destruction in the 20th century—even though the 20th century was actually the most peaceful century up to that time, and we’re in a far more peaceful time now. The world is getting profoundly better, for example, poverty has been cut 95% in the last 200 years and literacy rates have gone from under 10% to over 90% in the world.

People’s algorithm for whether the world is getting better or worse is “How often do I hear good news versus bad news?”, and that’s not a very good method. There was a poll taken of 24,000 people in about 26 countries asking the question, “Has poverty worldwide gotten better or worse over the last 20 years?” 87% said, incorrectly, that it’s getting worse. Only 1% said, correctly, that it has fallen by half or more in the last 20 years. Humans have an evolutionary preference for bad news. 10,000 years ago, it was very important that you paid attention to bad news, for example, that little rustling in the leaves that might be a predator. That was more important to pay attention to than noticing that your crops were half a percent better than last year, and we continue to have this preference for bad news.

MARTIN FORD: There’s a step-change, though, between real risks and existential risks.

RAY KURZWEIL: Well, we’ve also done reasonably well with existential risks from information technology. Forty years ago, a group of visionary scientists saw both the promise and the peril of biotechnology, neither of which was close at hand at the time, and they held the first Asilomar Conference on biotechnology ethics. These ethical standards and strategies have been updated on a regular basis. That has worked very well. The number of people who have been harmed by intentional or accidental abuse or problems with biotechnology has been close to zero. We’re now beginning to get the profound benefit that I alluded to, and that’s going to become a flood over the next decade.

That’s a success for this approach of comprehensive ethical standards, and technical strategies on how to keep the technology safe, and much of that is now baked into law. That doesn’t mean we can cross danger from biotechnology off our list of concerns; we keep coming up with more powerful technologies like CRISPR and we have to keep reinventing the standards.

We had the first Asilomar conference on AI ethics about 18 months ago, where we came up with a set of ethical standards. I think they need further development, but it’s an overall approach that can work. We have to give it a high priority.

MARTIN FORD: The concern that’s really getting a lot of attention right now is what’s called the control problem or the alignment problem, where a superintelligence might not have goals that are aligned with what’s best for humanity. Do you take that seriously, and should work be done on that?

RAY KURZWEIL: Humans don’t all have goals aligned with each other, and that’s really the key issue. It’s a misconception to talk about AI as a civilization apart, as if it’s an alien invasion from Mars. We create tools to extend our own reach. We couldn’t reach food at that higher branch 10,000 years ago, so we made a tool that extended our reach. We can’t build a skyscraper with our bare hands, so we have machines that leverage the range of our muscles. A kid in Africa with a smartphone is connected to all of human knowledge with a few keystrokes.

That is the role of technology; it enables us to go beyond our limitations, and that’s what we are doing and will continue to do with AI. It’s not us versus the AIs, which has been the theme of many AI futurist dystopian movies. We are going to merge with it. We already have. The fact that your phone is not physically inside your body and brain is a distinction without a difference, because it may as well be. We don’t leave home without it, we’re incomplete without it, nobody could do their work, get their education, or keep their relationships without their devices today, and we’re getting more intimate with them.

I went to MIT because it was so advanced in 1965 that it had a computer. I had to take my bicycle across the campus to get to it and show my ID to get into the building, and now half a century later we’re carrying them in our pockets, and we’re using them constantly. They are integrated into our lives and will ultimately become integrated into our bodies and brains.

If you look at the conflict and warfare we’ve had over the millennia, it’s come from humans having disagreements. I do think technology tends to actually create greater harmony, peace, and democratization. You can trace the rise of democratization to improvements in communication. Two centuries ago, there was only one democracy in the world. There were half a dozen democracies one century ago. Now there are 123 democracies out of 192 recognized countries; that’s 64% of the world. The world’s not a perfect democracy, but democracy has actually been accepted as the standard today. It is the most peaceful time in human history, and every aspect of life is getting better, and this is due to the effect of technology, which is becoming increasingly intelligent and is deeply integrated into who we are.

We have conflict today between different groups of humans, each of whom are amplified by their technology. That will continue to be the case, although I think there’s this other theme that better communication technology harnesses our short-range empathy. We have a biological empathy for small groups of people, but that’s now amplified by our ability to actually experience what happens to people half a world away. I think that’s the key issue; we still have to manage our human relations as we increase our personal powers through technology.

MARTIN FORD: Let’s talk about the potential for economic and job market disruption. I personally do think there’s a lot of potential for jobs to be lost or deskilled and for greatly increasing inequality. I actually think it could be something that will be disruptive on the scale of a new Industrial Revolution.

RAY KURZWEIL: Let me ask you this: how did that last Industrial Revolution work out? Two hundred years ago, the weavers enjoyed a guild whose skills had been passed down from generation to generation for hundreds of years. Their business model was turned on its head when all these thread-spinning and cloth-weaving machines came out and completely upended their livelihoods. They predicted that more machines would come out, that most people would lose their jobs, and that employment would be enjoyed just by an elite. Part of that prediction came true: more textile machines were introduced, and many types of skills and jobs were eliminated. However, employment went up, not down, as society became more prosperous.

If I were a prescient futurist in 1900 I would point out that 38% of you work on farms and 25% of you work in factories, but I predict that 115 years from now, in 2015, that’ll be 2% on farms, and 9% in factories. Everybody’s reaction would be, “Oh my god I’m going to be out of work!” I would then say “Don’t worry, the jobs that are eliminated are at the bottom of the skill ladder, and we are going to create an even larger number of jobs at the top of the skill ladder.”

People would say, “Oh really, what new jobs?”, and I’d say, “Well, I don’t know, we haven’t invented them yet.” People say we’ve destroyed many more jobs than we’ve created, but that’s not true; we’ve gone from 24 million jobs in 1900 to 142 million jobs today, and as a percentage of the population, that’s a rise from 31% to 44%. How do these new jobs compare? Well, for one thing, the average job today pays 11 times as much per hour, in constant dollars, as in 1900. As a result, we’ve shortened the work year from about 3,000 hours to 1,800 hours, and people still make 6 times as much per year in constant dollars, and the jobs have become much more interesting. I think that’s going to continue to be the case even in the next Industrial Revolution.

MARTIN FORD: The real question is whether this time it’s different. What you say about what happened previously is certainly true, but it is also true, according to most estimates, that maybe half or more of the people in the workforce are doing things that are fundamentally predictable and relatively routine, and all those jobs are going to be potentially threatened by machine learning. Automating most of those predictable jobs does not require human-level AI.

There may be new kinds of work created for robotics engineers and deep learning researchers and all of that, but you cannot take all the people that are now flipping hamburgers or driving taxis and realistically expect to transition them into those kinds of jobs, even assuming that there are going to be a sufficient number of these new jobs. We’re talking about a technology that can displace people cognitively, displace their brainpower, and it’s going to be extraordinarily broad-based.

RAY KURZWEIL: The model that’s implicit in your prediction is us versus them: what are the humans going to do versus the machines. We’ve already made ourselves smarter in order to do these higher-level types of jobs. We’ve made ourselves smarter not with things connected directly into our brains yet, but with intelligent devices. Nobody can do their jobs without these brain extenders, and the brain extenders are going to extend our brains even further, and they’re going to be more closely integrated into our lives.

One thing that we did to improve our skills is education. We had 68,000 college students in 1870, and today we have 15 million. If you take them and all the people who service them, such as faculty and staff, about 20 percent of the workforce is involved just in higher education, and we are constantly creating new things to do. The whole app economy did not exist just six years ago, and it forms a major part of the economy today. We’re going to make ourselves smarter.

A whole other thesis that needs to be looked at in considering this question is the radical abundance thesis that I mentioned earlier. I had an on-stage dialogue with Christine Lagarde, the managing director of the IMF, at the annual International Monetary Fund meeting and she said, “Where’s the economic growth associated with this? The digital world has these fantastic things, but fundamentally you can’t eat information technology, you can’t wear it, you can’t live in it,” and my response was, “All that’s going to change.”

“All those types of nominally physical products are going to become an information technology. We’re going to grow food with vertical agriculture in AI-controlled buildings with hydroponic fruits and vegetables, and in vitro cloning of muscle tissue for meat, providing very high-quality food without chemicals at very low cost, and without animal suffering. Information technology has a 50% deflation rate; you get the same computation, communication, genetic sequencing that you could purchase a year ago for half the price, and this massive deflation is going to attend to these traditionally physical products.”

MARTIN FORD: So, you think that technologies like 3D printing or robotic factories and agriculture could drive costs down for nearly everything?

RAY KURZWEIL: Exactly. 3D printers will print out clothing in the 2020s. We’re not quite there yet for various reasons, but it’s all moving in the right direction. The other physical things that we need will be printed out on 3D printers too, including modules that will snap together into a building in a matter of days. All the physical things we need will ultimately be facilitated by these AI-controlled information technologies.

Solar energy is being facilitated by applying deep learning to come up with better materials, and as a result, the cost of both energy storage and energy collection is coming down rapidly. The total amount of solar energy is doubling every two years, and the same trend exists with wind energy. Renewable energy is now only about five doublings, at two years per doubling, away from meeting 100% of our energy needs, by which point we would still be using only one part in a few thousand of the energy available from the sun and the wind.
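
Assuming the two-year doubling trend holds, the arithmetic behind this claim is straightforward:

```python
# Five doublings below 100% implies renewables currently meet about 3% of
# demand and would reach 100% in roughly a decade at two years per doubling.
current_share = 1.0 / 2 ** 5   # five doublings below 100%
print(f"{current_share:.1%}")  # 3.1%
print(f"{5 * 2} years to reach 100% at two years per doubling")
```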

Christine Lagarde said, “OK, there is one resource that will never be an information technology, and that’s land. We are already crowded together.” I responded, “That’s only because we decided to crowd ourselves together and create cities so we could work and play together.” People are already spreading out as our virtual communication becomes more robust. Try taking a train trip anywhere in the world and you will see that 95% of the land is unused.

We’re going to be able to provide a very high quality of living, beyond what we consider a high standard of living today, for all of the human population as we get to the 2030s. I made a prediction at TED that we will have universal basic income, which won’t actually need to be that much to provide a very high standard of living, as we get into the 2030s.

MARTIN FORD: So, you’re a proponent of a basic income, eventually? You agree that there won’t be a job for everyone, or maybe everyone won’t need a job, and that there’ll be some other source of income for people, like a universal basic income?

RAY KURZWEIL: We assume that a job is a road to happiness. I think the key issue will be purpose and meaning. People will still compete to be able to contribute and get gratification.

MARTIN FORD: But you don’t necessarily have to get paid for the thing that you get meaning from?

RAY KURZWEIL: I think we will change the economic model and we are already in the process of doing that. I mean, being a student in college is considered a worthwhile thing to do. It’s not a job, but it’s considered a worthwhile activity. You won’t need income from a job in order to have a very good standard of living for the physical requirements of life, and we will continue to move up Maslow’s hierarchy. We have been doing that, just compare today to 1900.

MARTIN FORD: What do you think about the perceived competition with China to get to advanced AI? China does have advantages in terms of having less regulation on things like privacy. Plus, their population is so much larger, which generates more data and also means they potentially have a lot more young Turings or von Neumanns in the pipeline.

RAY KURZWEIL: I don’t think it’s a zero-sum game. An engineer in China who comes up with a breakthrough in solar energy or in deep learning benefits all of us. China is publishing a lot, just as the United States is, and the information is actually shared pretty widely. Look at Google, which released its TensorFlow deep learning framework as open source, and we did the same in our group, with the technology underlying Talk to Books and Smart Reply being made open source so people can use it.

I personally welcome the fact that China is emphasizing economic development and entrepreneurship. When I was in China recently, the tremendous explosion of entrepreneurship was apparent. I would encourage China to move in the direction of free exchange of information; I think that’s fundamental for this type of progress. All around the world, we see Silicon Valley as a motivating model. Silicon Valley really is just a metaphor for entrepreneurship, for celebrating experimentation, and for calling failure “experience.” I think that’s a good thing; I really don’t see it as an international competition.

MARTIN FORD: But do you worry about the fact that China is an authoritarian state, and that these technologies do have, for example, military applications? Companies like Google and certainly DeepMind in London have been very clear that they don’t want their technology used in anything that is even remotely military. Companies like Tencent and Baidu in China don’t really have the option to make that choice. Is that something we should worry about, that there’s a kind of asymmetry going forward?

RAY KURZWEIL: Military use is a different issue from authoritarian government structure. I am concerned about the authoritarian orientation of the Chinese government, and I would encourage them to move toward greater freedom of information and democratic ways of governing. I think that will help them and everyone economically.

I think these political and social and philosophical issues remain very important. My concern is not that AI is going to go off and do something on its own, because I think it’s deeply integrated with us. I’m concerned about the future of the human population, which is already a human technological civilization. We’re going to continue to enhance ourselves through technology, and so the best way to assure the safety of AI is to attend to how we govern ourselves as humans.

RAY KURZWEIL is widely recognized as one of the world’s foremost inventors and futurists. Ray received his engineering degree from MIT, where he was mentored by Marvin Minsky, one of the founding fathers of the field of artificial intelligence. He went on to make major contributions in a variety of areas. He was the principal inventor of the first CCD flatbed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition system.

Among Ray’s many honors, he received a Grammy Award for outstanding achievements in music technology; he is the recipient of the National Medal of Technology (the nation’s highest honor in technology), was inducted into the National Inventors Hall of Fame, and holds twenty-one honorary doctorates and honors from three US presidents.

Ray has written five national best-selling books, including New York Times bestsellers The Singularity Is Near (2005) and How To Create A Mind (2012). He is Co-Founder and Chancellor of Singularity University and a Director of Engineering at Google, heading up a team developing machine intelligence and natural language understanding.

Ray is known for his work on exponential progress in technology, which he has formalized as “The Law of Accelerating Returns.” Over the course of decades, he has made a number of important predictions that have proven to be accurate.

Ray’s first novel, Danielle, Chronicles of a Superheroine, is being published in early 2019. Another book by Ray, The Singularity is Nearer, is expected to be published in late 2019.