Chapter 9. ANDREW NG

The rise of supervised learning has created a lot of opportunities in probably every major industry. Supervised learning is incredibly valuable and will transform multiple industries, but I think there is a lot of room for something even better to be invented.

CEO, LANDING AI & GENERAL PARTNER, AI FUND
ADJUNCT PROFESSOR OF COMPUTER SCIENCE, STANFORD

Andrew Ng is widely recognized for his contributions to artificial intelligence and deep learning, as both an academic researcher and an entrepreneur. He co-founded both the Google Brain project and the online education company, Coursera. He then became the chief scientist at Baidu, where he built an industry-leading AI research group. Andrew played a major role in the transformation of both Google and Baidu into AI-driven organizations. In 2018 he established AI Fund, a venture capital firm focused on building startup companies in the AI space from scratch.

MARTIN FORD: Let’s start by talking about the future of AI. There’s been remarkable success, but also enormous hype, associated with deep learning. Do you feel that deep learning is the way forward, the primary idea that will continue to underlie progress in AI? Or is it possible that an entirely new approach will replace it in the long run?

ANDREW NG: I really hope there’s something else out there better than deep learning. All of the economic value driven by this recent rise of AI is down to supervised learning—basically learning input and output mappings. For example, with self-driving cars the input is a video picture of what’s in front of your car, and the output is the actual position of the other cars. There are other examples: speech recognition has an input of an audio clip and an output of a text transcript, and machine translation has an input of English text and an output of, say, Chinese text.

Deep learning is incredibly effective for learning these input/output mappings and this is called supervised learning, but I think that artificial intelligence is much bigger than supervised learning.
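To make the input/output framing concrete, here is a minimal sketch of supervised learning as that kind of learned mapping, written as a toy NumPy logistic-regression example. The dataset and model are illustrative stand-ins, not anything from Ng’s actual systems.

```python
import numpy as np

# Supervised learning in miniature: learn a mapping from inputs X
# to labeled outputs y from (input, output) example pairs.
rng = np.random.default_rng(0)

# Synthetic labeled data: the label is 1 when the feature sum is positive.
X = rng.normal(size=(200, 2))          # inputs (stand-ins for pixels, audio, ...)
y = (X.sum(axis=1) > 0).astype(float)  # labeled outputs

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    w -= lr * X.T @ (p - y) / len(y)        # gradient of the log loss
    b -= lr * (p - y).mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print(f"training accuracy: {((p > 0.5) == y).mean():.2f}")  # close to 1.00
```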

The rise of supervised learning has created a lot of opportunities in probably every major industry. Supervised learning is incredibly valuable and will transform multiple industries, but I think that there is a lot of room for something even better to be invented. It’s hard to say right now exactly what that would be, though.

MARTIN FORD: What about the path to artificial general intelligence? What would you say are the primary breakthroughs that have to occur for us to get to AGI?

ANDREW NG: I think the path is very unclear. One of the things we will probably need is unsupervised learning. For example, today in order to teach a computer what a coffee mug is we show it thousands of coffee mugs, but no child’s parents, no matter how patient and loving, ever pointed out thousands of coffee mugs to that child. The way that children learn is by wandering around the world and soaking in images and audio. The experience of being a child allows them to learn what a coffee mug is. The ability to learn from unlabeled data, without parents or labelers pointing out thousands of coffee mugs, will be crucial to making our systems more intelligent.
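For contrast with the supervised sketch above, here is the simplest flavor of learning from unlabeled data: a toy k-means clustering loop in NumPy that discovers groups without ever being given a label. This is purely illustrative and vastly simpler than the kind of unsupervised learning Ng is describing.

```python
import numpy as np

# Unsupervised learning in miniature: no one labels the data;
# k-means just discovers that the points form two groups.
rng = np.random.default_rng(1)

# Unlabeled data drawn from two hidden clusters.
X = np.vstack([rng.normal(loc=-2.0, size=(100, 2)),
               rng.normal(loc=+2.0, size=(100, 2))])

k = 2
centers = X[rng.choice(len(X), size=k, replace=False)]  # random init
for _ in range(20):
    # Assign each point to its nearest center...
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    # ...then move each center to the mean of its assigned points.
    centers = np.array([X[assign == j].mean(axis=0) for j in range(k)])

print("discovered cluster centers:\n", centers.round(2))  # near (-2,-2) and (2,2)
```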

I think one of the problems in AI is that we’ve made a lot of progress in building specialized intelligence or narrow intelligence, and very little progress towards AGI. The problem is, both of these things are called AI. AI turns out to be incredibly valuable for online advertising, speech recognition and self-driving cars, but it’s specialized intelligence, not general. Much of what the public sees is progress in building specialized intelligence and they think that we are therefore making rapid progress toward artificial general intelligence. It’s just not true.

I would love to get to AGI, but the path is very unclear. I think that individuals that are less knowledgeable about AI have used very simplistic extrapolations, and that has led to unnecessary amounts of hype about AI.

MARTIN FORD: Do you expect AGI to be achieved in your lifetime?

ANDREW NG: The honest answer is that I really don’t know. I would love to see AGI in my lifetime, but I think there’s a good chance it’ll be further out than that.

MARTIN FORD: How did you become interested in AI? And how did that lead to such a varied career trajectory?

ANDREW NG: My first encounter with neural networks was when I was in high school, where I did an office assistant internship. There may not seem like an obvious link between an internship and neural networks, but during the course of my internship I thought about how we could automate some of the work that I was doing, and that was the earliest time I was thinking about neural networks. I wound up doing my bachelor’s at Carnegie Mellon, my master’s at MIT, and my PhD at the University of California, Berkeley, with a thesis titled Shaping and Policy Search in Reinforcement Learning.

For about the next twelve years, I taught as a professor at Stanford University in the Department of Computer Science and the Department of Electrical Engineering. Then, between 2011 and 2012, I was a founding member of the Google Brain team, which helped transform Google into the AI company that we now perceive it to be.

MARTIN FORD: And Google Brain was the first attempt to really use deep learning at Google, correct?

ANDREW NG: To an extent. There had been some small-scale projects based around neural networks, but the Google Brain team really was the force that took deep learning into many parts of Google. The first thing I did when I was leading the Brain team was to teach a class within Google for around 100 engineers. That class introduced a lot of Google engineers to deep learning, created allies and partners for the Google Brain team, and opened up deep learning to many more people.

The first two projects we did were partnering with the speech team, which I think helped transform speech recognition at Google, and working on unsupervised learning, which led to the somewhat infamous Google cat. This is where we set an unsupervised neural network free on YouTube data and it learned to recognize cats. Unsupervised learning isn’t what actually creates the most value today, but that was a nice technology demonstration of the type of scale we could achieve using Google’s compute cluster at the time. We were able to do very large-scale deep learning algorithms.

MARTIN FORD: You stayed at Google until 2012. What came next for you?

ANDREW NG: Towards the end of my time at Google, I felt that deep learning should move toward GPUs. As a result, I wound up doing that work at Stanford University rather than at Google. In fact, I remember a conversation that I had with Geoff Hinton at NIPS, the annual conference on Neural Information Processing Systems, where I was trying to persuade him to use GPUs, and I think that later influenced his work with Alex Krizhevsky and, in turn, influenced quite a lot of people to adopt GPUs for deep learning.

I was lucky to be teaching at Stanford at the time because being here in Silicon Valley, we saw the signals that GPGPU (general-purpose GPU) computing was coming. We were in the right place at the right time and we had friends at Stanford working on GPGPUs, so we saw the ability of GPUs to help scale up deep learning algorithms earlier than almost everyone else.

My former student at Stanford, Adam Coates, was actually the reason I decided to pitch the Google Brain team to Larry Page in a bid to get Larry to approve me using a lot of their computers to build a very large neural network. It was really one figure, generated by Adam Coates, where the x-axis was the amount of data and the y-axis was the performance of an algorithm. Adam’s figure showed that the more data we could train these deep learning algorithms on, the better they’d perform.
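The shape of that figure is easy to reproduce in miniature. The sketch below, on entirely synthetic data, trains the same simple model on ever-larger training sets and prints test accuracy; the numbers are illustrative, not Adam Coates’ original results.

```python
import numpy as np

# Illustration of the figure described above: x-axis = amount of
# training data, y-axis = performance of the learned model.
rng = np.random.default_rng(2)

def test_accuracy(n_train):
    # A noisy linear task; more training data gives a better fit.
    X = rng.normal(size=(n_train + 1000, 5))
    y = X @ np.ones(5) + rng.normal(scale=2.0, size=len(X)) > 0
    Xtr, ytr = X[:n_train], y[:n_train]
    Xte, yte = X[n_train:], y[n_train:]
    w, *_ = np.linalg.lstsq(Xtr, ytr * 2.0 - 1.0, rcond=None)  # linear fit
    return ((Xte @ w > 0) == yte).mean()

for n in [10, 100, 1_000, 10_000]:
    print(f"train size {n:>6}: test accuracy {test_accuracy(n):.3f}")
```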

MARTIN FORD: After that you went on to start Coursera with Daphne Koller, who is also interviewed in this book. Then you moved on to Baidu. Can you describe your path through those roles?

ANDREW NG: Yes, I helped to start Coursera with Daphne because I wanted to scale online teaching, both around AI and other things, to millions of people around the world. I felt that the Google Brain team already had tremendous momentum at that point, so I was very happy to hand the reins over to Jeff Dean and move on to Coursera. I worked on building Coursera from the ground up for a couple of years until 2014, when I stepped away from my day-to-day work there to go and work at Baidu’s AI Group. Just as Google Brain helped transform Google into the AI company you perceive it to be today, the Baidu AI Group did a lot of work to transform Baidu into the AI company that many people now perceive it to be. At Baidu, I built a team that built technology, supported existing business units, and then systematically initiated new businesses using AI.

After three years there the team was running very well, so I decided to move on again, this time becoming the CEO of Landing AI and a general partner at AI Fund.

MARTIN FORD: You’ve been instrumental in transforming both Google and Baidu into AI-driven companies, and it sounds like now you want to scale that out and transform everything else. Is that your vision for AI Fund and Landing AI?

ANDREW NG: Yes, I’m done transforming large web search engines, and now I’d rather go and transform some other industries. At Landing AI, I help to transform companies using AI. There are a lot of opportunities in AI for incumbent companies, so Landing AI is focused on helping those companies that already exist to transform and embrace those AI opportunities. AI Fund takes this a step further, looking at the opportunities for new startups and new businesses to be created from scratch built around AI technologies.

These are very different models with different opportunities. For example, if you look at the last major technological transformation, the rise of the internet, incumbent companies like Apple and Microsoft did a great job transforming themselves into internet companies. However, you only have to look at how big the “startups” like Google, Amazon, Baidu, and Facebook are now to see what a great job they did building incredibly valuable businesses on the rise of the internet.

With the rise of AI there will also be some incumbent companies, ironically many of them startups in the previous age, like Google, Amazon, Facebook, and Baidu, that’ll do very well. AI Fund is trying to create the new startup companies that leverage these new AI capabilities we have. We want to find or create the next Google or Facebook.

MARTIN FORD: There are a lot of people who say that the incumbents like Google and Baidu are essentially unshakable because they have access to so much data, and that creates a barrier to entry for smaller companies. Do you think startups and smaller companies are going to struggle to get traction in the AI space?

ANDREW NG: That data asset that the large search engines have definitely creates a highly defensible barrier to the web search business, but at the same time, it’s not obvious how web search clickstream data is useful for medical diagnosis or for manufacturing or for personalized educational tutors, for example.

I think data is actually verticalized, so building a defensible business in one vertical can be done with a lot of data from that vertical. Just as electricity transformed multiple industries 100 years ago, AI will transform multiple industries, and I think that there is plenty of room for multiple companies to be very successful.

MARTIN FORD: You mentioned AI Fund, which you founded recently and which I think operates differently from other venture capital funds. What is your vision for AI Fund, and how is it unique?

ANDREW NG: Yes, AI Fund is extremely different from most venture capital funds. Most venture capital funds are in the business of trying to identify winners, while we’re in the business of creating winners. We build startups from scratch, and we tell entrepreneurs that if you already have a pitch deck, you’re probably at too late a stage for us.

We bring in teams as employees and work with them, mentor them, and support them, whatever is needed to try and build a successful startup from scratch. We actually tell people that if you’re interested in working with us, don’t send us a pitch deck, send us a resume and then we’ll work together to flesh out the startup idea.

MARTIN FORD: Do most people that come to you already have an idea, or do you help them come up with something?

ANDREW NG: If they have an idea we’re happy to talk about it, but my team has a long list of ideas that we think are promising but we don’t have the bandwidth to invest in. When people join us, we’re very happy to share this long list of ideas with them to see which ones fit.

MARTIN FORD: It sounds like your strategy is to attract AI talent in part by offering the opportunity and infrastructure to found a startup venture.

ANDREW NG: Yes, although building a successful AI company takes more than AI talent. We focus so much on the technology because it’s advancing so quickly, but building a strong AI team often needs a portfolio of different skills, ranging from the tech to business strategy, product, marketing, and business development. Our role is building full-stack teams that are able to build concrete business verticals. The technology is super important, but a startup is much more than technology.

MARTIN FORD: So far, it seems that any AI startup that demonstrates real potential gets acquired by one of the huge tech firms. Do you think that eventually there’ll be AI startups that will go on to have IPOs and become public companies?

ANDREW NG: I really hope there’ll be plenty of great AI startups that are not just acquired by much larger companies. An initial public offering is a tactic, not the goal, but I certainly hope that there’ll be many very successful AI startups that will end up thriving as standalone entities for a long time. We don’t really have a financial goal; the goal is to do something good in the world. I’d be really sad if every AI startup ends up being acquired by a bigger company, and I don’t think we’re headed there.

MARTIN FORD: Lately, I’ve heard a number of people express the view that deep learning is over-hyped and might soon “hit a wall” in terms of continued progress. There have even been suggestions that a new AI Winter could be on the horizon. Do you think that’s a real risk? Could disillusionment lead to a big drop off in investment?

ANDREW NG: No, I don’t think there’ll be another AI winter, but I do think there needs to be a reset of expectations about AGI. In the earlier AI winters, there was a lot of hype about technologies that ultimately did not really deliver. The technologies that were hyped were really not that useful, and the amount of value created by those earlier generations of technology was vastly less than expected. I think that’s what caused the AI winters.

In the current era, if you look at the number of people actually working on deep learning projects today, it’s much greater than six months ago, and six months ago, it was much greater than six months before that. The number of concrete projects in deep learning, the number of people researching it, the number of people learning it, and the number of companies being built on it are all growing, which means the amount of revenue being generated is growing very strongly.

The economic fundamentals support continued investment in deep learning. Large companies are continuing to back deep learning strongly, and it’s not based on just hopes and dreams; it’s based on the results we’re already seeing. That will keep confidence growing. Now, I do think we need to reset the expectations about AI as a whole, and AGI in particular. I think the rise of deep learning was unfortunately coupled with false hopes and dreams of a sure path to achieving AGI, and I think that resetting everyone’s expectations about that would be very helpful.

MARTIN FORD: So, aside from unrealistic expectations about AGI, do you think we will continue to see consistent progress with the use of deep learning in more narrow applications?

ANDREW NG: I think there are a lot of limitations to the current generation of AI. AI is a broad category, though, and I think when people discuss AI, what they really mean is the specific toolset of backpropagation, supervised learning, and neural networks. That is the piece of deep learning most people are working with right now.

Of course, deep learning is limited, just like the internet is limited, and electricity is limited. Just because we invented electricity as a utility, it didn’t suddenly solve all of the problems of humanity. In the same way, backpropagation will not solve all the problems of humanity, but it is turning out to be incredibly valuable, and we’re nowhere near done building out all the things we could do with neural networks trained by backpropagation. We’re just in the early phases of figuring out the implications of even the current generation of technology.
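For readers who have never seen that toolset spelled out, here is the whole thing in one self-contained sketch: a tiny neural network, supervised labels, and backpropagation with gradient descent, written in plain NumPy. XOR is a classic toy problem used here purely for illustration.

```python
import numpy as np

# A two-layer neural network trained by backpropagation on XOR.
rng = np.random.default_rng(3)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR labels (supervised)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
lr = 0.5

for _ in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    # Backward pass: propagate the log-loss gradient layer by layer.
    dlogits = (p - y) / len(X)
    dW2, db2 = h.T @ dlogits, dlogits.sum(axis=0)
    dh = dlogits @ W2.T * (1 - h ** 2)       # chain rule through tanh
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    # Gradient descent update.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(p.ravel().round(2))  # approximately [0, 1, 1, 0]
```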

Sometimes, when I’m giving a talk about AI, the first thing I say is “AI is not magic, it can’t do everything.” I think it’s very strange that we live in a world where anyone even has to say sentences like that—that there’s a technology that cannot do everything.

The huge problem that AI has had is what I call the communications problem. There’s been tremendous progress in narrow artificial intelligence and hardly any progress in artificial general intelligence, but both of these things are called AI. So, tremendous economic progress and value created through narrow artificial intelligence is rightly causing people to see that there’s tremendous progress in AI, but it’s also causing people to falsely reason that there’s tremendous progress in AGI as well. Frankly, I do not see much progress. Other than having faster computers and more data, and progress at a very general level, I do not see specific progress toward AGI.

MARTIN FORD: There seem to be two general camps with regard to the future of AI. Some people believe it will be neural networks all the way, while others think a hybrid approach that incorporates ideas from other areas, for example symbolic logic, will be required to achieve continued progress. What’s your view?

ANDREW NG: I think it depends on whether you’re talking short term or long term. At Landing AI we use hybrids all the time to build solutions for industrial partners. There’s often a hybrid of deep learning tools together with, say, traditional computer vision tools because when your datasets are small, deep learning by itself isn’t always the best tool. Part of the skill of being an AI person is knowing when to use a hybrid and how to put everything together. That’s how we deliver tons of short-term useful applications.

On balance, there’s been a shift from traditional tools toward deep learning, especially when you have a lot of data, but there are still plenty of problems in the world where you have only small datasets, and then the skill is in designing the hybrid and getting the right mix of techniques.
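Here is one hedged sketch of what such a hybrid can look like when data is scarce: hand-engineered, traditional computer-vision-style features feeding a very small classifier, rather than a deep network trained end to end. The feature functions and the texture task are invented for illustration and are not Landing AI’s actual pipeline.

```python
import numpy as np

# Hybrid approach for a tiny dataset: traditional hand-designed image
# features plus a simple nearest-class-mean classifier.

def edge_energy(img):
    # Classic gradient-magnitude feature from traditional computer vision.
    return np.abs(np.diff(img, axis=1)).mean() + np.abs(np.diff(img, axis=0)).mean()

def features(img):
    return np.array([edge_energy(img), img.mean(), img.std()])

def fit(imgs, labels):
    # With features this compact, a handful of labeled images is enough.
    F = np.array([features(im) for im in imgs])
    return {c: F[np.array(labels) == c].mean(axis=0) for c in set(labels)}

def predict(model, img):
    f = features(img)
    return min(model, key=lambda c: np.linalg.norm(f - model[c]))

rng = np.random.default_rng(4)
smooth = [0.1 * rng.normal(size=(16, 16)) for _ in range(5)]  # low texture
rough = [rng.normal(size=(16, 16)) for _ in range(5)]         # high texture
model = fit(smooth + rough, ["smooth"] * 5 + ["rough"] * 5)
print(predict(model, rng.normal(size=(16, 16))))  # -> "rough"
```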

I think in the long term, if we ever move toward more human-level intelligence, maybe not full AGI but more flexible learning algorithms, we’ll continue to see a shift toward neural networks, but one of the most exciting things yet to be invented will be other algorithms that are much better than backpropagation. Just like alternating current power is incredibly limited but also incredibly useful, I think backpropagation is incredibly limited but incredibly useful, and I don’t see any contradiction in that.

MARTIN FORD: So, as far as you’re concerned, neural networks are clearly the best technology to take AI forward?

ANDREW NG: I think that for the foreseeable future, neural networks will have a very central place in the AI world. I don’t see any candidates on the horizon for replacing neural networks, but that’s not to say that something won’t emerge in the future.

MARTIN FORD: I recently spoke with Judea Pearl, and he believes very strongly that AI needs a causal model in order to progress and that current AI research isn’t giving enough attention to that. How would you respond to that view?

ANDREW NG: There are hundreds of different things that deep learning doesn’t do, and causality is one of them. There are others: we don’t do explainability well enough; we need to sort out how to defend against adversarial attacks; we need to get a lot better at learning from small datasets rather than big datasets; we need to get much better at transfer or multitask learning; and we need to figure out how to use unlabeled data better. So yes, there are a lot of things that backpropagation doesn’t do well, and again causality is one of them. When I look at the number of high-value projects being created, I don’t see causality as the factor holding them back, but of course we’d love to make progress there. We’d love to make progress on all of those things I mentioned.

MARTIN FORD: You mentioned adversarial attacks. I’ve seen research indicating that it is fairly easy to trick deep learning networks using manufactured data. Is that going to be a big problem as this technology becomes more prevalent?

ANDREW NG: I think it is already a problem, especially in anti-fraud. When I was head of the Baidu AI team, we were constantly fighting against fraudsters both attacking AI systems and using AI tools to commit fraud. This is not a futuristic thing. I’m not fighting that war right now, because I’m not leading an anti-fraud team, but I have led teams that do, and fighting fraud feels very adversarial and very zero-sum. The fraudsters are very smart and very sophisticated, and just as we think multiple steps ahead, they think multiple steps ahead. As the technology evolves, the attacks and the defenses will both have to evolve. This is something that those of us shipping products in the AI community have been dealing with for a few years already.
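To show how concrete these attacks are, below is a hedged sketch of one classic technique, the fast gradient sign method (FGSM), applied to a toy linear classifier. Real attacks on deployed systems are far more sophisticated; the model and numbers here are stand-ins.

```python
import numpy as np

# FGSM-style adversarial example: nudge every input coordinate slightly
# in the direction that most increases the loss, and the prediction flips.
rng = np.random.default_rng(5)

w = rng.normal(size=20)          # weights of a "trained" linear classifier
x = rng.normal(size=20)          # an input it currently classifies correctly
y = 1.0 if x @ w > 0 else -1.0   # treat the current prediction as the label

def margin(v):
    return y * (v @ w)           # positive means classified correctly

# The loss gradient w.r.t. the input points along -y*w; FGSM steps
# against the margin using only the sign of that gradient.
eps = 1.1 * margin(x) / np.abs(w).sum()  # just large enough to flip the label
x_adv = x - eps * np.sign(y * w)

print("per-coordinate change:", round(eps, 4))            # a small nudge
print("clean margin:         ", round(margin(x), 3))      # > 0, correct
print("adversarial margin:   ", round(margin(x_adv), 3))  # < 0, fooled
```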

MARTIN FORD: What about privacy issues? In China especially, facial recognition technology is becoming ubiquitous. Do you think we run the risk that AI is going to be deployed to create an Orwellian surveillance state?

ANDREW NG: I’m not an expert on that, so I’ll defer to others. One thing that I would say is that one trend we see with many rises in technology is the potential for greater concentration of power. I think this was true of the internet, and it is true again with the rise of AI. It becomes possible for smaller and smaller groups to be more and more powerful. The concentration of power can happen at the level of corporations, where corporations with relatively few employees can have a bigger influence, or at the level of governments.

The technology available to small groups is more powerful than ever before. For example, one of the risks of AI that we have already seen is the ability of a small group to influence the way very large numbers of people vote, and the implications of that for democracy are something that we need to pay close attention to, to make sure that democracy is able to defend itself so that votes are truly fair and representative of the interests of the population. What we saw in the recent US election was based more on internet technologies than AI technologies, but the opportunity is there. Before that, television had a huge effect on democracy and how people voted. As technology evolves, the nature and texture of governance and democracy changes, which is why we have to constantly refresh our commitment to protecting society from the abuse of technology.

MARTIN FORD: Let’s talk about one of the highest-profile applications of AI: self-driving cars. How far off are they really? Imagine you’re in a city and you’re going to call for a fully autonomous car that will take you from one random location to another. What’s the time frame for when you think that becomes a widely available service?

ANDREW NG: I think that self-driving cars in geofenced regions will come relatively soon, possibly by the end of this year, but that self-driving cars in more general circumstances will be a long way off, possibly multiple decades.

MARTIN FORD: By geofenced, you mean autonomous cars that are running essentially on virtual trolley tracks, or in other words only on routes that have been intensively mapped?

ANDREW NG: Exactly! A while back I co-authored a Wired article about “train terrain” (https://www.wired.com/2016/03/self-driving-cars-wont-work-change-roads-attitudes/) and how I think self-driving cars might roll out. We’ll need infrastructure changes, and societal and legal changes, before we’ll see mass adoption of self-driving cars.

I have been fortunate to have seen the self-driving industry evolve for over 20 years now. As an undergraduate at Carnegie Mellon in the late ‘90s, I did a class with Dean Pomerleau working on their autonomous car project, which steered the vehicle based on an input video image. The technology was great, but it was ahead of its time. Then at Stanford, I was a peripheral part of the DARPA Urban Challenge in 2007.

We flew down to Victorville, and it was the first time I saw so many self-driving cars in the same place. The whole Stanford team was fascinated for the first five minutes, watching all these cars zip around without drivers, and the surprising thing was that after five minutes, we acclimatized to it, and we turned our backs to it. We just chatted with each other while self-driving cars zipped past us 10 meters away, and we weren’t paying attention. One thing that’s remarkable about humanity is how quickly we acclimatize to new technologies, and I feel that it’s not going to be too long before self-driving cars are no longer called self-driving cars, they’re just called cars.

MARTIN FORD: I know you’re on the board of directors of the self-driving car company Drive.ai. Do you have an estimate for when their technology will be in general use?

ANDREW NG: They’re driving around in Texas right now. Let’s see, what time is it? Someone’s just taken one and gone for lunch. The important thing is how mundane that is. Someone’s just gone out for lunch, like any normal day, and they’ve done it by getting in a self-driving car.

MARTIN FORD: How do you feel about the progress you’ve seen in self-driving cars so far? How has it compared with your expectations?

ANDREW NG: I don’t like hype, and I feel like a few companies have spoken publicly and described what I think of as unrealistic timelines about the adoption of self-driving cars. I think that self-driving cars will change transportation, and will make human life much better. However, I think that everyone having a realistic roadmap to self-driving cars is much better than having CEOs stand on stage and proclaim unrealistic timelines. I think the self-driving world is working toward more realistic programs for bringing the tech to market, and I think that’s a very good thing.

MARTIN FORD: How do you feel about the role of government regulation, both for self-driving cars and AI more generally?

ANDREW NG: The automotive industry has always been heavily regulated because of safety, and I think that the regulation of transportation needs to be rethought in light of AI and self-driving cars. Countries with more thoughtful regulation will advance faster to embrace the possibilities enabled by, for example, AI-driven healthcare systems, self-driving cars, or AI-driven educational systems, and I think countries that are less thoughtful about regulation will risk falling behind.

Regulation should happen within these specific industry verticals, because there we can have a good debate about the outcomes. We can more easily define what we do and do not want to happen. I find it less useful to regulate AI broadly. I think that the act of thinking through the impact of AI in specific verticals for regulation will not only help those verticals grow but will also help AI develop the right solutions and be adopted faster across verticals.

I think self-driving cars are only a microcosm of a broader theme here, which is the role of government. Every time there is a technological breakthrough, regulators must act. Regulators have to act to make sure that democracy is defended, even in the era of the internet and the era of artificial intelligence. In addition to defending democracy, governments must act to make sure that their countries are well positioned for the rise of AI.

Assuming that one of governments’ primary responsibilities is the well-being of their citizens, I think that governments that act wisely can help their nations ride the rise of AI to much better outcomes for their people. In fact, even today, some governments use the internet much better than others. This applies to external websites and services for citizens, as well as internal ones: how are your government IT services organized?

Singapore has an integrated healthcare system, where every patient has a unique patient ID, and this allows for the integration of healthcare records in a way that is the envy of many other nations. Now, Singapore’s a small country, so maybe it’s easier for Singapore than for a larger country, but the way the Singapore government has shifted the healthcare system to use the internet better has a huge impact on the healthcare system, and on the health of Singaporean citizens.

MARTIN FORD: It sounds like you think the relationship between government and AI should extend beyond just regulating the technology.

ANDREW NG: I think governments have a huge role to play in the rise of AI and in making sure that, first, governance is done well with AI. For instance, should we better allocate government personnel using AI? What about forestry resources; can we allocate those better using AI? Can AI help us set better economic policies? Can the government weed out fraud—maybe tax fraud—better and more efficiently using AI? I think AI will have hundreds of applications in governance, just as AI has hundreds of applications in the big AI companies. Governments should use AI well for themselves.

For the ecosystem as well, I think public-private partnerships will accelerate the growth of domestic industry, and governments that make thoughtful regulation about self-driving cars will see self-driving accelerate in their communities. I’m very committed to my home state of California, but California regulations do not allow self-driving car companies to do certain things, which is why many self-driving car companies can’t have their home bases here and are almost forced to operate outside the state.

I think that at both the state level and the national level, places that have thoughtful policies about self-driving cars, about drones, and about the adoption of AI in payment systems and in healthcare systems, for example, will see much faster progress in how these amazing new tools can be brought to bear on some of the most important problems for their citizens. Beyond regulation and public-private partnership, to accelerate the adoption of these amazing tools, I think governments also need to come up with solutions in education and on the jobs issue.

MARTIN FORD: The impact on jobs and the economy is an area that I’ve written about a lot. Do you think we may be on the brink of a massive disruption that could result in widespread job losses?

ANDREW NG: Yes, and I think it’s the biggest ethical problem facing AI. Whilst the technology is very good at creating wealth in some segments of society, we have frankly left large parts of the United States and also large parts of the world behind. If we want to create not just a wealthy society but a fair one, then we still have a lot of important work to do. Frankly, that’s one of the reasons why I remain very engaged in online education.

I think our world is pretty good at rewarding people who have the required skills at a particular time. If we can educate people to reskill even as their jobs are displaced by technology, then we have a much better chance of making sure that this next wave of wealth creation ends up being distributed in a more equitable way. A lot of the hype about evil AI killer robots distracts leaders from the much harder, but much more important conversation about what we do about jobs.

MARTIN FORD: What do you think of a universal basic income as part of a solution to that problem?

ANDREW NG: I don’t support a universal basic income, but I do think a conditional basic income is a much better idea. There’s a lot to be said for the dignity of work, and I actually favor a conditional basic income in which unemployed individuals can be paid to study. This would increase the odds that someone who’s unemployed will gain the skills they need to re-enter the workforce and contribute back to the tax base that is paying for the conditional basic income.

I think in today’s world, there are a lot of jobs in the gig economy, where you can earn enough of a wage to get by, but there isn’t much room for lifting up yourself or your family. I am very concerned about an unconditional basic income causing a greater proportion of the human population to become trapped doing this low-wage, low-skilled work.

A conditional basic income that encourages people to keep learning and keep studying will make many individuals and families better off because we’re helping people get the training they need to then do higher-value and better-paying jobs. We see economists write reports with statistics like “in 20 years, 50% of jobs are at risk of automation,” and that’s really scary, but the flip side is that the other 50% of jobs are not at risk of automation.

In fact, we can’t find enough people to do some of these jobs. We can’t find enough healthcare workers, we can’t find enough teachers in the United States, and surprisingly we can’t seem to find enough wind turbine technicians.

The question is, how do people whose jobs are displaced take on these other great-paying, very valuable jobs that we just can’t find enough people to do? The answer is not for everyone to learn to program. Yes, I think a lot of people should learn to program, but we also need to skill up more people in healthcare, in education, in wind turbine maintenance, and in other in-demand, rising categories of jobs.

I think we’re moving away from a world where you have one career in your lifetime. Technology changes so fast that there will be people who thought they were doing one thing when they went to college who will realize that the career they set out toward when they were 17 years old is no longer viable, and that they should branch into a different career.

We’ve seen how millennials are more likely to hop among jobs, where you go from being a product manager in one company to the product manager of a different company. I think that in the future, increasingly we’ll see people going from being a material scientist in one company to being a biologist in a different company, to being a security researcher in a third company. This won’t happen overnight, it will take a long time to change. Interestingly, though, in my world of deep learning, I already see many people doing deep learning that did not major in computer science, they did subjects like physics, astronomy, or pure mathematics.

MARTIN FORD: Is there any particular advice you’d give to a young person who is interested in a career in AI, or in deep learning specifically? Should they focus entirely on computer science or is brain science, or the study of cognition in humans also important?

ANDREW NG: I would say to study computer science, machine learning, and deep learning. Knowledge of brain science or physics is all useful, but the most time-efficient route to a career in AI is computer science, machine learning and deep learning. Because of YouTube videos, talks, and books, I think it’s easier than ever for someone to find materials and study by themselves, just step by step. Things don’t happen overnight, but step by step, I think it’s possible for almost anyone to become great at AI.

There are a couple of pieces of advice that I tend to give to people. Firstly, people don’t like to hear that it takes hard work to master a new field, but it does take hard work, and the people who are willing to work hard at it will learn faster. I know that it’s not possible for everyone to learn a certain number of hours every week, but people that are able to find more time to study will just learn faster.

The other piece of advice I tend to give people is this: let’s say you’re currently a doctor and you want to break into AI. As a doctor, you’d be uniquely positioned to do very valuable work in healthcare that very few others can do. If you are currently a physicist, see if there are some ideas on AI applied to physics. If you’re a book publisher, see if there’s some work you can do with AI in book publishing, because that’s one way to leverage your unique strengths and to complement them with AI, rather than competing on a more even playing field with the fresh college grad stepping into AI.

MARTIN FORD: Beyond the possible impact on jobs, what are the other risks associated with AI that you think we should be concerned about now or in the relatively near future?

ANDREW NG: I like to relate AI to electricity. Electricity is incredibly powerful and on average has been used for tremendous good, but it can also be used to harm people. AI is the same. In the end, it’s up to individuals, as well as companies and governments, to try to make sure we use this new superpower in positive and ethical ways.

I think that bias in AI is another major issue. AI that learns from human-generated text data can pick up unhealthy gender and racial stereotypes. AI teams are aware of this and are actively working on it, and I am very encouraged that today we have better ideas for reducing bias in AI than we do for reducing bias in humans.

MARTIN FORD: Addressing bias in people is very difficult, so it does seem like it might be an easier problem to solve in software.

ANDREW NG: Yes, you can zero out a number in a piece of AI software and it will exhibit much less gender bias; we don’t have similarly effective ways of reducing gender bias in people. I think that soon we might see AI systems that are less biased than many humans. That is not to say that we should be satisfied with just having less bias; there’s still a lot of work to do, and we should keep on working to reduce that bias.
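To show what “zeroing out a number” can mean in practice, here is a hedged sketch in the spirit of published word-embedding debiasing work: estimate a gender direction in the embedding space and project it out of a word vector. The vectors below are random stand-ins, not real trained embeddings.

```python
import numpy as np

# Debiasing sketch: remove the component of a word vector that lies
# along an estimated gender direction.
rng = np.random.default_rng(6)

dim = 8
gender = rng.normal(size=dim)
gender /= np.linalg.norm(gender)   # unit-length "he minus she" direction

# A toy occupation vector contaminated with a gender component.
doctor = rng.normal(size=dim) + 0.8 * gender

def debias(v, direction):
    # Zero v's coordinate along `direction` (a projection: one number,
    # in the right basis, set to zero).
    return v - (v @ direction) * direction

doctor_fixed = debias(doctor, gender)
print("gender component before:", round(doctor @ gender, 3))
print("gender component after: ", round(doctor_fixed @ gender, 3))  # 0.0
```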

MARTIN FORD: What about the concern that a superintelligent system might someday break free of our control and pose a genuine threat to humanity?

ANDREW NG: I’ve said before that worrying about AGI evil killer robots today is like worrying about overpopulation on the planet Mars. A century from now I hope that we will have colonized the planet Mars. By that time, it may well be overpopulated and polluted, and we might even have children dying on Mars from pollution. It’s not that I’m heartless and don’t care about those dying children—I would love to find a solution to that, but we haven’t even landed on the planet yet, so I find it difficult to productively work on that problem.

MARTIN FORD: You don’t think then that there’s any realistic fear of what people call the “fast takeoff” scenario, where an AGI system goes through a recursive self-improvement cycle and rapidly becomes superintelligent?

ANDREW NG: A lot of the hype about superintelligence and exponential growth was based on very naive and very simplistic extrapolations. It’s easy to hype almost anything. I don’t think that there is a significant risk of superintelligence coming out of nowhere in the blink of an eye, in the same way that I don’t see Mars becoming overpopulated overnight.

MARTIN FORD: What about the question of competition with China? It’s often pointed out that China has certain advantages, like access to more data due to a larger population and fewer concerns about privacy. Are they going to outrun us in AI research?

ANDREW NG: How did the competition for electricity play out? Some countries like the United States have a much more robust electrical grid than some developing economies, so that’s great for the United States. However, I think the global AI race is much less of a race than the popular press sometimes presents it to be. AI is an amazing capability, and I think every country should figure out what to do with this new capability, but I think that it is much less of a race than the popular press suggests.

MARTIN FORD: AI clearly does have military applications, though, and potentially could be used to create automated weapons. There’s currently a debate in the United Nations about banning fully autonomous weapons, so it’s clearly something people are concerned about. That’s not futuristic AGI-related stuff, but rather something we could see quite soon. Should we be worried?

ANDREW NG: The internal combustion engine, electricity, and integrated circuits all created tremendous good, but they were all useful for the military. It’s the same with any new technology, including AI.

MARTIN FORD: You’re clearly an optimist where AI is concerned. I assume you believe that the benefits are going to outweigh the risks as artificial intelligence advances?

ANDREW NG: Yes, I do. I’ve been fortunate to be on the front lines, shipping AI products for the last several years and I’ve seen firsthand the way that better speech recognition, better web search, and better optimized logistics networks help people.

This is the way that I think about the world, which may be a very naïve way. The world’s gotten really complicated, and the world’s not the way I want it to be. Frankly, I miss the times when I could listen to political leaders and business leaders, and take much more of what they said at face value.

I miss the times when I had greater confidence in many companies and leaders to behave in an ethical way and to mean what they say and say what they mean. If you think about your as-yet-unborn grandchildren or your unborn great-great-grandchildren, I don’t think the world is yet the way that you want it to be for them to grow up in. I want democracy to work better, and I want the world to be fairer. I want more people to behave ethically and to think about the actual impact on other people, and I want everyone to have access to an education. I want people to work hard, but to keep studying, and to do work that they find meaningful, and I think many parts of the world are not yet the way we would all like them to be.

Every time there’s a technological disruption, it gives us the opportunity to make a change. I would like my teams, as well as other people around the world to take a shot at making the world a better place in the ways that we want it to be. I know that sounds like I’m a dreamer, but that’s what I actually want to do.

MARTIN FORD: I think that’s a great vision. I guess the problem is that it’s a decision for society as a whole to set us on the path to that kind of optimistic future. Are you confident that we’ll make the right choices?

ANDREW NG: I don’t think it will be in a straight line, but I think there are enough honest, ethical, and well-meaning people in the world to have a very good shot at it.

ANDREW NG is one of the most recognizable names in AI and machine learning. He co-founded the Google Brain deep learning project as well as the online education company Coursera. Between 2014 and 2017, he was a vice president and chief scientist at Baidu, where he built the company’s AI group into an organization with several thousand people. He is generally credited with playing a major role in the transformation of both Google and Baidu into AI-driven companies.

Since leaving Baidu, Andrew has undertaken a number of projects including launching deeplearning.ai, an online education platform geared toward educating deep learning experts, as well as Landing AI, which seeks to transform enterprises with AI. He’s currently the chairman of Woebot, a startup focused on mental health applications for AI and is on the board of directors of self-driving car company Drive.ai. He is also the founder and General Partner at AI Fund, a venture capital firm that builds new AI startups from the ground up.

Andrew is currently an adjunct professor at Stanford University, where he was formerly an associate professor and Director of the Stanford AI Lab. He received his undergraduate degree in computer science from Carnegie Mellon University, his master’s degree from MIT, and his PhD from the University of California, Berkeley.