Games are just our training domain. We’re not doing all this work just to solve games; we want to build these general algorithms that we can apply to real-world problems.
CO-FOUNDER & CEO OF DEEPMIND
AI RESEARCHER AND NEUROSCIENTIST
Demis Hassabis is a former child chess prodigy who started coding and designing video games professionally at age 16. After graduating from Cambridge University, Demis spent a decade founding and leading successful startups focused on video games and simulation. He returned to academia to complete a PhD in cognitive neuroscience at University College London, followed by postdoctoral research at MIT and Harvard. He co-founded DeepMind in 2010. DeepMind was acquired by Google in 2014 and is now part of Alphabet’s portfolio of companies.
MARTIN FORD: I know you had a very strong interest in chess and video games when you were younger. How has that influenced your career in AI research and your decision to found DeepMind?
DEMIS HASSABIS: I was a professional chess player in my childhood with aspirations of becoming the world chess champion. I was an introspective kid and I wanted to improve my game, so I used to think a lot about how my brain was coming up with these ideas for moves. What are the processes that are going on there when you make a great move or a blunder? So, very early on I started to think a lot about thinking, and that led me to my interest in things like neuroscience later on in my life.
Chess, of course, has a deeper role in AI. The game itself has been one of the main problem areas for AI research since the dawn of AI. Some of the early pioneers in AI like Alan Turing and Claude Shannon were very interested in computer chess. When I was 8 years old, I purchased my first computer using the winnings from the chess tournaments that I entered. One of the first programs that I remember writing was for a game called Othello—also known as Reversi—and while it’s a simpler game than chess, I used the same ideas that those early AI pioneers had been using in their chess programs, like alpha-beta search, and so on. That was my first exposure to writing an AI program.
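For readers curious what that early technique looks like, here is a minimal sketch of alpha-beta search, the pruning idea from those pioneering chess programs, demonstrated on a toy take-1-or-2-stones game standing in for Othello or chess. The game and names are purely illustrative, not code from any actual program:

```python
import math

def alphabeta(pile, alpha, beta, maximizing):
    """Minimax value of a toy game (take 1 or 2 stones; taking the
    last stone wins), with alpha-beta pruning of hopeless branches."""
    if pile == 0:
        # The previous player took the last stone, so the side to move lost.
        return -1 if maximizing else 1
    best = -math.inf if maximizing else math.inf
    for take in (1, 2):
        if take > pile:
            break
        value = alphabeta(pile - take, alpha, beta, not maximizing)
        if maximizing:
            best = max(best, value)
            alpha = max(alpha, best)
        else:
            best = min(best, value)
            beta = min(beta, best)
        if alpha >= beta:  # prune: the opponent will never allow this line
            break
    return best

print(alphabeta(9, -math.inf, math.inf, True))   # -1: multiples of 3 lose
print(alphabeta(10, -math.inf, math.inf, True))  # +1: the side to move can win
```

The pruning is what made deep search feasible on the hardware of the era: whole subtrees are skipped once it is clear the opponent would never steer the game into them.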
My love of chess and games got me into programming, and specifically into writing AI for games. The next stage for me was to combine my love of games and programming into writing commercial video games. One key theme that you’ll see in a lot of my games, from Theme Park (1994) to Republic: The Revolution (2003), is that they had simulation at the heart of their gameplay. The games presented players with sandboxes populated by characters that reacted to the way you played. It was AI underpinning those characters, and that was always the part that I worked on specifically.
The other thing that I was doing with games was training my mind on certain capabilities. For example, with chess, I think it’s a great thing for kids to learn at school because it teaches problem-solving, planning, and all sorts of other meta-skills that I think are then useful and translatable to other domains. Looking back, perhaps all of that information was in my subconscious when I started DeepMind and started using games as a training environment for our AI systems.
The final step for me, before starting DeepMind, was taking my undergraduate computer science degree at Cambridge University. At the time, which was the early 2000s, I felt that as a field we didn’t have quite enough ideas to attempt to climb the Everest of AGI. This led me to my PhD in neuroscience, because I felt we needed a better understanding of how the brain solved some of these complex capabilities, so that we could be inspired by that to come up with new algorithmic ideas. I learned a lot about memory and imagination—topics that we didn’t at the time, and in some cases still don’t, know how to get machines to do. All those different strands then came together into DeepMind.
MARTIN FORD: Your focus then, right from the beginning, has been on machine intelligence and especially AGI?
DEMIS HASSABIS: Exactly. I’ve known I wanted to do this as a career since my early teens. That journey started with my first computer. I realized straight away that a computer was a magical tool because most machines extend your physical capability, but here was a machine that could extend your mental capabilities.
I still get excited by the fact that you can write a program to crunch a scientific problem, set it running, go off to sleep, and then when you wake up in the morning it’s solved it. It’s almost like outsourcing your problems to the machine. This led me to think of AI as the natural next step, or even the end step, where we get machines to be smarter in themselves so they’re not just executing what you’re giving them, but they’re actually able to come up with their own solutions.
I’ve always wanted to work on learning systems that learn for themselves, and I’ve always been interested in the philosophical question of what intelligence is and how we can recreate that phenomenon artificially, which is what led me to create DeepMind.
MARTIN FORD: There aren’t many examples of pure AGI companies around. One reason is that there’s not really a business model for doing that; it’s hard to generate revenue in the short term. How did DeepMind overcome that?
DEMIS HASSABIS: From the beginning, we were an AGI company, and we were very clear about that. Our mission statement of solving intelligence was there from the beginning. As you can imagine, trying to pitch that to standard venture capitalists was quite hard.
Our thesis was that because what we were building was a general-purpose technology, if you could build it powerfully enough, general enough, and capable enough, then there should be hundreds of amazing applications for it. You’d be inundated with incoming possibilities and opportunities, but you would require a large amount of upfront research first from a group of very talented people that we’d need to get together. We thought that was defensible because of the small number of people in the world who could actually work on this, especially if you think back to 2009 and 2010 when we first started out. You could probably count fewer than 100 people who could contribute to that type of work. Then there was the question of whether we could demonstrate clear and measurable progress.
The problem with having a large and long-term research goal is how your funders get confidence that you actually know what you’re talking about. With a typical company, your metric is your product and the number of users, something that’s easily measurable. The reason why a company like DeepMind is so rare is that it’s very hard for an external non-specialist, like a venture capitalist, to judge whether you’re making sense and your plan really is sensible, or whether you’re just crazy.
The line is very thin, especially when you’re going very far out, and in 2009 and 2010 no one was talking about AI. AI was not the hot topic that it is today. It was really difficult for me to get my initial seed funding because of the previous 30 years of failed promises in AI. We had some very strong hypotheses as to why that was, and those were the pillars that we were basing DeepMind on. Things like taking inspiration from neuroscience, which had massively improved our understanding of the brain in the preceding 10 years; building learning systems rather than traditional expert systems; and using benchmarking and simulations for the rapid development and testing of AI. There was a set of things that we committed to that turned out to be correct and that explained why AI hadn’t progressed in the previous years. Another very powerful factor was that these new techniques required a lot of computing power, which was now becoming available in the form of GPUs.
Our thesis made sense to us, and in the end, we managed to convince enough people, but it was hard because we were operating at that point in a very skeptical, unfashionable domain. Even in academia, AI was frowned upon. It had been rebranded “machine learning,” and people who worked on AI were considered fringe elements. It’s amazing to see how quickly all of that has changed.
MARTIN FORD: Eventually you were able to secure the funding to be viable as an independent company. But then you decided to let Google acquire DeepMind. Can you tell me about the rationale behind the acquisition and how that happened?
DEMIS HASSABIS: It’s worth noting that we had no plans to sell, partly because we figured no big corporate would understand our value until DeepMind started producing products. It’s also not fair to say that we didn’t have a business model; we did, we just hadn’t gone very far down the line of executing it. We already had some cool technology: DQN (deep Q-network, our first general-purpose learning model) and our Atari work had already been done by 2013. But then Larry Page, the co-founder of Google, heard about us through some of our investors, and out of the blue in 2013 I received an email from Alan Eustace, who was running search and research at Google, saying that Larry had heard of DeepMind and would like to have a chat.
That was the start, but the process took a long time because there were a lot of things I wanted to be sure of before we joined forces with Google. At the end of the day, I became convinced that by combining with Google’s strengths and resources, their computing power and their ability to construct a much bigger team, we would be able to execute on our mission much more quickly. It wasn’t to do with money; our investors were willing to increase funding to keep us going independently. But DeepMind has always been about delivering AGI and using it for the benefit of the world, and there was an opportunity with Google to accelerate that.
Larry and the people at Google were just as passionate about AI as I was, and they understood how important the work we would do would be. They agreed to give us autonomy over our research roadmap and our culture, and also to our staying in London, which was very important to me. Finally, they also agreed to set up an ethics board for our technology, which was very unusual but very prescient of them.
MARTIN FORD: Why did you choose to be in London, and not Silicon Valley? Is that a Demis Hassabis or a DeepMind thing?
DEMIS HASSABIS: Both really. I’m a born-and-bred Londoner, and I love London, but at the same time, I thought it was a competitive advantage because the UK and Europe have amazing universities in the field of AI, like Cambridge and Oxford. At the time there was also no really ambitious research company in the UK, or indeed in Europe, so our hiring prospects were strong, especially with all these universities producing great graduate and postgraduate students.
In 2018 there are a number of such companies in Europe, but we were the first in AI doing deep research. More broadly, I think it’s culturally important that we have more stakeholders and cultures involved in making AI, not just Silicon Valley in the United States, but also European sensibilities and Canadian ones, and so on. Ultimately, this is going to be of global significance, and having different voices about how to use it, what to use it for, and how to distribute the proceeds is important.
MARTIN FORD: I believe you’re also opening up labs in other European cities?
DEMIS HASSABIS: We’ve opened a small research lab in Paris, which is our first continental European office. We’ve also opened two labs in Canada, in Alberta and Montreal. More recently, since joining Google, we also have an applied-team office in Mountain View, California, right next to the Google teams that we work with.
MARTIN FORD: How closely do you work with the other AI teams at Google?
DEMIS HASSABIS: Google’s a huge place, and there are thousands of people working on every aspect of machine learning and AI, from a very applied perspective to a pure research point of view. As a result, there are a number of team leads who all know each other, and there’s a lot of cross-collaboration, both with product teams and research teams. It tends to be ad hoc, so it depends on individual researchers or individual topics, but we keep each other informed at a high level of our overall research directions.
At DeepMind, we’re quite different from other teams in that we’re focused on this one moonshot goal of AGI. We’re organized around a long-term roadmap based on our neuroscience-based thesis, which lays out what intelligence is and what’s required to get there.
MARTIN FORD: DeepMind’s accomplishments with AlphaGo are well documented. There’s even a documentary film about it (https://www.alphagomovie.com/), so I wanted to focus more on your latest innovation, AlphaZero, and on your plans for the future. It seems to me that you’ve demonstrated something very close to a general solution for information-complete two-player games; in other words, games where everything that can be known is available there on the board or in terms of pixels on the screen. Going forward, are you finished with that type of game? Are you planning to move on to more complex games with hidden information, and so forth?
DEMIS HASSABIS: There’s a new version of AlphaZero that we’re going to publish soon that’s even further improved, and as you’ve said, you can think of it as a solution to two-player perfect-information games like chess, Go, shogi, and so on. Of course, the real world is not made up of perfect information, so as you’ve said, the next step is to create systems that can deal with that. We’re already working on that, and one example is our work with the PC strategy game StarCraft, which has a very complicated action space. It’s very complex because you build units, so it’s not static in terms of what pieces you have, like in chess. It’s also real-time, and the game has hidden information, for example, the “fog of war” that obscures onscreen information until you explore that area.
Beyond that, games are just our training domain. We’re not doing all this work just to solve games; we want to build these general algorithms that we can apply to real-world problems.
MARTIN FORD: So far, your focus has primarily been on combining deep learning with reinforcement learning. That’s basically learning by practice, where the system repeatedly attempts something, and there’s a reward function that drives it toward success. I’ve heard you say that you believe that reinforcement learning offers a viable path to general intelligence, that it might be sufficient to get there. Is that your primary focus going forward?
DEMIS HASSABIS: Going forward, yes, it is. I think that technique is extremely powerful, but you need to combine it with other things to scale it. Reinforcement learning has been around for a long time, but it was only used on very small toy problems because it was very difficult for anyone to scale that learning up in any way. In our Atari work, we combined it with deep learning, which handled the processing of the screen and the modeling of the environment you’re in. Deep learning is amazing at scaling, so combining it with reinforcement learning allowed it to scale to these large problems that we’ve now tackled in AlphaGo and DQN—all of these things that people would have told you were impossible 10 years ago.
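To make that combination concrete, here is a bare-bones sketch in the spirit of DQN: a small neural network reads the raw observation and outputs Q-values, epsilon-greedy actions generate experience, and one-step Q-learning targets train the network from a replay buffer. The toy one-dimensional environment is purely illustrative, and the sketch omits refinements the published DQN relied on, such as a separate target network:

```python
import random
import torch
import torch.nn as nn

class ToyEnv:
    """Hypothetical stand-in for an Atari screen: walk along a line
    from position 0, earning +1 for reaching +4 (which ends the episode)."""
    def reset(self):
        self.pos = 0
        return torch.tensor([float(self.pos)])

    def step(self, action):  # action 0 = left, 1 = right
        self.pos = max(-4, min(4, self.pos + (1 if action == 1 else -1)))
        done = self.pos == 4
        return torch.tensor([float(self.pos)]), (1.0 if done else 0.0), done

# Deep learning part: a network mapping the observation to Q-values.
q_net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay, gamma, epsilon = [], 0.99, 0.2
env = ToyEnv()

for episode in range(100):
    state, done = env.reset(), False
    while not done:
        # Reinforcement learning part: act epsilon-greedily, store experience.
        with torch.no_grad():
            greedy = q_net(state).argmax().item()
        action = random.randrange(2) if random.random() < epsilon else greedy
        next_state, reward, done = env.step(action)
        replay.append((state, action, reward, next_state, done))
        state = next_state

        # Learn from a random minibatch: nudge Q(s, a) toward the
        # one-step bootstrapped target r + gamma * max_a' Q(s', a').
        for s, a, r, s2, d in random.sample(replay, min(32, len(replay))):
            with torch.no_grad():
                target = r if d else r + gamma * q_net(s2).max().item()
            loss = (q_net(s)[a] - target) ** 2
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

The division of labor mirrors what Hassabis describes: the network does the perception and generalization, while the reinforcement learning rule supplies the training signal from reward alone.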
I think we proved that first part. The reason we were so confident about it, and why we backed it when we did, is that in my opinion reinforcement learning will become as big as deep learning in the next few years. DeepMind is one of the few companies that takes that seriously because, from the neuroscience perspective, we know that the brain uses a form of reinforcement learning as one of its learning mechanisms: it’s called temporal difference learning, and we know the dopamine system implements it. Your dopamine neurons track the prediction errors your brain is making, and then you strengthen your synapses according to those reward signals. The brain works along these principles, and the brain is our only example of general intelligence, which is why we take neuroscience very seriously here. To us, reinforcement learning must therefore be a viable solution to the problem of general intelligence. It may not be the only one, but from a biologically inspired standpoint, it seems that reinforcement learning is sufficient once you scale it up enough. Of course, there are many technical challenges in doing that, and many of them are unsolved.
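The temporal difference rule he is describing fits in a few lines. In this sketch the state names and numbers are hypothetical; the point is that the prediction error, the quantity dopamine neurons appear to signal, is simply the mismatch between successive predictions:

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
    """One TD(0) step. V maps states to predicted future reward;
    delta is the prediction error, the 'dopamine signal'."""
    delta = r + gamma * V[s_next] - V[s]  # what happened vs. what was predicted
    V[s] += alpha * delta                 # strengthen the prediction accordingly
    return delta

# Hypothetical usage: after observing a transition A -> B with reward 1.0,
# the value estimate for A moves toward the better-informed target.
V = {"A": 0.0, "B": 0.0}
td0_update(V, "A", r=1.0, s_next="B")
print(V)  # {'A': 0.1, 'B': 0.0}
```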
MARTIN FORD: Still, when a child learns things like language or an understanding of the world, it doesn’t really seem like reinforcement learning for the most part. It’s unsupervised learning, as no one’s giving the child labeled data the way we would do with ImageNet. Yet somehow, a young child can learn organically directly from the environment. But it seems to be more driven by observation or random interaction with the environment rather than learning by practice with a specific goal in mind.
DEMIS HASSABIS: A child learns with many mechanisms; it’s not like the brain only uses one. The child gets supervised learning from their parents, teachers, or their peers, and they do unsupervised learning when they’re just experimenting with stuff with no goal in mind. They also do reward learning and reinforcement learning when they do something and get a reward for it.
We work on all three of those, and they’re all going to be needed for intelligence. Unsupervised learning is hugely important, and we’re working on that. The question here is, are there intrinsic motivations that evolution has designed in us that end up being proxies for reward, which then guide the unsupervised learning? Just look at information gain. There is strong evidence showing that gaining information is intrinsically rewarding to your brain.
Another thing would be novelty seeking. We know that seeing novel things releases dopamine in the brain, so that means novelty is intrinsically rewarding. In a sense, it could be that these intrinsic motivations that we have chemically in our brains are guiding what seems to us to be unstructured play or unsupervised learning. If the brain finds discovering information and structure rewarding in itself, then that’s a hugely useful motivation for unsupervised learning; you’re just going to try and find structure, no matter what, and it seems like the brain is doing that.
Depending on what you count as the reward, some of these things could be intrinsic rewards that guide the unsupervised learning. I find it useful to think about intelligence in the framework of reinforcement learning.
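One simple way to make that idea concrete in a reinforcement learning agent is to add a novelty bonus to the external reward, so that rarely visited states are intrinsically rewarding. The count-based bonus below is an assumption for illustration, one common formulation from the exploration literature, not a description of DeepMind’s specific mechanism:

```python
from collections import defaultdict
from math import sqrt

visit_counts = defaultdict(int)

def shaped_reward(state, external_reward, beta=0.5):
    """Task reward plus a novelty bonus that decays with familiarity."""
    visit_counts[state] += 1
    novelty_bonus = beta / sqrt(visit_counts[state])  # shrinks as the state becomes familiar
    return external_reward + novelty_bonus

print(shaped_reward("room_1", 0.0))  # 0.5: first visit, maximally novel
print(shaped_reward("room_1", 0.0))  # ~0.35: already less interesting
```

An agent maximizing this shaped reward will seek out states it has rarely seen, which is one computational reading of the dopamine-driven novelty seeking described above.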
MARTIN FORD: One thing that’s obvious from listening to you is that you combine a deep interest in both neuroscience and computer science. Is that combined approach true for DeepMind as a whole? How does the company integrate knowledge and talent from those two areas?
DEMIS HASSABIS: I’m definitely right in the middle of those two fields, as I’m equally trained in both. I would say DeepMind is clearly more skewed towards machine learning; however, our biggest single group here at DeepMind is made up of neuroscientists, led by Matt Botvinick, an amazing neuroscientist and professor from Princeton. We take it very seriously.
The problem with neuroscience is that it’s a massive field in itself, way bigger than machine learning. If you as a machine-learning person wanted to quickly find out which parts of neuroscience would be useful to you, you’d be stuck. There’s no book that’s going to tell you that; there’s just a mass of research work, and you have to figure out for yourself how to parse that information and find the nuggets that could be useful from an AI perspective. Most of that neuroscience research is being undertaken for medical research, psychology, or for neuroscience itself. Neuroscientists aren’t designing those experiments thinking they would be useful for AI. 99% of that literature is not useful to you as an AI researcher, so you have to get really good at training yourself to navigate it and pick out the right influences, and the right level of influence, for each of them.
Quite a lot of people talk about neuroscience inspiring AI work, but I don’t think a lot of them really have concrete ideas on how to do that. Let’s explore two extremes. One is you could try and reverse-engineer the brain, which is what quite a lot of people are attempting to do in their approach to AI, and I mean literally reverse-engineer the brain on a cortical level, a prime example being the Blue Brain Project.
MARTIN FORD: That’s being directed by Henry Markram, right?
DEMIS HASSABIS: Right, and he’s literally trying to reverse-engineer cortical columns. It may be interesting neuroscience but, in my view, that is not the most efficient path towards building AI because it’s too low-level. What we’re interested in at DeepMind is a systems-level understanding of the brain and the algorithms the brain implements, the capabilities it has, the functions it has, and the representations it uses.
DeepMind is not looking at the exact specifics of the wetware or how the biology actually instantiates it; we can abstract all of that away. That makes sense, because why would you imagine that an in-silico system would have to mimic an in-carbo system, when there are completely different strengths and weaknesses to those two systems? In silicon, there’s no reason why you would want to copy the exact implementation details of, say, a hippocampus. On the other hand, I am very interested in the computations and the functions that the hippocampus has, like episodic memory, navigating in space, and the grid cells it uses. These are all systems-level influences from neuroscience, and they showcase our interest in the functions, representations, and algorithms that the brain uses, not the exact details of implementation.
MARTIN FORD: You often hear the analogy that airplanes don’t flap their wings. Airplanes achieve flight, but don’t precisely mimic what birds do.
DEMIS HASSABIS: That’s a great example. What we’re doing at DeepMind is the equivalent of trying to understand aerodynamics by looking at birds, and then abstracting the principles of aerodynamics and building a fixed-wing plane.
Of course, the people who built planes were inspired by birds. The Wright Brothers knew that heavier-than-air flight was possible because they’d seen birds. Before the airfoil was invented, people tried without success to use deformable wings that were more like a gliding bird’s. What you’ve got to do is look at nature and then try to abstract away the things that are not important for the phenomenon you’re after: in that case, flying; in our case, intelligence. But that doesn’t mean nature didn’t help your search process.
My point is that you don’t know in advance what the outcome looks like. If you’re trying to build something artificial like intelligence and it doesn’t work straight away, how do you know that you’re looking in the right place? Is your 20-person team wasting their time, or should you push a bit harder, and maybe you’ll crack it next year? Having neuroscience as a guide allows me to make much bigger, much stronger bets in situations like that.
A great example of this is reinforcement learning. I know reinforcement learning has to be scalable because the brain does scale it. If you didn’t know that the brain implemented reinforcement learning and your system wasn’t scaling, how would you know, on a practical level, whether you should spend another two years on it? It’s very important to narrow down the search space that you’re exploring as a team or a company, and I think that’s a meta-point that is often missed by people who ignore neuroscience.
MARTIN FORD: I think you’ve made the point that the work in AI could also inform research being done in neuroscience. DeepMind just came out with a result on grid cells used in navigation, and it sounds like you’ve got them to emerge organically in a neural network. In other words, the same basic structure naturally arises in both the biological brain and in artificial neural networks, which seems pretty remarkable.
DEMIS HASSABIS: I’m very excited about that because it’s one of our biggest breakthroughs of the last year. Edvard Moser and May-Britt Moser, who discovered grid cells and won the Nobel Prize for their work, both wrote to us very excited about this finding, because it means that, possibly, these grid cells are not just a function of the wiring of the brain but may actually be the most optimal way of representing space in a computational sense. That’s a huge and important finding for neuroscientists, because what they’re speculating now is that maybe the brain isn’t necessarily hardwired to create grid cells. Perhaps if you have that structure of neurons and you just expose them to space, grid cells are the most efficient coding any system would come up with.
We’ve also recently created a whole new theory of how the prefrontal cortex might work, based on looking at our AI algorithms and what they were doing, and then having our neuroscientists translate that back to the brain.
I think this is just the beginning: we’ll see many more examples of AI ideas and algorithms inspiring us to look at the brain in a different way, to look for new things in the brain, or to use AI as an analysis tool for testing our ideas about how the brain might work.
As a neuroscientist, I think that the journey we’re on of building neuroscience-inspired AI is one of the best ways to address some of the complex questions we have about the brain. If we build an AI system that’s based on neuroscience, we can then compare it to the human brain and maybe start gleaning some information about its unique characteristics. We could start shedding light on some of the profound mysteries of the mind like the nature of consciousness, creativity, and dreaming. I think that comparing the brain to an algorithmic construct could be a way to understand that.
MARTIN FORD: It sounds like you think there could be some discoverable general principles of intelligence that are substrate-independent. To return to the flight analogy, you might call it “the aerodynamics of intelligence.”
DEMIS HASSABIS: That’s right, and if you extract that general principle, then it must be useful for understanding the particular instance of the human brain.
MARTIN FORD: Can you talk about some of the practical applications that you imagine happening within the next 10 years? How are your breakthroughs going to be applied in the real world in the relatively near future?
DEMIS HASSABIS: We’re already seeing lots of things in practice. All over the world people are interacting with AI today through machine translation, image analysis, and computer vision.
DeepMind has started working on quite a few things, like optimizing the energy used in Google’s data centers. We’ve worked on WaveNet, the very human-like text-to-speech system that’s now in the Google Assistant on all Android-powered phones. We use AI in recommendation systems in Google Play, and even in behind-the-scenes features like saving battery life on your Android phone. These are things that everyone uses every single day. We’re finding that because they’re general algorithms, they’re coming up all over the place, so I think that’s just the beginning.
What I’m hoping will come through next are the collaborations we have in healthcare. An example of this is our work with Moorfields, the famous UK eye hospital, where we’re looking at diagnosing macular degeneration from retinal scans. We published the results from the first phase of our joint research partnership in Nature Medicine, and they show that our AI system can quickly interpret eye scans from routine clinical practice with unprecedented accuracy. It can also correctly recommend how patients should be referred for treatment for over 50 sight-threatening eye diseases, as accurately as world-leading expert doctors.
There are other teams doing similar work for diseases like skin cancer. Over the next five years, I think healthcare will be one of the biggest areas to see a benefit from the work we’re all doing in the field.
What I’m really personally excited about, and this is something I think we’re on the cusp of, is using AI to actually help with scientific problems. We’re working on things like protein folding, but you can imagine its use in material design, drug discovery, and chemistry. People are using AI for everything from analyzing data from the Large Hadron Collider to searching for exoplanets. There are a lot of really cool areas with masses of data in which we as human experts find it hard to identify the structure, and I think this kind of AI is going to be used for that more and more. I’m hoping that over the next 10 years this will accelerate the pace of scientific breakthroughs in some really fundamental areas.
MARTIN FORD: What does the path to AGI look like? What would you say are the main hurdles that will have to be surmounted before we have human-level AI?
DEMIS HASSABIS: From the beginning of DeepMind we identified some big milestones, such as the learning of abstract, conceptual knowledge, and then using that for transfer learning. Transfer learning is where you usefully transfer your knowledge from one domain to a new domain that you’ve never seen before; it’s something humans are amazing at. If you give me a new task, I won’t be terrible at it out of the box, because I’ll bring some knowledge from similar or structurally related things and can start dealing with it straight away. That’s something that computer systems are pretty terrible at, because they require lots of data and they’re very inefficient. We need to improve that.
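In today’s deep learning practice, the most common small-scale version of this is fine-tuning: reusing features learned on a source domain and retraining only a new output layer on the target task. The sketch below uses a standard torchvision model purely for illustration; it is one narrow instance of transfer, far short of the human-level conceptual transfer being described here:

```python
import torch.nn as nn
from torchvision import models

# Load a network whose weights encode knowledge from a source domain
# (ImageNet classification) and freeze that transferred knowledge.
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh head for a hypothetical
# 5-class target task; only these new weights will now be trained.
model.fc = nn.Linear(model.fc.in_features, 5)
```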
Another milestone is that we need to get better at language understanding, and another is replicating things that old AI systems were able to do, like symbolic manipulation, but using our new techniques. We’re a long way from all of those, but they would be really big milestones if they were to happen. If you look at where we were in 2010, just eight years ago, we’ve already achieved some big things that were milestones to us, like AlphaGo, but there are more to come. So those would be the big ones for me, concepts and transfer learning.
MARTIN FORD: When we do achieve AGI, do you imagine intelligence being coupled with consciousness? Is it something that would automatically emerge, or is consciousness a completely separate thing?
DEMIS HASSABIS: That’s one of the interesting questions that this journey will address. I don’t know the answer to it at the moment, but that’s one of the very exciting things about the work that both we and others are doing in this field.
My hunch currently is that consciousness and intelligence are double-dissociable: you can have intelligence without consciousness, and you can have consciousness without human-level intelligence. I’m pretty sure smart animals have some level of consciousness and self-awareness, but they’re obviously not that intelligent, at least compared to humans, and I can imagine building machines that are phenomenally intelligent by some measures but would not feel conscious to us in any way at all.
MARTIN FORD: Like an intelligent zombie, something that has no inner experience.
DEMIS HASSABIS: Something that wouldn’t feel sentient in the way we feel about other humans. Now that’s a philosophical question, because the problem is, as we see with the Turing test, how would we know if it was behaving in the same way as we were? The Occam’s razor explanation is to say that if you’re exhibiting the same behavior as I exhibit, and you’re made from the same stuff as I’m made from, and I know what I feel, then I can assume you’re feeling the same thing as me. Why would you not?
What’s interesting with a machine is that it could exhibit the same behavior as a human, if we designed it like that, but it’s on a different substrate. If you’re not on the same substrate, then that Occam’s razor idea doesn’t hold as strongly. It may be that machines are conscious in some sense, but we don’t feel it in the same way, because we don’t have that additional assumption to rely on. If you break down why we think each of us is conscious, I think that’s a very important assumption: if you’re operating on the same substrate as me, why would it feel different on your substrate?
MARTIN FORD: Do you believe machine consciousness is possible? There are some people that argue consciousness is fundamentally a biological phenomenon.
DEMIS HASSABIS: I am actually open-minded about that, in the sense that I don’t think we know. It could well turn out that there’s something very special about biological systems. There are people like Sir Roger Penrose who think it’s to do with quantum consciousness, in which case a classical computer wouldn’t have it, but it’s an open question. That’s why I think the path we’re on will shed some light on it, because I actually think we don’t know whether that’s a limit or not. Either way, it will be fascinating, because it would be pretty amazing if it turned out that you couldn’t build consciousness at all on a machine. That would tell us a lot about what consciousness is and where it resides.
MARTIN FORD: What about the risks and the downsides associated with AGI? Elon Musk has talked about “summoning the demon” and an existential threat. There’s also Nick Bostrom, who I know is on DeepMind’s advisory board and has written a lot on this idea. What do you think about these fears? Should we be worried?
DEMIS HASSABIS: I’ve talked to them a lot about these things. As always, the soundbites seem extreme but it’s a lot more nuanced when you talk to any of these people in person.
My view on it is that I’m in the middle. The reason I work on AI is because I think it’s going to be the most beneficial thing to humanity ever. I think it’s going to unlock our potential within science and medicine in all sorts of ways. As with any powerful technology, and AI could be especially powerful because it’s so general, the technology itself is neutral. It depends on how we as humans decide to design and deploy it, what we decide to use it for, and how we decide to distribute the gains.
There are a lot of complications there, but those are more like geopolitical issues that we need to solve as a society. A lot of what Nick Bostrom worries about are the technical questions we have to get right, such as the control problem and the value alignment problem. My view is that on those issues we do need a lot more research because we’ve only just got to the point now where there are systems that can even do anything interesting at all.
We’re still at a very nascent stage. Five years ago, you might as well have been talking about philosophy, because no one had anything that was interesting. We’ve now got AlphaGo and a few other interesting technologies that are still very new, but we’re at the point where we should start reverse-engineering those things and experimenting on them by building visualization and analysis tools. We’ve got teams doing this to better understand what these black-box systems are doing and how we can interpret their behavior.
MARTIN FORD: Are you confident that we’ll be able to manage the risks that come along with advanced AI?
DEMIS HASSABIS: Yes, I’m very confident, and the reason is that we’re at the inflection point where we’ve just got these things working, and not that much effort has yet gone into reverse engineering them and understanding them, and that’s happening now. Over the next decade, most of these systems won’t be black-box in the sense that we mean now. We’ll have a good handle on what’s going on with these systems, and that will lead to a better understanding of how to control the systems and what their limits are mathematically, and then that could lead to best practices and protocols.
I’m pretty confident that path will address a lot of the technical issues that people like Nick Bostrom are worried about, like the collateral consequences of goals not being set correctly. To make advances there, my view has always been that the best science occurs when theory and practice—empirical work—go hand in hand, and in this subject and field, the empirical experiments are engineering.
A lot of the fears held by some of the people who aren’t working at the coalface of this technology won’t hold up once we have a much better understanding of these systems. That’s not to say there’s nothing to worry about, because I do think we should be thinking about these things. There are plenty of near-term questions to resolve as well, like how we test these systems as we deploy them in products. And some of the long-term problems are so hard that we want to be thinking about them in the time we have right now, well ahead of when we’re going to need the answers.
We also need to be able to inform the research that has to be done to come up with solutions to some of those questions posed by people like Nick Bostrom. We are actively thinking about these problems and we’re taking them seriously, but I’m a big believer in human ingenuity’s ability to overcome these problems if we put enough brainpower on them collectively around the world.
MARTIN FORD: What about the risks that will arise long before AGI is achieved? For example, autonomous weapons. I know you’ve been very outspoken about AI being used in military applications.
DEMIS HASSABIS: These are very important questions. At DeepMind, we start from the premise that AI applications should remain under meaningful human control and be used for socially beneficial purposes. That means banning the development and deployment of fully autonomous weapons, since ensuring that weapons are used in ways that are necessary and proportionate requires a meaningful level of human judgment and control. We’ve expressed this view in a number of ways, including signing an open letter and supporting the Future of Life Institute’s pledge on the subject.
MARTIN FORD: It’s worth pointing out that even though chemical weapons are in fact banned, they have still been used. All of this requires global coordination, and it seems that rivalries between countries could push things in the other direction. For example, there is a perceived AI race with China. They do have a much more authoritarian system of government. Should we worry that they will gain an advantage in AI?
DEMIS HASSABIS: I don’t think it’s a race in that sense, because we know all the researchers and there’s a lot of collaboration. We publish papers openly; Tencent, for example, has created an AlphaGo clone, and I know many of the researchers there. I do think that if there’s going to be coordination, and perhaps even regulation and best practices down the road, it’s important that it’s international and that the whole world adopts it. It doesn’t work if some countries don’t adopt those principles. However, that’s not an issue that’s unique to AI. There are many other problems we’re already grappling with that are a question of global coordination and organization—the obvious one being climate change.
MARTIN FORD: What about the economic impact of all of this? Is there going to be a big disruption of the job market and perhaps rising unemployment and inequality?
DEMIS HASSABIS: I think there’s been very minimal disruption so far from AI; it’s just been part of the technology disruption in general. AI is going to be hugely transformative, though. Some people believe that it’s going to be on the scale of the Industrial Revolution or electricity, while other people believe it’s going to be in a class of its own above that, and that, I think, remains to be seen. Maybe it will mean we’re in a world of abundance, where there are huge productivity gains everywhere; nobody knows for sure. The key thing is to make sure those benefits are shared with everyone.
I think that’s the key thing, whether it’s through universal basic income or done in some other form. There are lots of economists debating these things, and we need to think very carefully about how everyone in society will benefit from those presumably huge productivity gains, which must be coming; otherwise it wouldn’t be so disruptive.
MARTIN FORD: Yes, that’s basically the argument that I’ve been making, that it’s fundamentally a distributional problem and that a large part of our population is in danger of being left behind. But it is a staggering political challenge to come up with a new paradigm that will create an economy that works for everyone.
DEMIS HASSABIS: Right. Whenever I meet an economist, I think they should be working quite hard on this problem, but it’s difficult for them because they can’t really envisage how things could become so productive; people have been talking about massive productivity gains for 100 years.
My dad studied economics at university, and he said that in the late 1960s a lot of people were seriously asking: “What is everyone going to do in the 1980s when we have so much abundance and we don’t have to work?” That, of course, never happened, in the 1980s or since, and we’re working harder than ever. I think a lot of people are not sure it’s ever going to be like that, but if it does end up that we have a lot of extra resources and productivity, then we’ve got to distribute them widely and equitably, and if we do that, I don’t see a problem with it.
MARTIN FORD: Is it safe to say that you’re an optimist? I’d guess that you see AI as transformative and that it’s arguably going to be one of the best things that’s ever happened to humanity. Assuming, of course, that we manage it wisely?
DEMIS HASSABIS: Definitely, and that’s why I’ve worked towards it my whole life. All of the things I’ve been doing that we covered in the first part of our discussion have been building towards achieving that. I would be quite pessimistic about the way the world’s going if AI were not going to come along. I actually think there are a lot of problems in the world that require better solutions, like climate change, Alzheimer’s, and water purification; I could give you a list of things that are going to get worse over time. What worries me is that I don’t see how else we’re going to get the global coordination and the excess resources or productivity to solve them. But ultimately, I’m optimistic about the world, because a transformative technology like AI is coming.
DEMIS HASSABIS is a former child chess prodigy who finished his high school exams two years early before coding the multi-million-selling simulation game Theme Park at age 17. Following graduation from Cambridge University with a Double First in Computer Science, he founded the pioneering videogames company Elixir Studios, producing award-winning games for global publishers such as Vivendi Universal. After a decade of experience leading successful technology startups, Demis returned to academia to complete a PhD in cognitive neuroscience at University College London, followed by postdoctoral research at MIT and Harvard. His research into the neural mechanisms underlying imagination and planning was listed among the top ten scientific breakthroughs of 2007 by the journal Science.
Demis is a five-time World Games Champion and a Fellow of the Royal Society of Arts and the Royal Academy of Engineering, where he won the Academy’s Silver Medal. In 2017 he was named in the Time 100 list of the world’s most influential people, and in 2018 he was awarded a CBE for services to science and technology. He has been elected a Fellow of the Royal Society, is a recipient of the Society’s Mullard Award, and has been awarded an Honorary Doctorate by Imperial College London.
Demis co-founded DeepMind along with Shane Legg and Mustafa Suleyman in 2010. DeepMind was acquired by Google in 2014 and is now part of Alphabet. In 2016 DeepMind’s AlphaGo system defeated Lee Sedol, arguably the world’s best player of the ancient game of Go. That match is chronicled in the documentary film AlphaGo (https://www.alphagomovie.com/).