We don’t have anything anywhere near as good as an insect, so I’m not afraid of superintelligence showing up anytime soon.
RODNEY BROOKS, CHAIRMAN, RETHINK ROBOTICS
Rodney Brooks is widely recognized as one of the world’s foremost roboticists. Rodney co-founded iRobot Corporation, an industry leader in both consumer robotics (primarily the Roomba vacuum cleaner) and military robots, such as those used to defuse bombs in the Iraq war (iRobot divested its military robotics division in 2016). In 2008, Rodney co-founded a new company, Rethink Robotics, focused on building flexible, collaborative manufacturing robots that can safely work alongside human workers.
MARTIN FORD: While at MIT, you started the iRobot company, which is now one of the world’s biggest distributors of commercial robots. How did that come about?
RODNEY BROOKS: I started iRobot back in 1990 with Colin Angle and Helen Greiner. At iRobot we had a run of 14 failed business models and didn’t get a successful one until 2002, at which point we hit on two business models that worked in the same year. The first one was robots for the military. They were deployed in Afghanistan to go into caves to see what was in them. Then, during the Afghanistan and Iraq conflicts, around 6,500 of them were used to deal with roadside bombs.
At the same time in 2002, we launched the Roomba, which was a vacuum cleaning robot. In 2017, the company recorded full-year revenue of $884 million and has, since launch, shipped over 20 million units. I think it’s fair to say the Roomba is the most successful robot ever in terms of numbers shipped, and that was really based on the insect-level intelligence that I had started developing at MIT around 1984.
When I left MIT in 2010, I stepped down completely and started a company, Rethink Robotics, where we build robots that are used in factories throughout the world. We’ve shipped thousands of them to date. They’re different from conventional industrial robots in that they’re safe to be with, they don’t have to be caged, and you can show them what you want them to do.
In the latest version of our software, Intera 5, when you show the robots what you want them to do, they actually write a program. It's a graphical program that represents behavior trees, which you can then manipulate if you want, but you don't have to. Since its launch, more sophisticated companies have wanted to be able to get in and tweak exactly what the robot is doing after it has been shown what to do, but you don't have to know what the underlying representation is. These robots use force feedback, they use vision, and they operate in real environments with real people around them 24 hours a day, seven days a week, 365 days a year, all over the world. I think they are certainly the most advanced artificial intelligence robots currently in mass deployment.
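Intera 5's internal representation is proprietary, but the behavior-tree idea itself is easy to illustrate. Below is a minimal, hypothetical sketch in Python, not Rethink's actual code: sequence nodes run children in order, selector nodes provide fallbacks, and the task the user "shows" the robot becomes a tree that can later be inspected and tweaked. The node names and the toy task are invented for illustration.

```python
class Node:
    def tick(self):
        raise NotImplementedError

class Sequence(Node):
    """Succeeds only if every child succeeds, in order."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        return all(child.tick() for child in self.children)

class Selector(Node):
    """Tries children in order; succeeds on the first child that succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        return any(child.tick() for child in self.children)

class Action(Node):
    """Leaf node wrapping a primitive robot action (stubbed with lambdas here)."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return self.fn()

# A toy pick-and-place task expressed as a tree a user could later edit.
tree = Sequence(
    Action("move_to_part", lambda: True),
    Selector(
        Action("grasp_with_vision", lambda: False),         # fails, so...
        Action("grasp_with_force_feedback", lambda: True),  # ...fall back
    ),
    Action("place_in_fixture", lambda: True),
)
print(tree.tick())  # True: the fallback grasp succeeded
```

The point of the representation is exactly what Brooks describes: the same structure serves both as the executable program and as something a non-programmer can be shown and allowed to rearrange.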
MARTIN FORD: How did you come to be at the forefront of robotics and AI? Where does your story begin?
RODNEY BROOKS: I grew up in Adelaide, South Australia, and in 1962 my mother found two American How and Why Wonder Books. One was called Electricity and the other, Robots and Electronic Brains. I was hooked, and I spent the rest of my childhood using what I’d learned from the books to explore and try to build intelligent computers, and ultimately robots.
I did an undergraduate degree in mathematics and started a PhD in artificial intelligence in Australia but realized there was a little problem in that there were no computer science departments or artificial intelligence researchers in the country. I applied to the three places that I’d heard of that did artificial intelligence, MIT (Massachusetts Institute of Technology), Carnegie Mellon (Pittsburgh, USA), and Stanford University. I got rejected by MIT but got accepted to Carnegie Mellon and Stanford, starting in 1977. I chose Stanford because it was closer to Australia.
My PhD at Stanford was on computer vision with Tom Binford. Following on from that, I was at Carnegie Mellon for a postdoc, then onto another postdoc at MIT, finally ending back at Stanford in 1983 as a member of the tenure-track faculty. In 1984 I moved back to MIT as a member of the faculty, where I stayed for 26 years.
While at MIT as a postdoc, I started working more on intelligent robots. By the time I moved back to MIT in 1984 I realized just how little progress we’d made in modeling robot perception. I got inspired by insects with a hundred thousand neurons outperforming any robot we had by fantastic amounts. I then started to try and model intelligence on insect intelligence, and that’s what I did for the first few years.
I then ran the Artificial Intelligence Lab at MIT that Marvin Minsky had founded. Over time, that merged with the Laboratory of Computer Science and formed CSAIL, the Computer Science and Artificial Intelligence Lab, which is, today, still the largest lab at MIT.
MARTIN FORD: Looking back, what would you say is the highlight of your career with either robots or AI?
RODNEY BROOKS: The thing I’m proudest of was in March 2011 when the earthquake hit Japan and the tidal wave knocked out the Fukushima Nuclear Power Plant. About a week after it happened, we got word that the Japanese authorities were really having problems in that they couldn’t get any robots into the plant to figure out what was going on. I was still on the board of iRobot at that time, and we shipped six robots in 48 hours to the Fukushima site and trained up the power company tech team. As a result, they acknowledged that the shutdown of the reactors relied on our robots being able to do things for them that they on their own were unable to do.
MARTIN FORD: I remember that story about Japan. It was a bit surprising because Japan is generally perceived as being on the very leading edge of robotics, and yet they had to turn to you to get working robots.
RODNEY BROOKS: I think there's a real lesson there. The real lesson is that the press hyped things up to look far more advanced than they really were. Everyone thought Japan had incredible robotic capabilities, and this was driven by an automobile company or two, when really what they had was great videos and very little grounding in reality.
Our robots had been in war zones for nine years being used in the thousands every day. They weren’t glamorous, and the AI capability would be dismissed as being almost nothing, but that’s the reality of what’s real and what is applicable today. I spend a large part of my life telling people that they are being delusional when they see videos and think that great things are around the corner, or that there will be mass unemployment tomorrow due to robots taking over all of our jobs.
At Rethink Robotics, I say, if there was no lab demo 30 years ago, then it’s too early to think that we could make it into a practical product now. That’s how long it takes from a lab demo to a practical product. It’s certainly true of autonomous driving; everyone’s really excited about autonomous driving now. People forget that the first automobile that drove autonomously on a freeway at over 55 miles an hour for 10 miles was in 1987 near Munich. The first time a car drove across the US, hands off the wheel, feet off the pedals coast to coast, was No Hands Across America in 1995. Are we going to see mass-produced self-driving cars tomorrow? No. It takes a long, long, long time to develop something like this, and I think people are still overestimating how quickly this technology will be deployed.
MARTIN FORD: It sounds to me like you don’t really buy into the Kurzweil Law of Accelerating Returns. The idea that everything is moving faster and faster. I get the feeling that you think things are moving at the same pace?
RODNEY BROOKS: Deep learning has been fantastic, and people from outside the field come in and say, wow. We're used to exponentials because we had exponentials in Moore's Law, but Moore's Law is slowing down because you can no longer halve the feature size. What it's leading to, though, is a renaissance of computer architecture. For 50 years, you couldn't afford to do anything out of the ordinary because the other guys would overtake you, just because of Moore's Law. Now we're starting to see a flourishing of computer architecture, and I think it's a golden era for computer architecture because of the end of Moore's Law. That gets back to Ray Kurzweil and the people who saw those exponentials and think that everything is exponential.
Certain things are exponential, but not everything. If you read Gordon Moore’s 1965 paper, The Future of Integrated Electronics, where Moore’s Law originated from, the last part was devoted to what the law doesn’t apply to. Moore said it doesn’t apply to power storage, for example, where it’s not about the information abstraction of zeroes and ones, it’s about bulk properties.
Take green tech as an example. A decade ago, venture capitalists in Silicon Valley got burned because they thought Moore’s Law was everywhere, and that it would apply to green tech. No, that’s not how it works. Green tech relies on bulk, it relies on energy, it’s not something that is halve-able physically and you still have the same information content.
Getting back to deep learning, people think because one thing happened and then another thing happened, it’s just going to get better and better. For deep learning, the fundamental algorithm of backpropagation was developed in the 1980s, and those people eventually got it to work fantastically after 30 years of work. It was largely written off in the 1980s and the 1990s for lack of progress, but there were 100 other things that were also written off at the same time. No one predicted which one out of those 100 things would pop. It happened to be that backpropagation came together with a few extra things, such as clamping, more layers, and a lot more computation, and provided something great. You could never have predicted that backpropagation and not one of those 99 other things were going to pop through. It was by no means inevitable.
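The "fundamental algorithm" Brooks refers to fits in a few lines. Below is a minimal sketch of 1980s-style backpropagation training a tiny two-layer network on the XOR problem, using plain NumPy; the network size, learning rate, and iteration count are arbitrary illustrative choices, and the modern "pop" he describes added many more layers, vastly more computation, and extra tricks on top of this same loop.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))
    # Backward pass: propagate the error derivative layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent update.
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Everything here was known in the 1980s; what was unpredictable, as Brooks says, is that this particular algorithm, scaled up, would be the one to pop.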
Deep learning has had great success, and it will have more success, but it won’t go on forever providing more or greater success. It has limits. Ray Kurzweil is not going to be uploading his consciousness any time soon. It’s not how biological systems work. Deep learning will do some things, but biological systems rely on hundreds of algorithms, not just one algorithm. We will need hundreds more algorithms before we can make that progress, and we cannot predict when they will pop. Whenever I see Kurzweil I remind him that he is going to die.
MARTIN FORD: That’s mean.
RODNEY BROOKS: I'm going to die too. I have no doubt about it, but he doesn't like to have it pointed out because he's one of these techno-religion people. There are different versions of techno-religion. There are the life extension companies being started by the billionaires in Silicon Valley, and then there's the upload-yourself-to-a-computer person, like Ray Kurzweil. I think that probably for a few more centuries, we're still mortal.
MARTIN FORD: I tend to agree with that. You mentioned self-driving cars, let me just ask you specifically how fast you see that moving? Google supposedly has real cars with nobody inside them on the road now in Arizona.
RODNEY BROOKS: I haven't seen the details of that yet, but it has taken a lot longer than anyone thought. Both Mountain View (California) and Phoenix (Arizona) are different sorts of cities from much of the rest of the US. We may see some demos there, but it's going to be a few years before there is a practical mobility-as-a-service operation that turns out to be anything like profitable. By profitable, I mean making money almost at the rate at which Uber is losing money, which was $4.5 billion last year.
MARTIN FORD: The general thought is that since Uber loses money on every ride, if they can’t go autonomous it’s not a sustainable business model.
RODNEY BROOKS: I just saw a story this morning, saying that the median hourly wage of an Uber driver is $3.37, so they’re still losing money. That’s not a big margin to get rid of and replace with those expensive sensors required for autonomous driving. We haven’t even figured out what the practical solution is for self-driving cars. The Google cars have piles of expensive sensors on the roof, and Tesla tried and failed with just built-in cameras. We will no doubt see some impressive demonstrations and they will be cooked. We saw that with robots from Japan, those demonstrations were cooked, very, very cooked.
MARTIN FORD: You mean faked?
RODNEY BROOKS: Not faked, but there’s a lot behind the curtain that you don’t see. You infer, or you make generalizations about what’s going on, but it’s just not true. There’s a team of people behind those demonstrations, and there will be teams of people behind the self-driving demonstrations in Phoenix for a long time, which is a long way from it being real.
Also, a place like Phoenix is different from where I live in Cambridge, Massachusetts, where it’s all cluttered one-way streets. This raises questions, such as where does the driving service pick you up in my neighborhood? Does it pick you up in the middle of the road? Does it pull into a bus lane? It’s usually going to be blocking the road, so it’s got to be fast, people will be tooting horns at them, and so on. It’s going to be a while before fully autonomous systems can operate in that world, so I think even in Phoenix we’re going to see designated pickup and drop-off places for a long time, they won’t be able to just slot nicely into the existing road network.
We've started to see Uber rolling out designated pick-up spots for their services. They now have a new system, which they were trialing in San Francisco and Boston and have now expanded to six cities, where you can stand in line at an Uber rank with other people, getting cold and wet while waiting for your car. We're imagining self-driving cars are going to be just like the cars of today except with no driver. No, there are going to be transformations in how they're used.
Our cities got transformed by cars when they first came along, and we’re going to need a transformation of our cities for this technology. It’s not going to be just like today but with no drivers in the cars. That takes a long time, and it doesn’t matter how much of a fanboy you are in Silicon Valley, it isn’t going to happen quickly.
MARTIN FORD: Let’s speculate. How long will it take to have something like what we have with Uber today, a mass driverless product where you could be in Manhattan or San Francisco and it will pick you up somewhere and take you to another place you specify?
RODNEY BROOKS: It's going to come in steps. The first step may be that you walk to a designated pick-up place where the cars wait. It's like picking up a Zipcar (an American car-sharing scheme) today: there are designated parking spots for Zipcars. That will come earlier than the service I currently get from an Uber, where they pull up and double-park right outside my house. At some point, I don't know whether it is going to be in my lifetime, we'll see a lot of self-driving cars moving around our regular cities, but it's going to be decades in the making, and there are going to be transformations required that we haven't quite figured out yet.
For instance, if you’re going to have self-driving cars everywhere, how do you refuel them or recharge them? Where do they go to recharge? Who plugs them in? Well, some startups have started to think about how fleet management systems for electric self-driving cars might work. They will still require someone to do the maintenance and the normal daily operations. A whole bunch of infrastructure like that would have to come about for autonomous vehicles to be a mass product, and it’s going to take a while.
MARTIN FORD: I’ve had other estimates more in the range of five years until something roughly the equivalent to Uber is ready. I take it that you think that’s totally unrealistic?
RODNEY BROOKS: Yes, that's totally unrealistic. We might get to see certain aspects of it, but not the equivalent. It's going to be different, and there are a whole bunch of new companies and new operations that have to support it that haven't appeared yet. Let's start with the fundamentals. How are you going to get in the car? How's it going to know who you are? How do you tell it you've changed your mind mid-ride and want to go to a different location? Probably with speech. Amazon Alexa and Google Home have shown us how good speech recognition is, so I think we will expect the speech to work.
Let's look at the regulatory system. What can you tell the car to do? What can you tell the car to do if you don't have a driver's license? What can a 12-year-old, who's been put in the car by their parents to go to soccer practice, tell the car to do? Does the car take voice commands from 12-year-olds, or does it not listen to them? There's an incredible number of practical and regulatory problems that people have not been talking about that remain to be solved. At the moment, you can put a 12-year-old in a taxi and it will take them somewhere. That isn't going to happen for a long time with self-driving cars.
MARTIN FORD: Let’s go back to one of your earlier comments on your previous research into insects. That’s interesting because I’ve often thought that insects are very good biological robots. I know you’re no longer a researcher yourself, but I was wondering what’s currently happening in terms of building a robot or an intelligence that begins to approach what an insect is capable of, and how does that influence our steps toward superintelligence?
RODNEY BROOKS: Simply put, we don't have anything anywhere near as good as an insect, so I'm not afraid of superintelligence showing up anytime soon. We can't replicate the learning capabilities of insects from only a small number of unsupervised examples. We can't achieve the resilience of an insect in being able to adapt in the world. We certainly can't replicate the mechanics of an insect, which are amazing. No one has anything that approaches an insect's level of intent. We have great models that can look at something, classify it, and even put a label on it in certain cases, but that's very different from even the intelligence of an insect.
MARTIN FORD: Think back to the ‘90s and the time you started iRobot, do you think since then robotics has met or even exceeded your expectations, or has it been disappointing?
RODNEY BROOKS: When I came to the United States in 1977, I was really interested in robots and ended up working on computer vision. There were three mobile robots in the world at that point. One of those robots was at Stanford, where Hans Moravec would run experiments to get the robot to move 60 feet across a large room in six hours, another one was at NASA’s Jet Propulsion Laboratory (JPL), and the last was at the Laboratory for Analysis and Architecture of Systems (LAAS) in Toulouse, France.
There were three mobile robots in the world. iRobot now ships millions of mobile robots per year, so from the point of view of how far that’s come, I’m pretty happy. We made it big and we’ve moved a long, long way. The only reason that those advances in robotics haven’t been a bigger story is because in that same time frame we’ve gone from room-size mainframe computers to having billions of smartphones throughout the world.
MARTIN FORD: Moving on from insects, I know you’ve been working on creating robotic hands. There have been some amazing videos of robotic hands from various teams. Can you let me know how that field is progressing?
RODNEY BROOKS: Yes, I wanted to differentiate that mobile commercial robot work that I was doing at iRobot from what I was doing with my students at MIT, so my research at MIT changed from insects to humanoids and as a result, I started to work there with robot arms. That work is progressing slowly. There are various exciting things happening in lab demos, but they’re focusing on one particular task, which is very different from the more general way in which we operate.
MARTIN FORD: Is that slow progress due to a hardware or a software problem, and is it the mechanics of it or just the control?
RODNEY BROOKS: It’s everything. There are a whole bunch of things that you have to make progress on in parallel. You have to make progress on the mechanics, on the materials that form the skin, on the sensors embedded throughout the hand, and on the algorithms to control it, and all those things have to happen at once. You can’t race ahead with one pathway without the others alongside it.
Let me give you an example to drive this home. You've probably seen those plastic grabber toys that have a handle at one end that you squeeze to close a little hand at the other end. You can use them to grab hard-to-reach stuff, or to reach a light bulb that you can't quite get to on your own.
That really primitive hand can do fantastic manipulation, beyond what any robot can currently do, even though it's an amazingly primitive piece of plastic junk. That's the clincher: you are doing the manipulation. Often, you'll see videos of a new robot hand that a researcher has designed, and it's a person holding the robot hand and moving it around to do a task. They could do the same task with this little plastic grabber toy, because it's the human doing it. If it were that simple, we could attach this grabber toy to the end of a robot arm and have it perform the task. A human can do it with this toy at the end of their arm, so why can't a robot? There's something dramatic missing.
MARTIN FORD: I have seen reports that deep learning and reinforcement learning is being used to have robots learn to do things by practicing or even just by watching YouTube videos. What’s your view on this?
RODNEY BROOKS: Remember they’re lab demos. DeepMind has a group using our robots and they’ve recently published some interesting force feedback work with robots attaching clips to things, but each of these is painstakingly worked on by a team of really smart researchers for months. It’s nowhere near the same as a human. If you take any person and show them something to do dexterously, they can do it immediately. We are nowhere close to anything like that from a robot’s perspective.
I recently built some IKEA furniture, and I've heard people say this would be a great robot test. Give a robot an IKEA kit, give it the instructions that come with it, and have it assemble the furniture. I must have done 200 different dexterous sorts of tasks while building that furniture. Let's say we took my robots, which we sell in the thousands, which are state of the art, and which have more sensors in them than any other robot sold today, and we tried to replicate that. If we worked for a few months in a very restricted environment, we might get a coarse demonstration of one of those 200 tasks that I just knew and did. Again, it's imagination running wild to think a robot could soon do all of those tasks; the reality is very different.
MARTIN FORD: What is the reality? Thinking 5 to 10 years ahead, what are we going to see in the field of robotics and artificial intelligence? What kinds of breakthroughs should we realistically expect?
RODNEY BROOKS: You can never expect breakthroughs. I expect 10 years from now the hot thing will not be deep learning, there’ll be a new hot thing driving progress.
Deep learning has been a wonderful technology for us. It is what enables the speech systems for Amazon Echo and Google Home, and that’s a fantastic step forward. I know deep learning is going to enable other steps forward too, but something will come along to replace it.
MARTIN FORD: When you say deep learning, do you mean by that neural networks using backpropagation?
RODNEY BROOKS: Yes, but with lots of layers.
MARTIN FORD: Maybe then the next thing will still be neural networks but with a different algorithm or Bayesian networks?
RODNEY BROOKS: It might be, or it might be something very different, that’s what we don’t know. I guarantee, though, that within 10 years there’ll be a new hot topic that people will be exploiting for applications, and it will make certain other technologies suddenly pop. I don’t know what they will be, but in a 10-year time frame we’re certainly going to see that happen.
It’s impossible to predict what’s going to work and why, but you can in a predictable way say something about market pull, and market pull is going to come from a few different megatrends that are currently taking place.
For example, the ratio of elderly retired people to working-age people is changing dramatically. Depending on whose numbers you look at, the ratio is changing from something like nine working-age people to every one retired person (9:1) to two working-age people to every retired person (2:1). There are a lot more elderly people in the world. It depends on the country and other factors, but that means there will be a market pull toward helping the elderly get things done as they get frailer. We’re already seeing this in Japan at robotics trade shows, where there are a lot of lab demos of robots helping the elderly to do simple tasks, such as getting into and out of bed, getting into and out of the bathroom, just simple daily things. Those things currently require one-to-one human help, but as that ratio of working-age to elderly changes, there isn’t going to be the labor force to fulfil that need. That’s going to pull robotics into helping the elderly.
MARTIN FORD: I agree that that elder care segment is a massive opportunity for the robotics and AI industry, but it does seem very challenging in terms of the dexterity that’s required to really assist an elderly person in taking care of themselves.
RODNEY BROOKS: It is not going to be a simple substitution of a robotic system for a person, but there is going to be a demand so there will be motivated people working on trying to come up with solutions because it is going to be an incredible market.
I think we will also see a pull for construction work because we are urbanizing the world at an incredible rate. Many of the techniques that we use in construction were invented by the Romans, there’s room for a little technological update in some of those.
MARTIN FORD: Do you think that would be construction robots or would it be construction scale 3D printing?
RODNEY BROOKS: 3D printing may come in for aspects of it. It’s not going to be printing the whole building, but certainly we might see printed pre-formed components. We’ll be able to manufacture a lot more parts off-site, which will in turn lead to innovation in delivering, lifting, and moving those parts. There’s room for a lot of innovation there.
Agriculture is another industry that will potentially see robotics and AI innovation, particularly with climate change disrupting our food chain. People are already talking about urban farming, bringing farming out of a field and into a factory. This is something where machine learning can be very helpful. We have the computation power now to close a loop around every seed we need to grow and to provide it with the exact nutrients and conditions that it needs without having to worry about the actual weather outside. I think climate change is going to drive automation of farming in a different way than it has so far.
MARTIN FORD: What about real household consumer robots? The example people always give is the robot that would bring you a beer. It sounds like that might still be some way off.
RODNEY BROOKS: Colin Angle, the CEO of iRobot, who co-founded it with me in 1990, has been talking about that for 28 years now. I think that I’m still going to be going to the fridge myself for a while.
MARTIN FORD: Do you think that there will ever be a genuinely ubiquitous consumer robot, one that saturates the consumer market by doing something that people find absolutely indispensable?
RODNEY BROOKS: Is Roomba indispensable? No, but it does something of value at a low enough cost that people are willing to pay for it. It’s not quite indispensable, it’s a convenience level.
MARTIN FORD: When do we get there for a robot that can do more than move around and vacuum floors? A robot that has sufficient dexterity to perform some basic tasks?
RODNEY BROOKS: I wish I knew! I think no one knows. Everyone’s saying robots are coming to take over the world, yet we can’t even answer the question of when one will bring us a beer.
MARTIN FORD: I saw an article recently with the CEO of Boeing, Dennis Muilenburg, saying that they’re going to have autonomous drone taxis flying people around within the next decade, what do you think of his projection?
RODNEY BROOKS: I will compare that to saying that we’re going to have flying cars. Flying cars that you can drive around in and then just take off have been a dream for a long time, but I don’t think it’s going to happen.
I think the former CEO of Uber, Travis Kalanick, claimed that they were going to have flying Ubers deployed autonomously in 2020. It’s not going to happen. That’s not to say that I don’t think we’ll have some form of autonomous personal transport. We already have helicopters and other machines that can reliably go from place to place without someone flying them. I think it’s more about the economics of it that will determine when that happens, but I don’t have an answer to when that will be.
MARTIN FORD: What about artificial general intelligence? Do you think it is achievable and, if so, in what timeframe do you think we have a 50% chance of achieving it?
RODNEY BROOKS: Yes, I think it is achievable. My guess on that is the year 2200, but it’s just a guess.
MARTIN FORD: Tell me about the path to get there. What are the hurdles we’ll face?
RODNEY BROOKS: We already talked about the hurdle of dexterity. The ability to navigate and manipulate the world is important in understanding the world, but there’s a much wider context to the world than just the physical. For example, there isn’t a single robot or AI system out there that knows that today is a different day to yesterday, apart from a nominal digit on a calendar. There is no experiential memory, no understanding of being in the world from day to day, and no understanding of long-term goals and making incremental progress toward them. Any AI program in the world today is an idiot savant living in a sea of now. It’s given something, and it responds.
The AlphaGo program or chess-playing programs don’t know what a game is, they don’t know about playing a game, they don’t know that humans exist, they don’t know any of that. Surely, though, if an AGI is equivalent to a human, it’s got to have that full awareness.
As far back as 50 years ago people worked on research projects around those things. There was a whole community that I was a part of in the 1980s through the 1990s working on the simulation of adaptive behavior. We haven’t made much progress since then, and we can’t point to how it’s going to be done. No one’s currently working on it, and the people that claim to be advancing AGI are actually re-doing the same things that John McCarthy talked about in the 1960s, and they are making about as much progress.
It’s a hard problem. It doesn’t mean you don’t make progress on the way in a lot of technologies, but some things just take hundreds of years to achieve. We think that we’re the golden people at the critical time. Lots of people have thought that at lots of times, it doesn’t make it true for us right now and I see no evidence of it.
MARTIN FORD: There are concerns that we will fall behind China in the race to advanced artificial intelligence. They have a larger population, and therefore more data, and they don’t have as strict privacy concerns to hold back what they can do in AI. Do you think that we are entering a new AI arms race?
RODNEY BROOKS: You’re correct, there is going to be a race. There’s been a race between companies, and there will be a race between countries.
MARTIN FORD: Do you view it as a big danger for the West if a country like China gets a substantial lead in AI?
RODNEY BROOKS: I don't think it's as simple as that. We will see uneven deployment of AI technologies. I think we are seeing this already in China in their deployment of facial recognition in ways that we would not like to see here in the US. As for new AI chips, this is not something that a country like the US can afford to even begin to fall behind on. However, not falling behind would require leadership that we do not currently have.
We’ve seen policies saying that we need more coal miners, while science budgets are cut, including places like the National Institute of Standards and Technology. It’s craziness, it’s delusional, it’s backward thinking, and it’s destructive.
MARTIN FORD: Let’s talk about some of the risks or potential dangers associated with AI and robotics. Let’s start with the economic question. Many people believe we are on the cusp of a big disruption on the scale of a new Industrial Revolution. Do you buy into that? Is there going to be a big impact on the job market and the economy?
RODNEY BROOKS: Yes, but not in the way people talk about. I don’t think it’s AI per se. I think it’s the digitalization of the world and the creation of new digital pathways in the world. The example I like to use is toll roads. In the US, we’ve largely gotten rid of human toll takers on toll roads and toll bridges. It’s not particularly done with AI but it’s done because there’s a whole bunch of digital pathways that have been built up in our society over the last 30 years.
One of the things that allowed us to get rid of toll takers is the tag that you can put on your windscreen that gives a digital signature to your car. Another advance that made it practical to get rid of all the human toll lanes is computer vision, where there is an AI system with some deep learning that can take a snapshot of the license plate and read it reliably. It’s not just at the toll gate, though. There are other digital chains that have happened to get us to this point. You are able to go to a website and register the tag in your car and the particular serial code that belongs to you, and also provide your license number so that there’s a backup.
There’s also digital banking that allows a third party to regularly bill your credit card without them ever touching your physical credit card. In the old days you had to have the physical credit card, now it’s become a digital chain. There’s also the side effect for the companies that run the toll booth, that they no longer need trucks to collect the money and take it to the bank because they have this digital supply chain.
There's a whole set of digital pieces that came together to automate that service and remove the human toll taker. AI was a small but necessary piece in there, but it wasn't the case that a person was replaced overnight by an AI system. It's those incremental digital pathways that enable the change in labor markets; it's not a simple one-for-one replacement.
MARTIN FORD: Do you think those digital chains will disrupt a lot of those grassroots service jobs?
RODNEY BROOKS: Digital chains can do a lot of things but they can’t do everything. What they leave behind are things that we typically don’t value very much but are necessary to keep our society running, like helping the elderly in the restroom, or getting them in and out of showers. It’s not just those kinds of tasks—look at teaching. In the US, we’ve failed to give schoolteachers the recognition or the wages they deserve, and I don’t know how we’re going to change our society to value this important work, and make it economically worthwhile. As some jobs are lost to automation, how do we recognize and celebrate those other jobs that are not?
MARTIN FORD: It sounds like you’re not suggesting that mass unemployment will happen, but that jobs will change. I think one thing that will happen is that a lot of desirable jobs are going to disappear. Think of the white-collar job where you’re sitting in front of a computer and you’re doing something predictable and routine, cranking out the same report again and again. It’s a very desirable high-paying job that people go to college to get and that job is going to be threatened, but the maid cleaning the hotel room is going to be safe.
RODNEY BROOKS: I don’t deny that, but what I do deny is when people say, oh that’s AI and robots doing that. As I say, I think this is more down to digitalization.
MARTIN FORD: I agree, but it’s also true that AI is going to be deployed on that platform, so things may move even faster.
RODNEY BROOKS: Yes, it certainly makes it easier to deploy AI given that platform. The other worry, of course, is that the platform is built on totally insecure components that can get hacked by anyone.
MARTIN FORD: Let’s move on to that security question. What are the things that we really should worry about, aside from the economic disruption? What are the real risks, such as security, that you think are legitimate and that we should be concerned with?
RODNEY BROOKS: Security is the big one. I worry about the security of these digital chains and the privacy that we have all given up willingly in return for a certain ease of use. We’ve already seen the weaponization of social platforms. Rather than worry about a self-aware AI doing something willful or bad, it’s much more likely that we’re going to see bad stuff happen from human actors figuring out how to exploit the weaknesses in these digital chains, whether they be nation states, criminal enterprises, or even lone hackers in their bedrooms.
MARTIN FORD: What about the literal weaponization of robots and drones? Stuart Russell, one of the interviewees in this book, made a quite terrifying film called Slaughterbots about those concerns.
RODNEY BROOKS: I think that kind of thing is very possible today because it doesn’t rely on AI. Slaughterbots was a knee-jerk reaction saying that robots and war are a bad combination. There’s another reaction that I have. It always seemed to me that a robot could afford to shoot second. A 19-year-old kid just out of high school in a foreign country in the dark of night with guns going off around them can’t afford to shoot second.
There’s an argument that keeping AI out of the military will make the problem go away. I think you need to instead think about what it is you don’t want to happen and legislate about that rather than the particular technology that is used. A lot of these things could be built without AI.
As an example, when we go to the Moon next, it will rely heavily on AI and machine learning, but in the ‘60s we got there and back without either of those. It’s the action itself that we need to think about, not which particular technology is being used to perform that action. It’s naive to legislate against a technology and it doesn’t take into account the good things that you can do with it, like have the system shoot second, not shoot first.
MARTIN FORD: What about the AGI control problem and Elon Musk’s comments about summoning the demon? Is that something that we should be having conversations about at this point?
RODNEY BROOKS: In 1783, when the people of Paris saw hot-air balloons for the first time, they were worried about those people's souls getting sucked out from up high. That's the same level of understanding that's going on here with AGI. We don't have a clue what it would look like.
I wrote an essay on The Seven Deadly Sins of Predicting the Future of AI (https://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/), and they are all wrapped up in this stuff. It's not going to be a case of having exactly the same world as today, but with an AI superintelligence in the middle of it. It's going to come very gradually over time. We have no clue at all what the world or that AI system will be like. Predicting an AI future is just a power game for isolated academics who live in a bubble away from the real world. That's not to say that these technologies aren't coming, but we won't know what they will look like before they arrive.
MARTIN FORD: When these technology breakthroughs do arrive, do you think there’s a place for regulation of them?
RODNEY BROOKS: As I said earlier, the place where regulation is required is on what these systems are and are not allowed to do, not on the technologies that underlie them. Should we stop research today on optical computers because they let you perform matrix multiplication much faster, so you could run deep learning at greater scale much more quickly? No, that's crazy. Are self-driving delivery trucks allowed to double park in congested areas of San Francisco? That seems to be a good thing to regulate, not what the technology is.
MARTIN FORD: Taking all of this into account, I assume that you’re an optimist overall? You continue to work on this so you must believe that the benefits of all this are going to outweigh any risks.
RODNEY BROOKS: Yes, absolutely. We have overpopulated the world, so we have to go this way to survive. I’m very worried about the standard of living dropping because there’s not enough labor as I get older. I’m worried about security and privacy, to name two more. All of these are real and present dangers, and we can see the contours of what they look like.
The Hollywood idea of AGIs taking over is way in the future, and we have no clue even how to think about that. We should be worried about the real dangers and the real risks that we are facing right now.
RODNEY BROOKS is a robotics entrepreneur who holds a PhD in Computer Science from Stanford University. He’s currently the Chairman and CTO of Rethink Robotics. For a decade between 1997 and 2007, Rodney was the Director of the MIT Artificial Intelligence Laboratory and later the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).
He’s a fellow to several organizations, including The Association for the Advancement of Artificial Intelligence (AAAI), where he is a founding fellow. So far in his career he’s won a number of awards for his work within the field, including the Computers and Thought Award, the IEEE Inaba Technical Award for Innovation Leading to Production, the Robotics Industry Association’s Engelberger Robotics Award for Leadership and the IEEE Robotics and Automation Award.
Rodney even starred as himself in the 1997 Errol Morris movie Fast, Cheap & Out of Control, which was named after one of his papers and currently holds a 91% Rotten Tomatoes score.