Chapter 12. DANIELA RUS

DANIELA RUS

I like to think of a world where more mundane routine tasks are taken off your plate. Maybe garbage cans that take themselves out and smart infrastructure to ensure that they disappear, or robots that will fold your laundry.

DIRECTOR OF MIT CSAIL

Daniela Rus is the Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, one of the world’s largest research organizations focused on AI and robotics. Daniela is a fellow of ACM, AAAI, and IEEE, and a member of the National Academy of Engineering and the American Academy of Arts and Sciences. Daniela leads research in robotics, mobile computing, and data science.

MARTIN FORD: Let’s start by talking about your background and looking at how you became interested in AI and robotics.

DANIELA RUS: I’ve always been interested in science and science fiction, and when I was a kid I read all the popular science fiction books at the time. I grew up in Romania where we didn’t have the range of media that you had in the US, but there was one show that I really enjoyed, and that’s the original Lost in Space.

MARTIN FORD: I remember that. You’re not the first person I’ve spoken to who has drawn their career inspiration from science fiction.

DANIELA RUS: I never missed an episode of Lost in Space, and I loved the cool geeky kid Will and the robot. I didn’t imagine that I would do anything remotely associated with that at that time. I was lucky enough to be quite good at math and science, and by the time I got to college age I knew that I wanted to do something with math, but not pure math because it seemed too abstract. I ended up majoring in computer science and mathematics, with a minor in astronomy—the astronomy continuing the connection to my fantasies of what could be in other worlds.

Toward the end of my undergraduate degree I went to a talk given by John Hopcroft, the Turing Award-winning theoretical computer scientist, and in that talk, John said that classical computer science was finished. What he meant by that was that many of the graph-theoretic algorithms that were posed by the founders of the field of computing had solutions and it was time for the grand applications, which in his opinion were robots.

I found that an exciting idea, so I worked on my PhD with John Hopcroft because I wanted to make contributions to the field of robotics. However, at that time the field of robotics was not at all developed. For example, the only robot that was available to us was a big PUMA arm (Programmable Universal Manipulation Arm), an industrial manipulator that had little in common with my childhood fantasies of what robots should be. It got me thinking a lot about what I could contribute, and I ended up studying dexterous manipulation, but very much from a theoretical, computational point of view. I remember finishing my thesis and trying to implement my algorithms to go beyond simulation and create real systems. Unfortunately, the systems that were available at the time were the Utah/MIT hand and the Salisbury hand, and neither one of those hands was able to exert the kind of forces and torques that my algorithms required.

MARTIN FORD: It sounds to me like there was a big gap between where the physical machines were and where the algorithms were.

DANIELA RUS: Exactly. That was when I really realized that a machine is actually a close connection between body and brain: for any task you want that machine to execute, you need a body capable of those tasks, and then you need a brain to control the body to deliver what it is meant to do.

As a result, I became very interested in the interaction between body and brain, and challenging the notion of what a robot is. So industrial manipulators are excellent examples of robots, but they are not all that we could do with robots; there are so many other ways to envision robots.

Today in my lab, we have all kinds of very non-traditional robots. There are modular cellular robots, soft robots, robots built out of food, and even robots built out of paper. We’re looking at new types of materials, new types of shapes, new types of architectures and different ways of imagining what the machine body ought to be. We also do a lot of work on the mathematical foundations of how those bodies operate, and I’m very interested in understanding and advancing the engineering of both the science of autonomy and of intelligence.

I became very interested in the connection between the hardware of the device and the algorithms that control the hardware. When I think about algorithms, I think that while it’s very important to consider the solutions, it’s also important to consider the mathematical foundations for those solutions because that’s in some sense where we create the nuggets of knowledge that other people can build on.

MARTIN FORD: You’re the director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), which is one of the most important research endeavors in not just robotics, but in AI generally. Could you explain what exactly CSAIL is?

DANIELA RUS: Our objective at CSAIL is to invent the future of computing to make the world better through computing, and to educate some of the best students in the world in research.

CSAIL is an extraordinary organization. When I was a student, I looked up to it as the Mount Olympus of technology and never imagined that I’d become a part of it. I like to think of CSAIL as the prophet for the future of computing, and the place where people envision how computing can be used to make the world better.

CSAIL actually has two parts, Computer Science (CS) and AI, both having a really deep history. The AI side of our organization goes back to 1956 when the field was invented and founded. In 1956, Marvin Minsky gathered his friends in New Hampshire where they spent a month, no doubt hiking in the woods, drinking wine and having great conversations, uninterrupted by social media, email, and smartphones.

When they emerged from the woods, they told the world that they had coined a new field of study: artificial intelligence. AI refers to the science and engineering of creating machines that exhibit human-level skills in how they perceive the world; in how they move in the world; in how they play games; in how they reason; in how they communicate; and even, in how they learn. Our researchers at CSAIL have been thinking about these questions and making groundbreaking contributions ever since, and it’s an extraordinary privilege to be part of this community.

The computer science side goes back to 1963, when Bob Fano, a computer scientist and MIT professor, had the crazy idea that two people might use the same computer at the same time. You have to understand this was a big dream back then when computers were the size of rooms and you had to book time on them. Originally, it was set up as Project MAC, which stood for Machine-Aided Cognition, but there was a joke that it was actually named MAC after Minsky and Corby (Fernando “Corby” Corbató), who were the two technical leads for the CS and the AI side. Ever since the founding of the laboratory in 1963, our researchers have put a lot of effort into imagining what computing looks like and what it can accomplish.

Many of the things that you take for granted today have their roots in the research developed at CSAIL, such as the password, RSA encryption, the computer time-sharing systems that inspired Unix, the optical mouse, object-oriented programming, speech systems, mobile robots with computer vision, the free software movement, the list goes on. More recently CSAIL has been a leader in defining the cloud and cloud computing, and in democratizing education through Massive Open Online Courses (MOOCs) and in thinking about security, privacy, and many other aspects of computing.

MARTIN FORD: How big is CSAIL today?

DANIELA RUS: CSAIL is the largest research laboratory at MIT, with over 1,000 members, and it cuts across 5 schools and 11 departments. CSAIL today has 115 faculty members, and each of these faculty members has a big dream about computing, which is such an important part of our ethos here. Some of our faculty members want to make computing better through algorithms, systems or networks, while others want to make life better for humanity with computing. For example, Shafi Goldwasser wants to make sure that we can have private conversations over the internet; and Tim Berners-Lee wants to create a bill of rights, a Magna Carta of the World Wide Web. We have researchers who want to make sure that if we get sick, the treatments that are available to us are personalized and customized to be as effective as they can be. We have researchers who want to advance what machines can do: Leslie Kaelbling wants to make Lieutenant-Commander Data, and Russ Tedrake wants to make robots that can fly. I want to make shape-shifting robots because I want to see a world with pervasive robots that support us in our cognitive and physical tasks.

This aspiration is really inspired by looking back at history and observing that only 20 years ago, computation was a task reserved for the expert few because computers were large, expensive, and difficult to handle, and it took knowledge to know what to do with them. All of that changed a decade ago when smartphones, cloud computing, and social media came along.

Today, so many people compute. You don’t have to be an expert in order to use computing, and you use computing so much that you don’t even know how much you depend on it. Try to imagine a day in your life without the world wide web and everything it enables. No social media; no communication through email; no GPS; no diagnosis in hospitals; no digital media; no digital music; no online shopping. It’s just incredible to see how computation has permeated the fabric of life. To me, this raises a very exciting and important question, which is: In this world that has been so changed by computation, what might it look like with robots and cognitive assistants helping us with physical and cognitive tasks?

MARTIN FORD: As a university-based organization, what’s the balance between what you would classify as pure research and things that are more commercial and that end up actually developing into products? Do you spin off startups or work with commercial companies?

DANIELA RUS: We don’t house companies; instead, we focus on training our students and giving them various options for what they could do when they graduate, whether that be joining the academic life, going into high-tech industry, or becoming entrepreneurs. We fully support all of those paths. For example, say a student creates a new type of system after several years of research, and all of a sudden there is an immediate application for the system. This is the kind of technological entrepreneurship that we embrace, and hundreds of companies have been spun out of CSAIL research, but the actual companies do not get housed by CSAIL.

We also don’t create products, but that’s not to say we ignore them. We’re very excited about how our work could be turned into products, but generally, our mission is really to focus on the future. We think about problems that are 5 to 10 years out, and that’s where most of our work is, but we also embrace the ideas that matter today.

MARTIN FORD: Let’s talk about the future of robotics, which sounds like something you spend a great deal of your time thinking about. What’s coming down the line in terms of future innovations?

DANIELA RUS: Our world has already been transformed by advances in artificial intelligence and robotics. Today, doctors can connect with patients, and teachers can connect with students, who are thousands of miles away. We have robots that help with packing on factory floors, we’ve got networked sensors that we deploy to monitor facilities, and we have 3D printing that creates customized goods. When we add even more extensive capabilities to our AI and robot systems, extraordinary things will be possible.

At a high level, we have to picture a world where routine tasks will be taken off our plate, because this is the sweet spot for where technology is today. These routine tasks could be physical tasks, or they could be computational or cognitive tasks.

You already see some of that in the rise of machine learning applications for various industries, but I like to think of a world where more mundane routine tasks are taken off your plate. Maybe garbage cans that take themselves out and smart infrastructure to ensure that they disappear, or robots that will fold your laundry. We will have transportation available in the same way that water or electricity are available, and you will be able to go anywhere at any time. We will have intelligent assistants who will enable us to maximize our time at work and optimize our lives to live better and more healthily, and to work more efficiently. It will be extraordinary.

MARTIN FORD: What about self-driving cars? When will I be able to call a robot taxi in Manhattan and have it take me anywhere?

DANIELA RUS: I’m going to qualify my answer and say that certain autonomous driving technologies are available right now. Today’s solutions are good for certain level 4 autonomy situations (the level just below full autonomy, as defined by the Society of Automotive Engineers). We already have robot cars that can deliver people and packages, and that operate at low speeds in low-complexity environments where you have low interaction. Manhattan is a challenging case because traffic in Manhattan is super chaotic, but we do already have robot cars that could operate in retirement communities or business campuses, or in general places where there is not too much traffic. Nevertheless, those are still real-world places where you can expect other traffic, other people, and other vehicles.

Next, we have to think about how we extend this capability to make it applicable to bigger and more complex environments where you’ll face more complex interactions at higher speeds. That technology is slowly coming, but there are still some serious challenges ahead. For instance, the sensors that we use in autonomous driving today are not very reliable in bad weather. We’ve still got a long way to go to reach level 5 autonomy, where the car is fully autonomous in all weather conditions. These systems also have to be able to handle the kind of congestion that you find in New York City, and we have to become much better at integrating robot cars with human-driven cars. This is why thinking about mixed human/machine environments is very exciting and very important. Every year we see gradual improvements in the technology, but getting to a complete solution, if I were to estimate, could take another decade.

There are, though, specific applications where we will see autonomy used commercially sooner than other applications. I believe that a retirement community could use autonomous shuttles today. I believe that long-distance driving with autonomous trucks is coming soon. It’s a little bit simpler than driving in New York, but it’s harder than what driving in a retirement community would look like because you have to drive at high speed, and there are a lot of corner cases and situations where maybe a human driver would have to step in. Let’s say it’s raining torrentially and you are on a treacherous mountain pass in the Rockies. To face that, you need to have a collaboration between a really great sensor and control system, and the human’s reasoning and control capabilities. With autonomous driving on highways, we will see patches of autonomy interleaved with human assistance, or vice versa, and that will be sooner than 10 years, for sure, maybe 5.

MARTIN FORD: So in the next decade a lot of these problems will be solved, but not all of them. Maybe the service would be confined to specified routes or areas that are really well mapped?

DANIELA RUS: Well, not necessarily. Progress is happening. In our group we just released a paper that demonstrates one of the first systems capable of driving on country roads. So, on the one hand, the challenges are daunting, but on the other hand, 10 years is a long time. 20 years ago, Mark Weiser, who was the chief scientist at Xerox PARC, talked about pervasive computing, and he was seen as a dreamer. Today, we have solutions for all of the situations in which he envisioned computing being used and supporting us.

I want to be a technology optimist. I want to say that I see technology as something that has the huge potential to unite people rather than divide people, and to empower people rather than estrange people. In order to get there, though, we have to advance science and engineering to make technology more capable and more deployable.

We also have to embrace programs that enable broad education and allow people to become familiar with technology to the point where they can take advantage of it and where anyone could dream about how their lives could be better by the use of technology. That’s something that’s not possible with AI and robotics today because the solutions require expertise that most people don’t have. We need to revisit how we educate people to ensure that everyone has the tools and the skills to take advantage of technology. The other thing that we can do is to continue to develop the technology side so that machines begin to adapt to people, rather than the other way around.

MARTIN FORD: In terms of ubiquitous personal robots that can actually do useful things, it seems to me that the limiting factor is really dexterity. The cliché is being able to ask a robot to go to the refrigerator and get you a beer. That’s a real challenge in terms of the technology that we have today.

DANIELA RUS: Yes, I think you’re right. We do currently see significantly greater successes in navigation than in manipulation, and these are two major types of capabilities for robots. The advances in navigation were enabled by hardware advances. When the LIDAR sensor—the laser scanner—was introduced, all of a sudden, the algorithms that didn’t work with sonar started working, and that was transformational. We now had a reliable sensor that control algorithms could use in a robust way. As a result of that, mapping, planning, and localization took off, and that fueled the great enthusiasm in autonomous driving.

Coming back to dexterity, on the hardware side, most of our robot hands still look like they did 50 years ago. Most of our robot hands are still very rigid, industrial manipulators with a two-pronged pincer, and we need something different. I personally believe that we are getting closer because we are beginning to look at reimagining what a robot is. In particular, we have been working on soft robots and soft robot hands. We’ve shown that with soft robot hands—the kind that we can design and build in my lab—we are able to pick up objects and handle objects much more reliably and much more intuitively than what is possible with traditional two-finger grasps.

It works as follows: if you have a traditional robot hand where the fingers are all made out of metal, then they are capable of what is technically called “hard finger contact”—you put your finger on the object you’re trying to grasp at one point, and that’s the point at which you can exert forces and torques. If you have that kind of a setup, then you really need to know the precise geometry of the object that you’re trying to pick up. You then need to calculate very precisely where to put your fingers on the surface of the object so that all their forces and torques balance out, and they can resist external forces and torques. In the technical literature, this is called the “force closure and form closure” problem. It requires very heavy computation, very precise execution, and very accurate knowledge of the object that you’re trying to grasp.
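To make that computation concrete, here is one standard way the force-closure condition is written in the grasping literature; it is a sketch of the general idea, not the specific formulation used in Daniela's work. The contact forces, constrained to their friction cones, must be able to generate a net wrench that cancels any external wrench applied to the object (form closure is the stricter, frictionless version of the same requirement).

```latex
% Force closure, sketched for k contacts at points p_i applying forces f_i.
% F_i is the friction cone at contact i; w_ext is an arbitrary external wrench.
% The grasp is force-closed if every external wrench can be resisted:
\forall\, w_{\mathrm{ext}} \in \mathbb{R}^{6},\ \exists\, f_1 \in F_1, \dots, f_k \in F_k :
\qquad \sum_{i=1}^{k} \begin{pmatrix} f_i \\ p_i \times f_i \end{pmatrix} + w_{\mathrm{ext}} = 0 .
```

Checking this condition requires the object geometry and precise contact locations, which is exactly the burden that compliant, soft fingers relax.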

That’s not something that humans do when they grasp an object. As an experiment, try to grasp a cup with just your fingernails—it is such a difficult task. As a human, you have perfect knowledge of the object and where it is located, but you will still have a difficult time. With soft fingers, you actually don’t need to know the exact geometry of the object you’re trying to grasp, because the fingers will conform to whatever the object’s surface is. Contact along a wider surface area means that you don’t have to think precisely about where to place the fingers in order to reliably envelop and lift the object.

That translates into much more capable robots and much simpler algorithms. As a result, I’m very bullish about the future progress in grasping and manipulation. I think that soft hands, and in general, soft robots are going to be a very critical aspect of advancement in dexterity, just like the laser scanner was a critical aspect of advancing the navigation capabilities of robots.

That goes back to my observation that machines are made up of bodies and brains. If you change the body of the machine and you make it more capable, then you will be able to use different types of algorithms to control that robot. I’m very excited about soft robotics, and about its potential to impact an area of robotics that has been stagnant for many years. While progress has been made in grasping and manipulation, we still do not have the kinds of capabilities that compare with those of natural systems, such as people or animals.

MARTIN FORD: Let’s talk about progress in AI toward human-level artificial intelligence or AGI. What does that path look like, and how close are we?

DANIELA RUS: We have been working on AI problems for over 60 years, and if the founders of the field were able to see what we tout as great advances today, they would be very disappointed because it appears we have not made much progress. I don’t think that AGI is in the near future for us at all.

I think that there is a great misunderstanding in the popular press about what artificial intelligence is and what it isn’t. I think that today, most people who say “AI,” actually mean machine learning, and more than that, they mean deep learning within machine learning.

I think that most people who talk about AI today tend to anthropomorphize what these terms mean. Someone who is not an expert says the word “intelligence” and only has one association with intelligence, and that is the intelligence of people.

When people say “machine learning,” they imagine that the machine learned just like a human has learned. Yet these terms mean such different things in the technical context. If you think about what machine learning can do today, it’s absolutely extraordinary. Machine learning is a process that starts with millions of usually manually labeled data points, and the system aims to learn a pattern that is prevalent in the data, or to make a prediction based on that data.

These systems can do this much better than humans because these systems can assimilate and correlate many more data points than humans are able to. However, when a system learns, for example, that there is a coffee mug in a photograph, what it is actually doing is saying that the pixels that form this blob that represents the coffee mug in the current photo are the same as other blobs that humans have labeled in images as coffee mugs. The system has no real idea what that coffee mug represents.

The system has no idea what to do with it; it doesn’t know whether you drink from it, eat it, or throw it. If I told you that there is a coffee mug on my desk, you don’t need to see that coffee mug in order to know what it is, because you have the kind of reasoning and experience that machines today simply do not have.
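As a rough illustration of that point, here is a minimal sketch of the kind of supervised pattern matching being described. The data, labels, and model below are placeholders, not anyone's actual system; the point is that the classifier only relates pixel patterns to human-supplied labels and never acquires any notion of what a mug is for.

```python
# Minimal sketch of supervised "coffee mug" recognition: pattern matching
# over manually labeled pixels, with no semantics attached to the label.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in data: flattened 32x32 grayscale images with human-assigned labels
# (1 = "coffee mug", 0 = "not a mug"). Real systems use millions of examples.
X_train = rng.random((200, 32 * 32))
y_train = rng.integers(0, 2, size=200)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# For a new photo, the model only reports how similar its pixel statistics are
# to previously labeled "mug" blobs -- a probability, not understanding.
new_photo = rng.random((1, 32 * 32))
print(model.predict_proba(new_photo))
```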

To me, the gap between this and human-level intelligence is extraordinary, and it will take us a long time to get there. We have no idea of the processes that define our own intelligence, and no idea how our brain works. We have no idea how children learn. We know a little bit about the brain, but that amount is insignificant compared to how much there is to know. The understanding of intelligence is one of the most profound questions in science today. We are seeing progress at the intersection of neuroscience, cognitive science, and computer science.

MARTIN FORD: Is it possible that there might be an extraordinary breakthrough that really moves things along?

DANIELA RUS: That’s possible. In our lab, we’re very interested in figuring out whether we can make robots that will adapt to people. We started looking at whether we can detect and classify brain activity, which is a challenging problem.

We are mostly able to classify whether a person detects that something is wrong because of the “you are wrong” signal—called the “error-related potential.” This is a signal that everyone makes, independent of their native tongue and independent of their circumstances. With the external sensors we have today, which are called EEG caps, we are fairly reliably able to detect the “you are wrong” signal. That’s interesting because if we can do that, then we can imagine applications where workers could work side by side with robots, and they could observe the robots from a distance and correct their mistakes when a mistake is detected. In fact, we have a project that addresses this question.
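As a rough illustration of what detecting and classifying this signal involves, here is a minimal sketch of an EEG classification step. Everything specific in it, including the window length, the features, and the classifier, is an illustrative assumption rather than the lab's actual pipeline.

```python
# Minimal sketch: classify EEG epochs as "error observed" vs. "no error".
# Placeholder data and a simple classifier stand in for a real EEG pipeline.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

n_epochs, n_channels, n_samples = 300, 48, 128   # 48 electrodes, short time windows
epochs = rng.standard_normal((n_epochs, n_channels, n_samples))  # placeholder EEG
labels = rng.integers(0, 2, size=n_epochs)        # 1 = error-related potential present

# Very simple features: mean amplitude per channel over the window.
# A real system would band-pass filter and time-lock to the robot's action first.
X = epochs.mean(axis=2)

clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X[:200], labels[:200])

# A robot teammate could poll this prediction and pause or correct itself
# whenever an "error" signal is detected in the observer's EEG.
print(clf.predict(X[200:205]))
```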

What’s interesting, though, is that these EEG caps are made up of 48 electrodes placed on your head—it’s a very sparse, mechanical setup that reminds you of when computers were made up of levers. On the other hand, we have the ability to do invasive procedures to tap into neurons at the level of the neural cell, so you could actually stick probes into the human brain, and you could detect neural-level activity very precisely. There’s a big gap between what we can do externally and what we can do invasively, and I wonder whether at some point we will have some kind of Moore’s law improvement on sensing brain activity and observing brainwave activity at a much higher resolution.

MARTIN FORD: What about the risks and the downsides of all of this technology? One aspect is the potential impact on jobs. Are we looking at a big disruption that could eliminate a lot of work, and is that something we have to think about adapting to?

DANIELA RUS: Absolutely! Jobs are changing: jobs are going away, and jobs are being created. The McKinsey Global Institute published a study that gives some really important views. They looked at a number of professions and observed that there are certain tasks that can be automated with the level of machine capability today, and others that cannot.

If you do an analysis of how people spend time in various professions, there are certain categories of work. People spend time applying expertise, interacting with others, managing, doing data processing, doing data entry, doing predictable physical work, and doing unpredictable physical work. Ultimately, there are tasks that can be automated and tasks that can’t. The predictable physical work and the data tasks are routine tasks that can be automated with today’s technologies, but the other tasks can’t.

I’m actually very inspired by this because what I see is that technology can relieve us of routine work in order to give us time to focus on the more interesting parts of our work. Let’s go through an example in healthcare. We have an autonomous wheelchair, and we have been talking with physical therapists about using this wheelchair. They are very excited about it because, at the moment, the physical therapist works with patients in the hospital in the following way:

For every new patient, the physical therapist has to go to the patient’s bed, put the patient in a wheelchair, and push the patient to the gym, where they’ll work together. At the end of the hour, the physical therapist has to take the patient back to the patient’s hospital bed. A significant amount of time is spent moving the patient around rather than on patient care.

Now imagine if the physical therapist didn’t have to do this. Imagine if the physical therapist could stay in the gym, and the patient would show up delivered by an autonomous wheelchair. Then both the patient and the physical therapist would have a much better experience. The patient would get more help from the physical therapist, and the physical therapist would focus on applying their expertise. I’m very excited about the possibility of enhancing the quality of time that we spend in our jobs and increasing our efficiency in our jobs.

A second observation is that, in general, it is much easier for us to analyze what might go away than to imagine what might come back. For instance, over the course of the 20th century, agricultural employment in the United States dropped from 40% to 2%. Nobody at the start of the 20th century guessed that this would happen. Just consider, then, that only 10 years ago, when the computer industry was booming, nobody predicted the level of employment in social media, in app stores, in cloud computing, and even in other things like college counseling. There are so many jobs that employ a lot of people today that did not exist 10 years ago, and that people did not anticipate would exist. I think that it’s exciting to think about the possibilities for the future and the new kinds of jobs that will be created as a result of technology.

MARTIN FORD: So, you think the jobs destroyed by technology and the new jobs created will balance out?

DANIELA RUS: Well, I do also have concerns. One concern is in the quality of jobs. Sometimes, when you introduce technology, the technology levels the playing field. For instance, it used to be that taxi drivers had to have a lot of expertise—they had to have great spatial reasoning, and they had to memorize large maps. With the advent of GPS, that level of skill is no longer needed. What that does is open the field for many more people to join the driving market, and that tends to lower the wages.

Another concern is that I wonder if people are going to be trained well enough for the good jobs that will be created as a result of technology. I think that there are only two ways to approach this challenge. In the short term, we have to figure out how to help people retrain themselves, how to help people gain the skills that are needed in order to fulfill some of the jobs that exist. I can’t tell you how many times a day I hear, “We want your AI students. Can you send us any AI students?” Everyone wants experts in artificial intelligence and machine learning, so there are a lot of jobs, and there are also a lot of people who are looking for jobs. However, the skills that are in demand are not necessarily the skills that people have, so we need retraining programs to help people acquire those skills.

I’m a big believer that anybody can learn technology. My favorite example is a company called BitSource. BitSource was launched a couple of years back in Kentucky, and it has had huge success retraining coal miners as data miners. The company has trained a lot of the miners who lost their jobs and who are now in a position to get much better, much safer, and much more enjoyable jobs. It’s an example that shows that, with the right programs and the right support, we can help people through this transition period.

MARTIN FORD: Is that just in terms of retraining workers, or do we need to fundamentally change our entire educational system?

DANIELA RUS: In the 20th century we had reading, writing, and arithmetic that defined literacy. In the 21st century, we should expand what literacy means, and we should add computational thinking. If we teach in schools how to make things and how to breathe life into them by programming, we will empower our students. We can get them to the point where they can imagine anything and make it happen, and they will have the tools to make it happen. More importantly, by the time they finish high school, these students will have the technical skills that will be required in the future, and they will be exposed to a different way of learning that will enable them to help themselves for the future.

The final thing I want to say about the future of work is that our attitude toward learning will also have to change. Today, we operate with a sequential model of learning and working. What I mean by this is that most people spend some chunk of their lives studying and at some point, they say, “OK, we’re done studying, now we’re going to start working.” With technology accelerating and bringing in new types of capabilities, though, I think it’s very important to reconsider the sequential approach to learning. We should consider a more parallel approach to learning and working, where we will be open to acquiring new skills and applying those skills as a lifelong learning process.

MARTIN FORD: Some countries are making AI a strategic focus or adopting an explicit industrial policy geared toward AI and robotics. China, in particular, is investing massively in this area. Do you think that there is a race toward advanced AI, and is the US at risk of falling behind?

DANIELA RUS: When I look at what is happening in AI around the world, I think it is amazing. You have China, Canada, France, and the UK, among dozens of others, investing hugely in AI. Many countries are betting their future on AI, and I think we in the US should too. I think we should consider the potential for AI, and we should increase the support and the funding of AI.

DANIELA RUS is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science and Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. Daniela’s research interests are in robotics, artificial intelligence, and data science.

The focus of her work is developing the science and engineering of autonomy, toward the long-term objective of enabling a future with machines pervasively integrated into the fabric of life, supporting people with cognitive and physical tasks. Her research addresses some of the gaps between where robots are today and the promise of pervasive robots: increasing the ability of machines to reason, learn, and adapt to complex tasks in human-centered environments, developing intuitive interfaces between robots and people, and creating the tools for designing and fabricating new robots quickly and efficiently. The applications of this work are broad and include transportation, manufacturing, agriculture, construction, monitoring the environment, underwater exploration, smart cities, medicine, and in-home tasks such as cooking.

Daniela serves as the Associate Director of MIT’s Quest for Intelligence Core, and as Director of the Toyota-CSAIL Joint Research Center, whose focus is the advancement of AI research and its applications to intelligent vehicles. She is a member of the Toyota Research Institute advisory board.

Daniela is a Class of 2002 MacArthur Fellow, a fellow of ACM, AAAI and IEEE, and a member of the National Academy of Engineering and the American Academy of Arts and Sciences. She is the recipient of the 2017 Engelberger Robotics Award from the Robotics Industries Association. She earned her PhD in Computer Science from Cornell University.

Daniela has also worked on two collaborative projects with the Pilobolus Dance company at the intersection of technology and art. Seraph, a pastoral story about human-machine friendship, was choreographed in 2010 and performed in 2010-2011 in Boston and New York City. The Umbrella Project, a participatory performance exploring group behavior, was choreographed in 2012 and performed at PopTech 2012, in Cambridge, Baltimore, and Singapore.