Chapter 23. OREN ETZIONI


If you look at a question like, “Would an elephant fit through a doorway?”, while most people can answer that question almost instantaneously, machines will struggle. What’s easy for one is hard for the other, and vice versa. That is what I call the AI paradox.

CEO, THE ALLEN INSTITUTE FOR ARTIFICIAL INTELLIGENCE

Oren Etzioni is the CEO of the Allen Institute for Artificial Intelligence, an independent organization established by Microsoft co-founder Paul Allen and dedicated to conducting high-impact research in artificial intelligence for the common good. Oren oversees a number of research initiatives, perhaps most notably Project Mosaic, a $125 million effort to build common sense into an artificial intelligence system—something that is generally considered to be one of the most difficult challenges in AI.

MARTIN FORD: Project Mosaic sounds very interesting. Could you tell me about that and the other projects that you’re working on at the Allen Institute?

OREN ETZIONI: Project Mosaic is focused on endowing computers with common sense. A lot of the AI systems that humans have built, to date, are very good at narrow tasks. For example, humans have built AI systems that can play Go very well, but the room may be on fire, and the AI won’t notice. These AI systems completely lack common sense, and that’s something that we’re trying to address with Mosaic.

Our over-arching mission at the Allen Institute for Artificial Intelligence is AI for the common good. We’re investigating how you can use artificial intelligence to make the world a better place. Some of that is through basic research, while the rest has more of an engineering flavor.

A great example of this is a project called Semantic Scholar. In the Semantic Scholar project, we’re looking at the problem of scientific search and scientific hypothesis generation. Scientists are inundated with more and more publications, and, just like all of us when we’re experiencing information overload, they really need help cutting through that clutter; that’s what Semantic Scholar does. It uses machine learning and natural language processing, along with various AI techniques, to help scientists figure out what they want to read and how to locate results within papers.

MARTIN FORD: Does Mosaic involve symbolic logic? I know there was an older project called Cyc that was a very labor-intensive process, where people would try to write down all the logical rules, such as how objects related, and I think it became kind of unwieldy. Is that the kind of thing you’re doing with Mosaic?

OREN ETZIONI: The problem with the Cyc project is that, over 35 years in, it’s really been a struggle for them, for exactly the reasons you said. But in our case, we’re hoping to leverage more modern AI techniques—crowdsourcing, natural language processing, machine learning, and machine vision—in order to acquire knowledge in a different way.

With Mosaic, we’re also starting with a very different point of view. Cyc started, if you will, inside out, where they said, “OK. We’re going to build this repository of common-sense knowledge and do logical reasoning on top of it.” Now, what we said in response is, “We’re going to start by defining a benchmark, where we assess the common-sense abilities of any program.” That benchmark then allows us to measure how much common sense a program has, and once we’ve defined that benchmark (which is not a trivial undertaking) we’ll then build it and be able to measure our progress empirically and experimentally, which is something that Cyc was not able to do.

MARTIN FORD: So, you’re planning to create some kind of objective test that can be used for common sense?

OREN ETZIONI: Exactly! Just the way the Turing test was meant to be a test for artificial intelligence or IQ, we’re going to have a test for common sense for AI.
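
To make the benchmark-first idea concrete, here is a minimal sketch, in Python, of how such a common-sense test might be scored: a handful of multiple-choice questions and a single accuracy number that can be tracked over time. The questions and the stand-in model below are illustrative placeholders, not the actual Mosaic benchmark.

    # Minimal sketch of a benchmark harness for common-sense QA.
    # The questions and the predict() function are hypothetical placeholders.

    benchmark = [
        {"question": "Would an elephant fit through a standard doorway?",
         "choices": ["yes", "no"],
         "answer": "no"},
        {"question": "If you move a plant closer to a window, will its leaves "
                     "grow faster, slower, or at the same rate?",
         "choices": ["faster", "slower", "at the same rate"],
         "answer": "faster"},
    ]

    def always_first_choice(question, choices):
        """Stand-in for a real model; it simply guesses the first choice."""
        return choices[0]

    def evaluate(model, items):
        """Return the fraction of benchmark items the model answers correctly."""
        correct = sum(model(item["question"], item["choices"]) == item["answer"]
                      for item in items)
        return correct / len(items)

    print(f"Common-sense accuracy: {evaluate(always_first_choice, benchmark):.0%}")

The point is simply that once the benchmark is fixed, any program can be plugged in as the model and its progress measured empirically.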

MARTIN FORD: You’ve also worked on systems that attempt to pass college examinations in biology or other subjects. Is that one of the things you’re continuing to focus on?

OREN ETZIONI: One of Paul Allen’s visionary and motivating examples, which he’s investigated in various ways even prior to the Allen Institute for AI, was the idea of a program that could read a chapter in a textbook and then answer the questions in the back of that book. So, we formulated a related problem by saying, “Let’s take standardized tests, and see to what extent we can build programs that score well on these standardized tests.” And that’s been part of our Aristo project in the context of science, and part of our Euclid project in the context of math problems.

For us it is very natural to start working on a problem by defining a benchmark task, and then continually improving performance on it. So, we’ve done that in these different areas.

MARTIN FORD: How is that progressing? Have you had successes there?

OREN ETZIONI: I would say the results have been mixed, to be frank. I would say that we’re state of the art in both science and math tests. In the case of science, we ran a Kaggle competition, where we released the questions, and several thousand teams from all over the world joined. We wanted to see whether we were missing anything, and we found that in fact our technology did quite a bit better than anything else out there, at least among the teams that participated in the competition.

In the sense of being state of the art and having that be a focus for research, and publishing a series of papers and datasets, I think it’s been very positive. What’s negative is that our ability on these tests is still quite limited. We find that, when we take the full test, we’re getting something like a D, which is not a very stellar grade. This is because these problems are quite hard, and often they also involve vision and natural language. But we also realized that a key problem blocking us was actually the lack of common sense. So, that’s one of the things that led us to Project Mosaic.

What’s really interesting here is that there’s something I like to call the AI paradox, where things that are really hard for people—like playing World Championship-level Go—are quite easy for machines. On the other hand, there are things that are easy for a person to do, for example if you look at a question like, “Would an elephant fit through a doorway?”, while most people can answer that question almost instantaneously, machines will struggle. What’s easy for one is hard for the other, and vice versa. That is what I call the AI paradox.

Now, the standardized test writers want to take a particular concept like photosynthesis, or gravity, and have the student apply that concept in a particular context, so that they demonstrate their understanding. It turned out that representing something like photosynthesis to the machine, at a 6th-grade level, is really quite easy, so we have an easy time doing that. But where the machine struggles is when it’s time to apply the concept in a particular situation that requires language understanding and common-sense reasoning.

MARTIN FORD: So, you think your work on Mosaic could accelerate progress in other areas, by providing a foundation of common-sense understanding?

OREN ETZIONI: Yes. I mean, a typical question is: “If you have a plant in a dark room and you move it nearer the window, will the plant’s leaves grow faster, slower, or at the same rate?” A person can look at that question and understand that if you move a plant nearer to the window then there’s more light, and that more light means the photosynthesis proceeds faster, and so the leaves are likely to grow faster. But it turns out that the computer really struggles with this—because the AI doesn’t necessarily understand what you mean when you say that you move a plant nearer the window.

These are some examples that indicate what led us to Project Mosaic, and what some of our struggles have been with things like Aristo and Euclid over the years.

MARTIN FORD: What led you to work in AI, and how did you end up working at the Allen Institute?

OREN ETZIONI: My foray into AI really started in high school when I read the book, Gödel, Escher, Bach: An Eternal Golden Braid. This book explored the themes of logician Kurt Gödel, the artist M. C. Escher, and the composer Johann Sebastian Bach, and expounded many of the concepts that are relatable to AI, such as mathematics and intelligence. This is where my fascination with AI began.

I then went to Harvard, for college, where they were just starting AI classes when I was a sophomore. So, I took my first AI class, and I was completely fascinated. They were not doing much in the way of AI at the time but, just a short subway ride away, I found myself at the MIT AI Lab and I remember that Marvin Minsky, the co-founder of MIT’s AI lab, was teaching. And actually Douglas Hofstadter, the author of Gödel, Escher, Bach, was a visiting professor, so I attended Douglas’s seminar and became even more enchanted with the field of AI.

I got a part-time job as a programmer at the MIT AI Lab and for somebody who was just starting their career, I was, as they say, over the moon. As a result, I decided to go to graduate school to study AI. Graduate school for me was at Carnegie Mellon University where I worked with Tom Mitchell, who is one of the founding fathers of the field of machine learning.

The next step in my career was when I became a faculty member at the University of Washington, where I studied many topics in AI. At the same time, I got involved in a number of AI-based start-ups, which I found to be very exciting. All of this led to me joining the Allen Institute for AI; more specifically, in 2013 Paul Allen’s team reached out to me saying that they wanted me to launch an institute for AI. So, in January 2014 we launched the Allen Institute for AI. Now fast forward to 2018 and here we are today.

MARTIN FORD: As the leader of one of Paul Allen’s institutes, I assume you have a lot of contact with him. What would you say about his motivation and vision for the Allen Institute for AI?

OREN ETZIONI: I’m really lucky in that I’ve had a lot of contact with Paul over the years. When I was first contemplating this position, I read Paul’s book Idea Man, which gave me a sense of both his intellect and his vision. While reading Paul’s book, I realized that he’s really operating in the tradition of the Medicis. He’s a scientific philanthropist and has signed the Giving Pledge, started by Bill and Melinda Gates and Warren Buffett, in which he publicly dedicated most of his wealth to philanthropy. What drives him is that, since the 1970s, he’s been fascinated by AI and the questions around how we can imbue machines with semantics and an understanding of text.

Over the years, Paul and I have had many conversations and email exchanges, and Paul continues to help shape the vision of the institute, not just in terms of the financial support but in terms of the project choices, and the direction of the institute. Paul is still very much hands-on.

MARTIN FORD: Paul has also founded the Allen Institute for Brain Science. Given that the fields are related, is there some synergy between the two organizations? Do you collaborate or share information with the brain science researchers?

OREN ETZIONI: Yes, that’s correct. Way back in 2003, Paul started the Allen Institute for Brain Science. In our corner of the Allen Institutes, we call ourselves “AI2,” partly because it’s a bit tongue-in-cheek, as we’re the Allen Institute for AI, but also because we’re the second Allen Institute.

But going back to Paul’s scientific philanthropy, his strategy is to create a series of these Allen Institutes. There’s a very close exchange of information between us all. But the methodologies that we use are really quite different, in that the Institute for Brain Science is really looking at the physical structure of the brain, while here at AI2 we’re adopting a rather more classical-AI methodology for building software.

MARTIN FORD: So, at AI2 you’re not necessarily trying to build AI by reverse-engineering the brain, you’re actually taking more of a design approach, where you’re building an architecture that’s inspired by human intelligence?

OREN ETZIONI: That is exactly right. When we wanted to figure out flight, we ended up with airplanes, and now we’ve developed Boeing 747s, which are very different from birds in several ways. Some of us within the AI field think it is quite likely that our artificial intelligence will be implemented very differently from human intelligence.

MARTIN FORD: There’s enormous attention currently being given to deep learning and to neural networks. How do you feel about that? Do you think it’s overhyped? Is deep learning likely to be the primary path forward in AI, or just one part of the story?

OREN ETZIONI: I guess my answer would be all of the above. There have been some very impressive achievements with deep learning, and we see that in machine translation, speech recognition, object detection, and facial recognition. When you have a lot of labeled data and a lot of computing power, these models are great.

But at the same time, I do think that deep learning is overhyped because some people say that it’s really putting us on a clear path towards artificial intelligence, possibly general artificial intelligence, and maybe even superintelligence. And there’s this sense that that’s all just around the corner. It reminds me of the metaphor of a kid who climbs up to the top of the tree and points at the moon, saying, “I’m on my way to the moon.”

I think that in fact, we really have a long way to go and there are many unsolved problems. In that sense, deep learning is very much overhyped. I think the reality is that deep learning and neural networks are particularly nice tools in our toolbox, but they are tools that still leave a number of problems, like reasoning, background knowledge, and common sense, largely unsolved.

MARTIN FORD: I do get the sense from talking to some other people, that they have great faith in machine learning as the way forward. The idea seems to be that if we just have enough data, and we get better at learning—especially in areas like unsupervised learning—then common-sense reasoning will emerge organically. It sounds like you would not agree with that.

OREN ETZIONI: The notion of “emergent intelligence” was actually a term that Douglas Hofstadter, the cognitive scientist, talked about back in the day. Nowadays people talk about it in various contexts, with consciousness, and with common sense, but that’s really not what we’ve seen. We do find that people, including myself, have all kinds of speculations about the future, but as a scientist, I like to base my conclusions on the specific data that we’ve seen. And what we’ve seen is people using deep learning as high-capacity statistical models. High capacity is just some jargon that means that the model keeps getting better and better the more data you throw at it.

These are statistical models that, at their core, are based on matrices of numbers being multiplied, added, subtracted, and so on. They are a long way from something where you can see common sense or consciousness emerging. My feeling is that there’s no data to support these claims, and if such data appears, I’ll be very excited, but I haven’t seen it yet.
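
As a rough illustration of what “matrices of numbers being multiplied and added” means in practice, here is a minimal sketch of a two-layer network forward pass in NumPy; the weights are random placeholders rather than a trained model.

    import numpy as np

    # A deep learning model, at its core: matrices multiplied and added,
    # with a simple nonlinearity in between. Weights are random placeholders.
    rng = np.random.default_rng(0)

    x = rng.normal(size=(1, 4))                    # one input with 4 features
    W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)  # first layer
    W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)  # second layer

    hidden = np.maximum(0, x @ W1 + b1)            # multiply, add, ReLU
    logits = hidden @ W2 + b2                      # multiply and add again
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over 3 classes

    print(probs)   # numbers in, numbers out; no common sense involved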

MARTIN FORD: What projects would you point to, in addition to what you’re working on, that are really at the forefront? What are the most exciting things happening in AI? The places you’d look for the next big developments. Would that be AlphaGo, or are there other things going on?

OREN ETZIONI: Well, I think DeepMind is where some of the most exciting work is currently taking place.

I’m actually more excited about what they called AlphaZero than about AlphaGo; the fact that they were able to achieve excellent performance without hand-labeled examples is, I think, quite exciting. At the same time, I think everybody in the community agrees that when you’re dealing with board games, it’s black and white, there’s an evaluation function, it’s a very limited realm. So, I would look to current work on robotics and work on natural language processing to see some excitement. And I think that there’s also some work in a field called “transfer learning,” where people are trying to map what’s learned in one task over to another.

Geoffrey Hinton is also trying to develop a different approach to deep learning. And at AI2, we have 80 people looking at how you put together symbolic approaches and knowledge with the deep learning paradigm; I think that’s also very exciting.

There’s also “zero-shot learning,” where people are trying to build programs that can handle something they’re seeing for the very first time, and “one-shot learning,” where a program sees a single example and is able to generalize from it. I think that’s exciting. Brenden Lake, who’s an assistant professor of Psychology and Data Science at NYU, is doing some work along those lines.

Tom Mitchell’s work on lifelong learning at CMU is also very interesting—they’re trying to build a system that looks more like a person: it doesn’t just run through a dataset, build a model, and then it’s done. Instead, it continually operates and continually tries to learn, then learns on top of what it has already learned, over an extended period of time.

MARTIN FORD: I know there’s an emerging technique called “curriculum learning,” where you start with easier things and then move to the harder things, in the same way a human student would.

OREN ETZIONI: Exactly. But if we just take a step back for a minute here, we can see that AI is a field that’s rife with bombastic and overly grandiose misnomers. In the beginning, the field was called “artificial intelligence,” and to me, that’s not the best name. Then there’s “human learning” and “machine learning,” both of which sound very grandiose, but the set of techniques actually used is often very limited. All these terms that we just talked about—and curriculum learning is a great example—refer to approaches where we’re simply trying to extend a relatively limited set of statistical techniques so that they start to take on more of the characteristics of human learning.
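
For readers who want a concrete picture of the curriculum-learning idea mentioned above, here is a minimal, hypothetical sketch: examples are sorted by an assumed difficulty measure and training proceeds in stages over a growing, easy-first slice of the data. The difficulty and train_step functions are placeholders, not any particular system’s API.

    # Hypothetical sketch of curriculum learning: sort examples from easy to
    # hard and train in stages on a growing, easy-first slice of the data.

    def difficulty(example):
        return len(example)        # e.g. treat longer inputs as harder

    def train_step(model, batch):
        pass                       # placeholder for one gradient update

    def curriculum_train(model, dataset, stages=3, batch_size=32):
        ordered = sorted(dataset, key=difficulty)
        for stage in range(1, stages + 1):
            cutoff = len(ordered) * stage // stages   # grow the curriculum
            subset = ordered[:cutoff]
            for i in range(0, len(subset), batch_size):
                train_step(model, subset[i:i + batch_size])
        return model

    # toy usage: the "examples" here are just strings of varying length
    curriculum_train(model=None, dataset=["ab", "abcdef", "abcd", "a"])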

MARTIN FORD: Let’s talk about the path toward artificial general intelligence. Do you believe it is achievable, and if so, do you think it’s inevitable that we will ultimately get to AGI?

OREN ETZIONI: Yes. I’m a materialist, so I don’t believe there’s anything in our brain other than atoms and consequently, I think that thought is a form of computation and so I think that it’s quite likely that over some period of time we’ll figure out how to do it in a machine.

I do recognize that maybe we’re just not smart enough to do that, even with the help of a computer, but my intuition is that we will likely achieve AGI. As for the time line though, we’re very far from AGI because there are so many problems that need to be solved that we haven’t even been able to define appropriately for the machine.

This is one of the subtlest things in the whole field. People see these amazing achievements, like a program that beats people in Go and they say, “Wow! Intelligence must be around the corner.” But when you get to these more nuanced things like natural language, or reasoning over knowledge, it turns out that we don’t even know, in some sense, the right questions to ask.

Pablo Picasso is famous for saying that computers are useless because they answer questions rather than asking them. So, when we define a question rigorously, when we can define it mathematically or as a computational problem, we’re really good at hammering away at that and figuring out the answer. But there are a lot of questions that we don’t yet know how to formulate appropriately, such as: how can we represent natural language inside a computer? Or, what is common sense?

MARTIN FORD: What are the primary hurdles we need to overcome to achieve AGI?

OREN ETZIONI: When I talk to people working in AI about these questions, such as when we might achieve AGI, one of the things that I really like to do is identify what I call canaries in the coal mine. In the same way that coal miners put canaries in the mines to warn them of dangerous gases, I feel like there are certain stepping stones—and if we achieved those, then AI would be in a very different world.

So, one of those stepping stones would be an AI program that can really handle multiple, very different tasks: an AI program that’s able to do both language and vision, that can play board games and cross the street, that can walk and chew gum. Yes, that is a joke, but I think it is important for AI to have the ability to do much more complex things.

Another stepping stone is for these systems to be a lot more data-efficient. How many examples do you need to learn from? If you have an AI program that can really learn from a single example, that feels meaningful. For example, I can show you a new object, and you look at it, you hold it in your hand, and you think, “I’ve got it.” Now, I can show you lots of different pictures of that object, or different versions of that object in different lighting conditions, partially obscured by something, and you’d still be able to say, “Yep, that’s the same object.” But machines can’t do that from a single example yet. That would be a real stepping stone to AGI for me.
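
One common way researchers approach this kind of single-example recognition is nearest-neighbor matching in an embedding space. The sketch below is a hypothetical illustration of that idea, with embed standing in for a real, pretrained feature extractor; it does not describe any AI2 system.

    import numpy as np

    # Hypothetical sketch of one-shot recognition: embed a single reference
    # example of a new object, then decide whether a query looks like it by
    # cosine similarity. `embed` stands in for a real feature extractor.

    def embed(image):
        return np.asarray(image, dtype=float).ravel()

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def matches_reference(reference_image, query_image, threshold=0.9):
        """True if the query is similar enough to the single reference example."""
        return cosine(embed(reference_image), embed(query_image)) > threshold

    # toy usage: the "images" are just tiny arrays here
    reference = [[0.9, 0.1], [0.8, 0.2]]
    query = [[0.85, 0.15], [0.75, 0.25]]   # same object, different "lighting"
    print(matches_reference(reference, query))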

Self-replication is another dramatic stepping stone towards AGI. Can we have an AI system that is physically embodied and that can make a copy of itself? That would be a huge canary in the coal mine, because then that AI system could make lots of copies of itself. People have quite a laborious and involved process for making copies of themselves, but AI systems currently cannot do it at all: you can copy the software easily, but not the hardware. Those are some of the major stepping stones to AGI that come to mind.

MARTIN FORD: And maybe the ability to use knowledge in a different domain would be a core capability. You gave the example of studying a chapter in a textbook. To be able to acquire that knowledge, and then not just answer questions about it, but actually be able to employ it in a real-world situation. That would seem to be at the heart of true intelligence.

OREN ETZIONI: I completely agree with you, and answering the questions is only a step along the way. What really matters is employing that knowledge in the real world, and in unanticipated situations.

MARTIN FORD: I want to talk about the risks that are associated with AI, but before we do that, do you want to say more about what you view as some of the greatest benefits, some of the most promising areas where AI could be deployed?

OREN ETZIONI: There are two examples that stand out to me. The first is self-driving cars: we have upwards of 35,000 deaths each year on US highways alone, we have on the order of a million accidents where people are injured, and studies have shown that we could cut a substantial fraction of that by using self-driving cars. I get very excited when I see how AI can directly translate to technologies that save lives.

The second example, which we’re working on, is science—which has been such an engine of prosperity and economic growth, of the improvement of medicine, and generally speaking of progress for humanity. Yet despite these advancements, there are still so many challenges, whether it’s Ebola, or cancer, or superbugs that are resistant to antibiotics. Scientists need help to solve these problems and just to move faster. A project like Semantic Scholar has the potential to save people’s lives by enabling better medical research and better medical outcomes.

My colleague, Eric Horvitz, is one of the most thoughtful people on these topics. He has a great quote when he responds to people who are worried about AI taking lives. He says that actually, it’s the absence of AI technology that is already killing people. The third-leading cause of death in American hospitals is physician error, and a lot of that could be prevented using AI. So, our failure to use AI is really what’s costing lives.

MARTIN FORD: Since you mentioned self-driving cars, let me try to pin you down on a timeframe. Imagine you’re in Manhattan, in some random location, and you call for a car. A self-driving car arrives with no one inside, and it’s going to take you to some other random location. When do you think we will see that as a widely available consumer service?

OREN ETZIONI: I would say that is probably somewhere between 10 and 20 years away from today.

MARTIN FORD: Let’s talk about the risks. I want to start with the one that I’ve written a lot about, which is the potential economic disruption, and the impact on the job market. I think it’s quite possible that we’re on the leading edge of a new industrial revolution, which might really have a transformative impact, and maybe will destroy or deskill a lot of jobs. What do you think about that?

OREN ETZIONI: I very much agree with you, in the sense that I have tried, as you have, not to get overly focused on the threats of superintelligence, because I think we should spend less time on imaginary problems and more on the real ones. And we have some very real problems, one of the most prominent of them, if not the most prominent, being jobs. There’s a long-term trend towards the reduction of manufacturing jobs, and automation, both computer automation and AI-based automation, now has the potential to substantially accelerate that trend. So, I do think that there’s a very real issue here.

One point that I would make is that demographics are also working in our favor. We’re having fewer children on average as a species, people are living longer, and society is aging—especially after the baby boom. So, for the next 20 years, I think we’re going to be seeing increasing automation, but we’re also going to see the number of workers not growing as quickly as it did before. Another way that demographic factors work in our favor is that, while for the last two decades more women were entering the workforce and the percentage of female participation in the workforce was going up, this effect has now plateaued. In other words, women who want to be in the workforce are now already there. So again, I think that for the next 20 years we’re not going to see the number of workers increasing. The risk of automation taking jobs away from people is still serious, though, I think.

MARTIN FORD: In the long run, what do you think of the idea of a universal basic income, as a way to adapt society to the economic consequences of automation?

OREN ETZIONI: I think that what we’ve already seen with agriculture, and with manufacturing, is clearly going to recur. Let’s say we don’t argue about the exact timing. It’s very clear that, in the next 10 to 50 years, many jobs are either going to go away completely or those jobs are going to be radically transformed—they’ll be done a lot more efficiently, with fewer people.

As you know, the number of people working in agriculture is much smaller than it was in the past, and the jobs involved in agriculture are now much more sophisticated. So, when that happens, we have this question: “What are the people going to do?” I don’t necessarily know, but I do have one contribution to this conversation, which I wrote up as an article for Wired in February 2017 titled Workers displaced by automation should try a new job: Caregiver. (https://www.wired.com/story/workers-displaced-by-automation-should-try-a-new-job-caregiver/)

In that Wired article, I said that some of the most vulnerable workers, in the economic situation that we’re discussing here, are people who don’t have a high-school diploma or a college degree. I don’t think it’s likely that we’re going to be successful in turning coal miners into data miners, giving these people technical retraining so that they somehow become part of the new economy very easily. I think that’s a major challenge.

I also don’t think that universal basic income, at least given the current climate, where we can’t even achieve universal health care, or universal housing, is going to be easy either.

MARTIN FORD: It seems pretty clear that any viable solution to this problem will be a huge political challenge.

OREN ETZIONI: I don’t know that there is a general solution or a silver bullet, but my contribution to the conversation is to think about jobs that are very strongly human focused. Think of the jobs providing emotional support: having coffee with somebody or being a companion who keeps somebody company. I think that those are the jobs that when we think about our elderly, when we think about our special-needs kids, when we think about various populations like that, those are the ones that we really want a person to engage with—rather than a robot.

If we want society to allocate resources toward those kinds of jobs, to give the people engaged in those jobs better compensation and greater dignity, then I think that there’s room for people to take on those jobs. That said, there are many issues with my proposal, I don’t think it’s a panacea, but I do think it’s a direction that’s worth investing in.

MARTIN FORD: Beyond the job market impact, what other things do you think we genuinely should be concerned about in terms of artificial intelligence in the next decade or two?

OREN ETZIONI: Cybersecurity is already a huge concern, and it becomes much more so if we have AI. The other big concern for me is autonomous weapons, which is a scary proposition, particularly the ones that can make life-or-death decisions on their own. But what we just talked about, the risks to jobs—that is still the thing that we should be most concerned about, even more so than security and weapons.

MARTIN FORD: How about existential risk from AGI, and the alignment or control problem with regard to a superintelligence. Is that something that we should be worried about?

OREN ETZIONI: I think that it’s great for a small number of philosophers and mathematicians to contemplate the existential threat, so I’m not dismissing it out of hand. At the same time, I don’t think those are the primary things that we should be concerned about, nor do I think that there’s that much that we can do at this point about that threat.

I think that one of the interesting things to consider is if a superintelligence emerges, it would be really nice to be able to communicate with it, to talk to it. The work that we’re doing at AI2—and that other people are also doing—on natural language understanding, seems like a very valuable contribution to AI safety, at least as valuable as worrying about the alignment problem, which ultimately is just a technical problem having to do with reinforcement learning and objective functions.

So, I wouldn’t say that we’re underinvesting in being prepared for AI safety, and certainly some of the work that we’re doing at AI2 is actually implicitly a key investment in AI safety.

MARTIN FORD: Any concluding thoughts?

OREN ETZIONI: Well, there’s one other point I wanted to make that I think people often miss in the AI discussion, and that’s the distinction between intelligence and autonomy (https://www.wired.com/2014/12/ai-wont-exterminate-us-it-will-empower-us/).

We naturally think that intelligence and autonomy go hand in hand. But you can have a highly intelligent system with essentially no autonomy, and the example of that is a calculator. A calculator is a trivial example, but something like AlphaGo that plays brilliant Go but won’t play another game until somebody pushes a button: that’s high intelligence and low autonomy.

You can also have high autonomy and low intelligence. My favorite kind of tongue-in-cheek example is a bunch of teenagers drinking on a Saturday night: that’s high autonomy but low intelligence. A real-world example that we’ve all experienced would be a computer virus, which can have low intelligence but quite a strong ability to bounce around computer networks. My point is that we should understand that the systems we’re building have these two dimensions to them, intelligence and autonomy, and that it’s often the autonomy that is the scary part.

MARTIN FORD: Drones or robots that could decide to kill without a human in the loop to authorize that action is something that is really generating a lot of concern in the AI community.

OREN ETZIONI: Exactly, when they’re autonomous and they can make life-and-death decisions on their own. Intelligence, on the other hand, could actually help save lives, by making those weapons more targeted, or by having them abort when the human cost is unacceptable, or when the wrong person or building is targeted.

I want to emphasize the fact that a lot of our worries about AI are really worries about autonomy, and that autonomy is something that we can choose, as a society, to mete out.

I like to think of “AI” as standing for “augmented intelligence,” just as it is with systems like Semantic Scholar and like with self-driving cars. One of the reasons that I am an AI optimist, and feel so passionate about it, and the reason that I’ve dedicated my entire career to AI since high school, is that I see this tremendous potential to do good with AI.

MARTIN FORD: Is there a place for regulation, to address that issue of autonomy? Is that something that you would advocate?

OREN ETZIONI: Yes, I think that regulation is both inevitable and appropriate when it comes to powerful technologies. I would focus on regulating the applications of AI—so AI cars, AI clothes, AI toys, and AI in nuclear power plants, rather than the field itself. Note that the boundary between AI and software is quite murky!

We’re in a global competition for AI, so I wouldn’t rush to regulate AI per se. Of course, existing regulatory bodies like the National Transportation Safety Board are already looking at AI cars and the recent Uber accident. I think that regulation is very appropriate, and that it will happen and should happen.

Oren Etzioni is the CEO of the Allen Institute for Artificial Intelligence (abbreviated as AI2), an independent, non-profit research organization established by Microsoft co-founder Paul Allen in 2014. AI2, located in Seattle, employs over 80 researchers and engineers with the mission of “conducting high-impact research and engineering in the field of artificial intelligence, all for the common good.”

Oren received a bachelor’s degree in computer science from Harvard in 1986. He then went on to obtain a PhD from Carnegie Mellon University in 1991. Prior to joining AI2, Oren was a professor at the University of Washington, where he co-authored over 100 technical papers. Oren is a fellow of the Association for the Advancement of Artificial Intelligence, and is also a successful serial entrepreneur, having founded or co-founded a number of technology startups that were acquired by larger firms such as eBay and Microsoft. He helped to pioneer meta-search (1994), online comparison shopping (1996), machine reading (2006), open information extraction (2007), and semantic search of the academic literature (2015).