Chapter 7. FEI-FEI LI


If we look around, whether you’re looking at AI groups in companies, AI professors in academia, AI PhD students or AI presenters at top AI conferences, no matter where you cut it: we lack diversity. We lack women, and we lack under-represented minorities.

PROFESSOR OF COMPUTER SCIENCE, STANFORD
CHIEF SCIENTIST, GOOGLE CLOUD

Fei-Fei Li is Professor of Computer Science at Stanford University, and Director of the Stanford Artificial Intelligence Lab (SAIL). Working in areas of computer vision and cognitive neuroscience, Fei-Fei builds smart algorithms that enable computers and robots to see and think, inspired by the way the human brain works in the real world. Fei-Fei is Chief Scientist, AI and Machine Learning at Google Cloud, where she works to advance and democratize AI. Fei-Fei is a strong proponent of diversity and inclusion in artificial intelligence and co-founded AI4ALL, an organization to attract more women and people from underrepresented groups into the field.

MARTIN FORD: Let’s talk about your career trajectory. How did you first become interested in AI, and how did that lead to your current position at Stanford?

FEI-FEI LI: I’ve always been something of a STEM student, so the sciences have always appealed to me, and in particular I love physics. I went to Princeton University, where I majored in physics, and a by-product of studying physics is that I became fascinated by the fundamentals of the universe. Questions like: where does the universe come from? What does it mean to exist? Where is the universe going? These are the fundamental quests of human curiosity.

In my research I noticed something really interesting: since the beginning of the 20th century, we’ve seen a great awakening of modern physics, driven by the likes of Einstein and Schrödinger, who towards the end of their lives became fascinated not only by the physical matter of the universe but by life, by biology, and by the fundamental questions of being. I became very fascinated by these questions as well. As I continued to study, I realized that my real interest in life is not to discover the nature of physical matter but to understand intelligence—which is what defines human life.

MARTIN FORD: Was this when you were in China?

FEI-FEI LI: I was in the US, studying physics at Princeton, when my intellectual interest in AI and neuroscience began. When I applied for a PhD I was very lucky, and to this day, it’s still a bit of a rare combination to do what I did—which was both neuroscience and AI.

MARTIN FORD: Do you think then that it’s an important advantage to study both of those fields rather than to focus exclusively on a computer-science-driven approach?

FEI-FEI LI: I think it gives me a unique angle because I consider myself a scientist, and so when I approach AI, what drives me is scientific hypotheses and the scientific quest. The field of AI is about thinking machines, making machines intelligent, and I like to work on problems at the core of conquering machine intelligence.

Coming from a cognitive neuroscience background, I take an algorithmic point of view, and a detailed modeling point of view. So, I find the connection between the brain and machine learning fascinating. I also think a lot about human-inspired tasks that drive AI advances: the real-world tasks that our natural intelligence had to solve through evolution. My background has in this way given me a unique angle and approach to working with AI.

MARTIN FORD: Your focus has really been on computer vision, and you’ve made the point that, in evolutionary terms, the development of the eye likely led to the development of the brain itself. The brain was providing the compute power to interpret images, and so maybe understanding vision is the gateway to intelligence. Am I correct in that line of thinking?

FEI-FEI LI: Yes, you’re right. Language is a huge part of human intelligence, of course: along with speech, tactile awareness, decision-making, and reasoning. But visual intelligence is embedded in all of these things.

If you look at the way nature designed our brain, half of the human brain is involved in visual intelligence, and that visual intelligence is intimately related to our motor system, to decision-making, to emotion, to intention, and to language. The human brain does not just happen to recognize isolated objects; these functions are an integral part of what deeply defines human intelligence.

MARTIN FORD: Could you sketch out some of the work you’ve done in computer or machine vision?

FEI-FEI LI: During the first decade of the 21st century, object recognition was the holy grail that the field of computer vision was working towards. Object recognition is a building block for all vision. As humans, if we open our eyes and look around our environment, we recognize almost every object we see. Recognition is critically important for us to be able to navigate the world, understand the world, communicate about the world, and do things in the world. It was a very lofty goal, and we were attacking it with the machine learning tools of that time.

Then in the mid-2000s, as I transitioned from being a PhD student to becoming a professor, it became obvious that the field of computer vision was stuck, and that the machine learning models were not making huge progress. Back then, the whole international community was benchmarking object recognition tasks with around 20 different objects.

So, along with my students and collaborators, we started thinking deeply about how we might make a quantum leap forward. We began to see that working with such a small-scale problem involving 20 objects was just not going to be sufficient to reach the lofty goal of object recognition. I was very much inspired by human cognition at this point, and by the developmental story of any child, where the first few years of development involve a huge amount of data. Children engage in a huge amount of experimenting with their world, seeing the world, and just taking it in. Coincidentally, it was just at this time that the internet had boomed into a global phenomenon that provided a lot of big data.

I wanted to do a pretty crazy project that would take all the pictures we could find on the internet, organize them into concepts that mattered to humans, and label those images. As it turned out, this crazy idea turned into the project called ImageNet, with 15 million images organized into 22,000 labels.

We immediately open-sourced ImageNet for the world, because to this day I believe in the democratization of technology. We released the entire set of 15 million images to the world and started to run international competitions for researchers to work on the ImageNet problems: not on tiny, small-scale problems but on problems that mattered to humans and to applications.

Fast-forward to 2012, and I think we see the turning point in object recognition for a lot of people. The winning entry of the 2012 ImageNet competition brought about a convergence of ImageNet data, GPU computing power, and convolutional neural networks as the algorithm. Geoffrey Hinton’s group wrote that seminal paper, which, for me, was Phase One in achieving the holy grail of object recognition.
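For readers who want to see what this kind of object recognition looks like in practice today, here is a minimal sketch using a pretrained convolutional network via PyTorch’s torchvision, a modern stand-in for that 2012-era pipeline. The model choice and the image path are illustrative assumptions, not details from the interview.

```python
# A minimal sketch of ImageNet-style object recognition with a pretrained
# convolutional network (torchvision); model and file path are illustrative.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing: resize, center-crop, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.eval()  # inference mode: no dropout or batch-norm updates

image = Image.open("cat.jpg")           # hypothetical input image
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    logits = model(batch)                # scores over 1,000 ImageNet classes
    label = logits.argmax(dim=1).item()  # index of the predicted category
print(f"Predicted ImageNet class index: {label}")
```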

MARTIN FORD: Did you continue this project?

FEI-FEI LI: For the next two years, I worked on taking object recognition a step further. If we again look at human development, babies start by babbling, then a few words, and then they start making sentences. I have a two-year-old daughter and a six-year-old son. The two-year-old is making a lot of sentences, which is huge developmental progress, something that we humans do as intelligent agents and animals. Inspired by this human development, I started working on the problem of how to enable computers to speak sentences when they see pictures, rather than just labeling a chair or a cat.

We worked on this problem using deep learning models for a few years. In 2015, I talked about the project at the TED2015 conference. The title of my talk was How we’re teaching computers to understand pictures, and I discussed enabling computers to understand the content of an image and summarize it in a natural, human-language sentence that can then be communicated.
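To make the pictures-to-sentences idea concrete, here is a skeletal, hypothetical sketch of an encoder-decoder captioning model in PyTorch: a CNN encodes the image into a feature vector, and an LSTM decodes that vector into a word sequence. The architecture, sizes, and names are illustrative assumptions, not the exact models from this research.

```python
# Skeleton of a neural image-captioning model: CNN encoder, LSTM decoder.
import torch
import torch.nn as nn
from torchvision import models

class CaptionModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        cnn = models.resnet18(weights=None)
        cnn.fc = nn.Linear(cnn.fc.in_features, embed_dim)  # image -> embedding
        self.encoder = cnn
        self.embed = nn.Embedding(vocab_size, embed_dim)   # word -> embedding
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)       # hidden -> word scores

    def forward(self, images, captions):
        # Prepend the image feature as the first "token" of the sequence.
        img_feat = self.encoder(images).unsqueeze(1)   # (B, 1, E)
        words = self.embed(captions)                   # (B, T, E)
        seq = torch.cat([img_feat, words], dim=1)      # (B, T+1, E)
        hidden, _ = self.decoder(seq)
        return self.out(hidden)                        # next-word logits

model = CaptionModel(vocab_size=10_000)
dummy_imgs = torch.randn(2, 3, 224, 224)        # two fake RGB images
dummy_caps = torch.randint(0, 10_000, (2, 8))   # two fake 8-word captions
logits = model(dummy_imgs, dummy_caps)          # shape (2, 9, 10000)
```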

MARTIN FORD: The way algorithms are trained is quite different from what happens with a human baby or young child. Children for the most part are not getting labeled data—they just figure things out. And even when you point to a cat and say, “look there’s a cat,” you certainly don’t have to do that a hundred thousand times. Once or twice is probably enough. There’s a pretty remarkable difference in terms of how a human being can learn from the unstructured, real-time data we meet in the world, versus the supervised learning that’s done with AI now.

FEI-FEI LI: You totally nailed it, and this is why, as an AI scientist, I wake up so excited every day: there’s so much to work on. Some part of the work takes inspiration from humans, but a large part of the work does not resemble humans at all. As you say, the success today of neural networks and deep learning mostly involves supervised pattern recognition, which means that it’s a very narrow sliver of capability compared to general human intelligence.

I gave a talk at Google’s I/O conference this year, where I again used the example of my two-year-old daughter. A couple of months ago, I watched her on a baby monitor escape from her crib by finding the cracks in the system, a potential path out of the crib. I saw her open her sleeping bag, which I had specifically modified to prevent her from opening it and getting herself out. That kind of coordinated intelligence, combining vision, motor control, planning, reasoning, emotion, intention, and persistence, is really nowhere to be seen in our current AI. We’ve got a lot of work to do, and it’s really important to recognize that.

MARTIN FORD: Do you think there will likely be breakthroughs that allow computers to learn more like children? Are people actively working on how to solve this problem?

FEI-FEI LI: There are absolutely people working on that, especially within the research community. A lot of us are working on the next horizon problem. In my own lab at Stanford, we are working on robotic learning problems where the AI is learning by imitation, which is much more natural than learning by supervised labels.

As kids, we watch how other humans do things and then we do them ourselves; so, the field is now starting to get into inverse reinforcement learning algorithms and neural programming algorithms. There is a lot of new exploration: DeepMind is doing that, Google Brain is doing that, Stanford is doing that, and MIT is doing that. We also see a lot of effort in the research community to look at algorithms beyond supervised learning.
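As a concrete illustration of learning by imitation, here is a toy behavioral cloning sketch, one of the simplest forms of imitation learning: instead of labeled categories, the supervision is expert demonstrations, i.e. (observation, action) pairs. All data and dimensions below are synthetic assumptions.

```python
# Toy behavioral cloning: train a policy to reproduce an expert's actions.
import torch
import torch.nn as nn

obs_dim, act_dim = 16, 4

# Pretend we logged 1,000 steps of an expert demonstrating a task.
expert_obs = torch.randn(1000, obs_dim)
expert_act = torch.randint(0, act_dim, (1000,))

# The policy maps observations to scores over possible actions.
policy = nn.Sequential(
    nn.Linear(obs_dim, 64), nn.ReLU(),
    nn.Linear(64, act_dim),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    # A supervised step, but the "labels" are the expert's chosen actions.
    loss = loss_fn(policy(expert_obs), expert_act)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At deployment, the agent acts by imitating: pick the most likely action.
action = policy(torch.randn(1, obs_dim)).argmax(dim=1)
```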

Predicting when a breakthrough will come is much harder. I learned, as a scientist, not to predict scientific breakthroughs, because they come serendipitously, when a lot of ingredients in history converge. But I’m very hopeful that in our lifetime we’ll be seeing a lot more AI breakthroughs, given the incredible amount of global investment in this area.

MARTIN FORD: I know you’re the chief scientist for Google Cloud. A point that I always make when I give presentations is that AI and machine learning are going to be like a utility—almost like electricity—something that can be deployed almost anywhere. It seems to me that integrating AI into the cloud is one of the first steps toward making the technology universally available. Is that in line with your vision?

FEI-FEI LI: As a professor, every seven or eight years there is a built-in encouragement to take a sabbatical, where we leave the university for a while to explore a different line of work or to refresh ourselves. Two years ago, I was very sure that I wanted to join industry to really democratize AI technologies, because AI has advanced to a point where some of the technology that now works, like supervised learning and pattern recognition, is doing good things for society. And like you say, if you think about disseminating a technology like AI, the best and biggest platform is the cloud, because no other computing platform humanity has invented reaches as many people. Google Cloud alone, at any moment, is empowering, helping, or serving billions of people.

I was therefore very happy to be invited to be chief scientist of Google Cloud, where the mission is to democratize AI. This is about creating products that empower businesses and partners, and then taking the feedback from customers and working with them closely to improve the technology itself. This way we can close the loop between the democratization of AI and the advancement of AI. I’m overseeing both the research side of Cloud AI and the product side of Cloud AI, and I’ve been doing that since January 2017.

An example of what we’re doing is a product we created called AutoML. This is a unique product on the market that lowers the entry barrier of AI as much as possible—so that AI can be delivered to people who don’t do AI. The customer pain point is that so many businesses need customized models to help them tackle their own problems. In the context of computer vision, if I were a retailer, I might need a model to recognize my logo. If I were National Geographic magazine, I might need a model to recognize wild animals. If I worked in the agricultural industry, I might need a model to recognize apples. People have all kinds of use cases, but not everybody has the machine learning expertise to create the AI.

Seeing this problem, we built the AutoML product so that as long as customers know what they need, such as, “I need it for apples versus oranges,” and bring the training data, we will do everything for them. From their perspective, it’s all automatic and delivers a customized machine learning model for their problem. We rolled AutoML out in January, and tens of thousands of customers have signed up for the service. It’s been very rewarding to see this democratization of cutting-edge AI.
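The interview doesn’t describe AutoML’s internals, but the underlying idea of producing a customized model from a customer’s labeled examples can be sketched with ordinary transfer learning: take a network pretrained on ImageNet, replace its final layer, and fine-tune on the small custom dataset. The following is a hypothetical illustration, not the product’s implementation; the folder layout and hyperparameters are assumptions.

```python
# Transfer-learning sketch of a customized "apples vs. oranges" classifier.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects data/train/apples/*.jpg and data/train/oranges/*.jpg
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                     # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, 2)   # new two-class head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:   # one pass, for illustration only
    loss = loss_fn(model(images), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```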

MARTIN FORD: It sounds like AutoML, if it makes machine learning accessible to less technical people, could easily result in a sort of explosion of all kinds of AI applications created by different people with different objectives.

FEI-FEI LI: Yes, exactly! In fact, I used a Cambrian explosion analogy in one of my presentations.

MARTIN FORD: Today there is an enormous focus on neural networks and deep learning. Do you think that’s the way forward? You obviously believe that deep learning will be refined over time, but do you think that it is really the foundational technology that’s going to lead AI into the future? Or is there something else out there that’s completely different, where we end up throwing away deep learning and backpropagation and all of that, and have something entirely new?

FEI-FEI LI: If you look at human civilization, the path of scientific progress is always built upon undoing yourself. There isn’t a moment in history where scientists would have said that there’s nothing more to come, that there’s no refinement left. This is especially true for AI, which is a nascent field that’s only been around for about 60 years. Compared to fields like physics, biology, and chemistry, which have hundreds if not thousands of years of history, AI still has a long way to go.

As an AI scientist, I do not philosophically believe we’ve finished our task, or that convolutional neural networks and deep learning are the answers to everything—not by a huge margin. As you said earlier, a lot of problems cannot be solved with labeled data or with lots of training examples. Looking at the history of civilization and the things it’s taught us, we cannot possibly think we’ve reached the destination yet. As my story about my two-year-old escaping the crib tells us, we don’t have any AI that is close to that level of sophistication.

MARTIN FORD: What particular projects would you point to that you think are at the forefront of research in AI?

FEI-FEI LI: In my own lab, we have been doing a project that goes way beyond ImageNet, called the Visual Genome project. In this project, we’ve thought deeply about the visual world, and we have recognized that ImageNet is very impoverished: it just gives discrete labels to the objects in a picture or visual scene, whereas in real visual scenes, objects are connected, and humans and objects are doing a lot of things together. There’s also the connection between vision and language, so the Visual Genome project is really what one would call the next step beyond ImageNet. It’s designed to focus on the relationships between the visual world and our language, and we’ve been doing a lot of work in pushing that forward.
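A tiny sketch can illustrate the kind of annotation that goes beyond ImageNet’s flat labels: relationships expressed as (subject, predicate, object) triples grounded in image regions, which is the flavor of data Visual Genome provides. The data structures and example values below are hypothetical, not actual Visual Genome records.

```python
# Scene-graph-style annotation: objects as regions, plus relationships.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    box: tuple  # (x, y, width, height) in pixels

@dataclass
class Relationship:
    subject: Region
    predicate: str
    obj: Region

person = Region("woman", (40, 30, 120, 260))
horse = Region("horse", (150, 90, 300, 220))
scene_graph = [
    Relationship(person, "is riding", horse),
    Relationship(person, "is wearing", Region("helmet", (70, 20, 40, 35))),
]

for rel in scene_graph:
    print(rel.subject.name, rel.predicate, rel.obj.name)
```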

Another direction I’m super excited about involves AI and healthcare. We’re currently working on a project in my lab that’s inspired by one particular element of healthcare—care itself. The topic of care touches a lot of people. Care is the process of looking after patients, but if you look at our hospital system, for example, there are a lot of inefficiencies: low-quality care, lack of monitoring, errors, and high costs associated with the whole healthcare delivery process. Just look at the mistakes in the surgical world, and the lack of hand hygiene that can result in hospital-acquired infections. There’s also the lack of help and awareness in senior home care. There are a lot of problems in care.

We recognized, about five years ago, that the technology that could help healthcare delivery is very similar to the core technology of self-driving cars and AI. We need smart sensors to sense the environment, and we need algorithms to make sense of the data collected and give feedback to clinicians, nurses, patients, and family members. So, we started pioneering this area of research: AI for healthcare delivery. We’re working with Stanford Children’s Hospital, Utah’s Intermountain Hospital, and San Francisco’s On Lok senior homes. We recently published an opinion piece in the New England Journal of Medicine. I think it’s very exciting because it’s using cutting-edge AI technology, like the technology self-driving cars use, applied to an area that is so deeply critical for human needs and wellbeing.

MARTIN FORD: I want to talk about the path to artificial general intelligence (AGI). What do you think are the major hurdles that we would need to surmount?

FEI-FEI LI: I want to answer your question in two parts. The first part I’ll answer more narrowly on the question about the path to AGI, and in the second part, I want to talk about what I think the framework and frame of mind should be for the future development of AI.

So, let’s first define AGI, because this isn’t about AI versus AGI: it’s all on one continuum. We all recognize today’s AI is very narrow and task-specific, focusing on pattern recognition with labeled data, but as we make AI more advanced, that is going to be relaxed, and so in a way the boundary between the future of AI and AGI is blurred. I guess the general definition of AGI would be the kind of intelligence that is contextualized, situationally aware, nuanced, multifaceted, and multidimensional—and one that has the kind of learning capability that humans have, which works not only through big data but also through unsupervised learning, reinforcement learning, virtual learning, and various other kinds of learning.

If we use that as a definition of AGI, then I think the path to AGI is a continued exploration of algorithms that go beyond just supervised learning. I also believe that it’s important to recognize the need for interdisciplinary collaboration with brain science, cognitive science, and behavioral science. A lot of AI’s technology, whether it’s the hypothesis of a task, the evaluation, or the conjecture of algorithms, can touch on related areas like brain science and cognitive science, so it’s really critical that we invest in and advocate for this collaborative, interdisciplinary approach. I’ve actually written about this in my New York Times opinion piece from March 2018, titled How to Make A.I. That’s Good for People.

MARTIN FORD: Right, I read that, and I know you’ve been advocating a comprehensive framework for the next phase of AI development.

FEI-FEI LI: Yes, I did that because AI has graduated from an academic, niche subject into a much bigger field that impacts human lives in very profound ways. So how do we define AI, and how do we create AI, in the next phase, for the future?

There are three core components or elements of human-centered AI. The first component is advancing AI itself, which has a lot to do with what I was just talking about: interdisciplinary research on AI across neuroscience and cognitive science.

The second component of human-centered AI is really the technology and its application: human-centered technology. We talk a lot about AI replacing humans in job scenarios, but there are far more opportunities for AI to enhance and augment humans. The opportunities are much, much wider, and I think we should advocate for and invest in technology that is about collaboration and interaction between humans and machines: robotics, natural language processing, human-centric design, and all of that.

The third component of human-centered AI recognizes that computer science alone cannot address all the AI opportunities and issues. AI is a technology with deep impact on humanity, so we should be bringing in economists to talk about jobs, about organizations, and about finance. We should bring in policymakers, law scholars, and ethicists to talk about regulation, bias, security, and privacy. We should work with historians, artists, anthropologists, and philosophers to look at the different implications and new areas of AI research. These are really the three elements of human-centered AI for the next phase.

MARTIN FORD: When you talk about human-centered AI, you’re trying to address some concerns that have been raised, and I wanted to touch on some of those. There’s this idea of a true existential threat, something that’s been raised by Nick Bostrom, Elon Musk, and Stephen Hawking, where superintelligence could happen very rapidly through a recursive self-improvement loop. I’ve heard people say that your AutoML might be one step toward that, because you’re using machine learning technology to design other machine learning systems. What do you think about that?

FEI-FEI LI: I think that it’s healthy that we have thought leaders like Nick Bostrom conjecturing about a fairly troubling future of AI, or at least sending warning signs about things that could impact us in ways we didn’t expect. But I think it’s important to contextualize that, because in the long history of human civilization, every time a new social order or technology has been invented, it’s had that same potential to disrupt the human world in unexpected and deeply profound ways.

I also think that it’s healthy to explore these important questions through a diversity of voices, including voices coming from different backgrounds and disciplines. It’s good to have Nick, who is a philosopher, philosophizing about these possibilities. Nick’s is one type of voice in the social discourse of AI, and I think we need many voices to contribute.

MARTIN FORD: That particular concern really has been given a lot of weight by people like Elon Musk who attracts a lot of attention by saying, for example, that AI is a bigger threat than North Korea. Do you think that’s over the top, or should we really be that concerned as a society, at this point in time?

FEI-FEI LI: By definition, we tend to remember over-the-top statements. As a scientist and as a scholar, I tend to focus on arguments that are built on deep, well-substantiated evidence and logical deduction. It’s really not that important whether I pass judgment on one particular statement or not.

The important thing is what we do with the opportunities we have now, and what each one of us is doing. For example, I’m more vocal about the bias and the lack of diversity in AI, and so that’s what I speak about, because what I actually do is much more important.

MARTIN FORD: So, the existential threat is pretty far in the future?

FEI-FEI LI: Well like I said, it’s healthy that some people are thinking about that existential threat.

MARTIN FORD: You mentioned briefly the impact on jobs, and this is something that I’ve written about a lot; in fact, it’s what my previous book was about. You said that there are definitely opportunities to enhance people—but at the same time, there is this intersection of technology and capitalism, and businesses always have a very strong motive to eliminate labor if they can. That’s happened throughout history. It seems as though we’re at an inflection point today, where there are soon going to be tools that are able to automate a much broader range of tasks than anything in the past. These tools will automate cognitive and intellectual tasks, not just manual work. Is there potential for lots of job losses, deskilling of jobs, depressed wages, and so forth?

FEI-FEI LI: I don’t pretend to be an economist, but capitalism is one form of human societal order and it is what, 100 years old? What I’m saying is that no one can predict that capitalism is the only form of human society going forward; nor can anyone predict how technology is going to morph in that future society.

My argument is that AI, as a technology with a lot of potential, has an opportunity to make life a lot better and to make work more productive. I’ve been working with doctors for five years, and I get that there are parts of doctors’ work that can potentially be replaced by a machine. But I really want that part to be replaced, because I see our doctors overworked and overwhelmed, and their brilliance is sometimes not used in the ways that it should be. I want to see our doctors having time to talk to patients, having time to talk to each other, and having time to understand and optimize the best treatment for diseases. I want to see our doctors having time to do the detective work that some rare or harder illnesses need.

AI as a technology has so much potential to enhance and augment labor, in addition to just replacing it, and I hope that we see more and more of that. This is something that we’ve got evidence of in history. Computers automated a lot of work away from office typists some 40 years ago, but what we saw were new jobs: we now have software engineers as a new job, and we have people doing far more interesting work around the office. The same went for ATMs: when they started to automate some transactions in banks, the number of tellers actually increased, because there were more financial services that could be done by humans, while the mundane cash deposits and withdrawals could now be handled by ATMs. It’s not a black-and-white story at all, and it’s our work together that defines how things go.

MARTIN FORD: Let’s talk about some of the other topics you’ve focused on, like diversity and bias. It seems to me that these are two separate things, really. The bias comes about because it’s encapsulated in the human-generated data that machine learning algorithms are trained on, whereas diversity is more of an issue of who’s working in AI.

FEI-FEI LI: First of all, I don’t think they’re as separate as you suggest, because at the end of the day, it’s the values that humans bring to machines. If we have a machine learning pipeline, starting with the data itself, then when that data is biased, our machine learning outcome will be biased, and some forms of bias might even have fatal implications. But that data is itself potentially linked to who is developing the pipeline. I just want to make the philosophical point that the two are actually potentially linked.

That said, I agree with you that bias and diversity can be treated a little more separately. For example, in terms of data bias resulting in machine learning outcome bias, a lot of academic researchers are recognizing this now and working on ways to expose that kind of bias. They’re also modifying algorithms to respond to bias and try to correct it. This exposure of bias in products and technology, from academia to industry, is really healthy, and it keeps the industry on its toes.
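As a concrete, if simplified, example of how researchers expose outcome bias, one common diagnostic is to compare a model’s accuracy across subgroups. The sketch below uses synthetic predictions and group labels purely for illustration; it is not any particular group’s methodology.

```python
# Minimal bias audit: compare classifier accuracy across two subgroups.
import numpy as np

rng = np.random.default_rng(0)
groups = rng.choice(["group_a", "group_b"], size=1000)  # e.g. demographic
labels = rng.integers(0, 2, size=1000)                  # ground truth

# Simulate a classifier that is systematically worse on group_b.
preds = labels.copy()
flip = rng.random(1000) < np.where(groups == "group_b", 0.30, 0.05)
preds[flip] = 1 - preds[flip]

for g in ("group_a", "group_b"):
    mask = groups == g
    acc = (preds[mask] == labels[mask]).mean()
    print(f"{g}: accuracy = {acc:.2%}")  # a large gap flags possible bias
```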

MARTIN FORD: You must have to deal with machine learning bias at Google. How do you address it?

FEI-FEI LI: Google now has a whole group of researchers working on machine learning bias and “explainability,” because the pressure is there to tackle bias and deliver better products, and we want to help others do the same. It’s still early days, but it’s critical that this area of research gets investment and that there’s more development there.

On the topic of diversity and the bias of people, I think it’s a huge crisis. We’ve not solved the issue of diversity in our workforces, especially in STEM. Then with tech and AI being so nascent and yet so impactful as a technology, this problem is exacerbated. If we look around, whether you’re looking at AI groups in companies, AI professors in academia, AI PhD students or AI presenters at top AI conferences, no matter where you cut it: we lack diversity. We lack women, and we lack under-represented minorities.

MARTIN FORD: I know you started the AI4ALL project, which is focused on attracting women and underrepresented minorities into the field of AI. Could you talk about that?

FEI-FEI LI: Yes, that lack of representation we’ve been discussing led me to start the Stanford AI4ALL project four years ago. One important effort we can make is to inspire high school students before they go to college and decide on their major and future career, and to invite them into AI research and AI study. We especially find that underrepresented minorities respond to the human mission in AI, to the kind of motivation and inspiration that is bigger than themselves. As a result, we’ve run this summer curriculum at Stanford every year for the past four years and invited high school girls to participate in AI. This was so successful that in 2017 we formed a national nonprofit organization called AI4ALL and started to replicate this model and invite other universities to participate.

A year later, we have six universities targeting different groups that AI has really struggled to get involved. In addition to Stanford and Simon Fraser University, we’ve got Berkeley targeting AI for low-income students, Princeton focusing on AI for racial minorities, Christopher Newport University doing AI for students from rural areas, and Boston University doing AI for girls. These programs have only been running for a short time, but we’re hoping to grow them and continue to invite future leaders of AI from much more diverse backgrounds.

MARTIN FORD: I wanted to ask if you think there’s a place for the regulation of artificial intelligence. Is that something you’d like to see? Would you advocate for the government taking more of an interest, in terms of making rules, or do you think that the AI community can solve these problems internally?

FEI-FEI LI: I actually don’t think AI, if you mean the AI technologists, can solve all the AI problems by themselves: our world is interconnected, human lives are intertwined, and we all depend on each other.

No matter how much AI that I make happen, I still drive on the same highway, breathe the same air, and send my kids to community schools. I think that we need to have a very humanistic view of this and recognize that for any technology to have this profound impact, we need to invite all sectors of life and society to participate.

I also think the government has a huge role to play, which is to invest in basic science, research, and education in AI. If we want transparent technology, if we want fair technology, and if we want more people who can understand and shape this technology in positive ways, then the government needs to invest in our universities, research institutes, and schools to educate people about AI and to support basic science research. I’m not trained as a policymaker, but I talk to policymakers and I talk to my friends, and whether it’s about privacy, fairness, dissemination, or collaboration, I see a role the government can play.

MARTIN FORD: The final thing I want to ask you about is this perceived AI arms race, especially with China. How seriously do you take that, and is it something we should worry about?

China does have a different system, a more authoritarian system, and a much bigger population, which means more data to train algorithms on and fewer restrictions regarding privacy and so forth. Are we at risk of falling behind in AI leadership?

FEI-FEI LI: Right now, we’re living in a major hype cycle around AI, so let me draw an analogy with modern physics and how that transformed technology, whether it’s nuclear technology or electrical technology.

One hundred years later, will we ask ourselves the question of which person owned modern physics? Will we try to name the company or country that owned modern physics and everything after the industrial revolution? I think it would be difficult for any of us to answer those questions. My point, as a scientist and as an educator, is that the human quest for knowledge and truth has no borders. If there is a fundamental principle of science, it is that there are universal truths, and the quest for those truths is something we pursue together as a species. And AI is a science, in my opinion.

From that point of view, as a basic scientist and as an educator, I work with people from all backgrounds. My Stanford lab literally consists of students from every continent. With the technology we create, whether it’s automation or it’s healthcare, we hope to benefit everyone.

Of course, there is going to be competition between companies and between regions, and I hope that it’s healthy. Healthy competition means that we respect each other, we respect the market, we respect the users and consumers, and we respect the laws, whether they’re cross-border laws or international laws. As a scientist, that’s what I advocate for: I continue to publish in the open domain, to educate students of all colors and nations, and to collaborate with people of all backgrounds.

More information about AI4ALL can be found at http://ai-4-all.org/.

FEI-FEI LI is Chief Scientist, AI and Machine Learning at Google Cloud, Professor of Computer Science at Stanford University, and Director of both the Stanford Artificial Intelligence Lab and the Stanford Vision Lab. Fei-Fei received her undergraduate degree in physics from Princeton University and her PhD in electrical engineering from the California Institute of Technology. Her work has focused on computer vision and cognitive neuroscience, and she is widely published in top academic journals. She is the co-founder of AI4ALL, an organization focused on attracting women and people from underrepresented groups into the field of AI, which began at Stanford and has now scaled up to universities across the United States.