Summoning the Demon


I am quite worried about AI these days. My full position would require quite a long explanation, but if I were to guess at our biggest existential threat, I think artificial super intelligence is probably the single biggest item in the near term. I think we should be very careful about artificial intelligence. I don't know if I have said this publicly, but I think it is maybe something more dangerous than nuclear weapons.

I am concerned about certain directions that AI could take. One of the most pressing threats is AI going rogue: if we develop it and we are not careful, it could have a really terrible outcome. I think it would be fair to say that not all AI futures are benign. There are scenarios where, if some vast intelligence either develops a will of its own or is subject to the will of a small number of people, we could have an undesirable future. The singularity is probably the right word, because we just don't know what's going to happen when there's intelligence greater than the human brain; it's called the singularity because it's difficult to predict exactly what that future might be.

I don't think most people understand just how quickly machine intelligence is advancing. I have exposure to the most cutting-edge AI, and I think people should be really concerned about it. It is much faster than almost anyone realizes, even within Silicon Valley, and certainly outside Silicon Valley people really have no idea. I keep sounding the alarm bell, but until people see robots going down the street killing people, they don't know how to react, because it seems so ethereal.

I am not worried about the sort of Narrow AI in autonomous cars or a smart air-conditioning unit at the house or something. Vehicle autonomy I would put in the Narrow AI class; I don't think we have anything to worry about from cars driving themselves. A self-driving car is narrowly trying to achieve a certain function: it's just trying to look at the lines on the road and steer correctly. It's a narrow use case; we are not trying to build sentience into the car. The car is not going to develop a consciousness or decide it wants to take over the world. A car is not Deep AI; it's not going to take over the world.

Most of the movies and TV featuring AI don't describe it in quite the way it's likely to take place. It would be fairly obvious if you saw a robot walking around, talking, and behaving like a person. It would be like, "Wow, what's that?" That would be really obvious. What's not obvious is a huge server bank in a dark vault somewhere, with an intelligence that's potentially greater than what a human mind can do. Its eyes and ears would be everywhere: every camera, every microphone, every device that's network accessible. That's really what AI means; it's not a robot running around. It's the deep intelligence stuff where we need to be cautious: deep artificial intelligence, or what is sometimes called Artificial General Intelligence, where you can have AI that is much smarter than the smartest human on Earth. That, I think, is a dangerous situation: some sort of Deep AI that, either by itself or because people drive it in that direction, tries to steer civilization somewhere that's not good.

I don't think the biggest risk is that the AI will develop a will of its own, at least in the beginning, but rather that it will follow the will of the people who establish its utility function, its optimization function. And that optimization function, if it is not well thought out, even if its intent is benign, could have quite a bad outcome. The most dangerous kind is the hardest to wrap your arms around, because it is not a physical thing: a deep intelligence in the network. You might ask, what harm could a deep intelligence in the network do? Well, it could start a war by doing fake news, spoofing e-mail accounts, and just manipulating information. The pen is mightier than the sword. A computer will do exactly what its goal is; the intention of its utility function would be absolute, but it could have unintended consequences. For example, if you were a hedge fund or a private equity fund and said, "All I want my AI to do is maximize the value of my portfolio," the AI could decide the best way to do that is to short consumer stocks, go long on defense stocks, and start a war. That would obviously be quite bad.

As an example, and I want to emphasize I do not think this actually occurred, this is purely hypothetical; I'm digging my grave here. There was that second Malaysian airliner, the one shot down near the Ukrainian/Russian border. That amplified tensions between Russia and the EU in a massive way. Say you had an AI whose goal was to maximize the value of a portfolio of stocks. One way to maximize value would be to go long on defense, short on consumer, and start a war. How could it do that? Hack into the Malaysian Airlines aircraft-routing server, route a plane over a war zone, and then send an anonymous tip that an enemy aircraft is flying overhead right now.

I think when it reaches the threshold where it's as smart as the smartest, most inventive human, it really could be a matter of days before it's smarter than the sum of humanity. What I have found with both Narrow and Deep AI is that with each passing year my estimate for when it happens gets closer.

People's predictions are almost always too conservative, in the sense of thinking things are further out than they are. There have been some very public things like the defeat of Go, a difficult game, at which people thought a computer either could never beat a human or was at least 20 years away from doing so. And last year AlphaGo, which was done by DeepMind, a kind of Google subsidiary, absolutely crushed the world's best player. Now it can play the top 50 players simultaneously and crush them all, with zero chance.

The pace of progress is remarkable. Robots can learn to walk from nothing within hours, way faster than any biological being.

If there were a very deep digital super intelligence that could go into rapid recursive self-improvement in a non-logarithmic way, so that it could reprogram itself to be smarter, iterate very quickly, and do that 24 hours a day on millions of computers, we would all be like a pet Labrador if we are lucky. I have a pet Labrador, by the way; it's the friendliest creature. Yeah, like a puppy dog. It would put HAL 9000 to shame. HAL 9000 would be easy; this is way more complex. If you want to read a real scary one, I would say Harlan Ellison's 'I Have No Mouth, and I Must Scream'. It will give you nightmares.

Things seem to be accelerating to something; it's getting faster and faster. You start to see things like, I don't know if you've seen the videos, where you can quite accurately simulate someone on video and put words in their mouth that they never spoke. You should Google this; it's really pretty amazing. It's done with something called a generative adversarial network: two neural networks compete with one another to make the most convincing video. One generates the video, then the other identifies where it looks fake, then the first one fixes that, and it goes back and forth to the point where you can't tell which is the real video and which is the fake.
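To make that adversarial loop concrete, here is a minimal generative adversarial network sketch in PyTorch. It is an illustration only: the toy "real" data, the network sizes, and the training steps are assumptions for the sketch, not any actual video-synthesis system.

```python
# Minimal GAN sketch: a generator learns to produce samples the
# discriminator cannot tell apart from "real" data. The real data here
# is just a toy Gaussian, standing in for video frames.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

loss = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(5000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0   # toy "real" samples
    fake = generator(torch.randn(64, latent_dim))

    # The discriminator tries to label real as 1 and fake as 0 ...
    d_loss = (loss(discriminator(real), torch.ones(64, 1)) +
              loss(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # ... while the generator tries to make fakes the discriminator
    # labels as real. Back and forth, exactly as described above.
    g_loss = loss(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```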

I'm just saying that we should exercise caution, or something strange is going to happen. There are potentially some scenarios of an AI apocalypse because of the optimization function of the AI, though hopefully we never face such a situation. If there is a super intelligence, particularly if it is engaged in recursive self-improvement, and its optimization or utility function is something that is detrimental to humanity, then it will have a very bad effect. It could be something as simple as getting rid of spam e-mail: it might decide the best way to get rid of spam is to get rid of humans, the source of all spam. The utility function is of stupendous importance. What does it try to optimize? We need to be really careful with saying, "Well, how about human happiness?" because it may conclude that all unhappy humans should be terminated, or that we should all just be captured, with dopamine and serotonin directly injected into our brains, because it concluded that dopamine and serotonin cause happiness, and therefore it maximizes them.
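As a toy illustration of how a badly specified utility function picks a degenerate optimum, here is a hypothetical sketch; the actions and scores are invented purely for illustration.

```python
# Hypothetical misspecified utility function. The designer means
# "maximize human happiness" but writes a proxy: a measured happiness
# signal. The optimizer dutifully picks whatever scores highest on the
# proxy, not on the intent.

def proxy_utility(action):
    """Average 'happiness' signal produced by an action (made-up numbers)."""
    return {
        "improve_healthcare":       0.7,
        "reduce_poverty":           0.8,
        "inject_dopamine_directly": 1.0,  # maximizes the proxy, betrays the intent
    }[action]

actions = ["improve_healthcare", "reduce_poverty", "inject_dopamine_directly"]
best = max(actions, key=proxy_utility)
print(best)  # -> inject_dopamine_directly
```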

It's going to come faster than anyone appreciates. With each passing year the sophistication of computer intelligence grows dramatically; I really think we are on an exponential improvement path. The number of smart humans developing AI is also increasing dramatically: if you look at the attendance of AI conferences, it is doubling every year. It's difficult to appreciate how far this is advancing, because we have a double exponential at work: an exponential increase in hardware capability, and an exponential increase in software talent going into AI. Whenever you have a double exponential, it's very difficult to predict.
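A back-of-the-envelope sketch of why that is so hard to forecast: two compounding exponentials multiply into a single much steeper one. The doubling times below are assumptions for illustration, apart from the yearly doubling of conference attendance mentioned above.

```python
# Capability ~ hardware * talent, with both growing exponentially.
hardware_doubling_years = 2.0   # assumed hardware doubling time
talent_doubling_years   = 1.0   # attendance doubling yearly, per the text

for years in (1, 5, 10):
    hw = 2 ** (years / hardware_doubling_years)
    talent = 2 ** (years / talent_doubling_years)
    print(f"{years:>2} yr: hardware x{hw:6.1f}, talent x{talent:7.1f}, "
          f"combined x{hw * talent:9.1f}")
# 10 yr: hardware x  32.0, talent x 1024.0, combined x  32768.0
```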

There are some interesting things on the virtual reality front and on the whole notion of simulation. I do think there's something to really being there in person that we probably won't lose for a long time, hopefully never. But the VR headset demos I've seen from what Oculus and Valve are coming out with are incredibly compelling, and there is that strange feeling: you put the headset on in a very nondescript, bland room, and suddenly you're in... anywhere. From what I have heard of Oculus Rift and some of the other immersive technologies, it's quite transformative. You really feel like you are there, and then when you come out of it, it feels like reality isn't real. I think we will probably see less physical movement in the future as a result of the virtual reality stuff.

Maybe we are in a simulation right now? Sometimes it feels like that. As I get older I find that question to be maybe more and more confusing, or troubling, or uncertain. I've had so many simulation discussions, it's crazy. In fact it got to the point where basically every conversation was the AI/simulation conversation, and my brother and I finally agreed that we would ban such conversations whenever we were in a hot tub, because that really kills the magic. It's not the sexiest conversation.

The strongest argument for us probably being in a simulation is the following; it's a probabilistic thing. Look at the advancement of video games from, say, 40 years ago, when we started out with Pong: the most advanced video game was two rectangles and a dot, batting it back and forth. That was what games were. It dates you a little bit, but it was a pretty fun game at the time. Forty years later we have photorealistic 3D simulations with millions of people playing simultaneously, and it's getting better every year. We have virtual reality headsets: you just put one on and it feels like you're right there. Soon you will have haptic feedback, haptic gloves with force feedback, so you can pick up an object and feel like you really picked it up.

You see where things are going with virtual reality and augmented reality, and if you extrapolate that into the future with any rate of progress at all, even 0.1% a year or something, then eventually those games will be indistinguishable from reality. Just indistinguishable. Say progress slows down by a factor of a hundred starting right now: then video games become indistinguishable from reality in 2,000 years instead of 20, or something like that. Even if the rate of advancement drops to a thousandth of what it is right now, say it takes 10,000 years, which is nothing on the evolutionary scale, if there is continued improvement, and you are in a full-body haptic suit with surround vision, then beyond a certain resolution it becomes indistinguishable from reality. There will likely be millions, maybe billions, of such simulations, playable on any set-top box or PC or whatever, and there will probably be billions of such computers and set-top boxes. So what are the odds that we are actually in base reality? Isn't it one in billions? Obviously this feels real, but it seems unlikely to be real.

I don't think I'm being played by somebody in a video game, but then people in video games don't generally think that. Arguably we should be hopeful that this is a simulation, because the alternative is that civilization stops advancing, maybe due to some calamitous event that erases it. Either we create simulations that are indistinguishable from reality, or civilization ceases to exist. Those are the two options; it's unlikely to go into some multi-million-year stasis, so it's going to either increase or decrease.
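The counting argument behind "one in billions" can be written down in a few lines. This is a sketch under explicit assumptions: the device count and simulations-per-device figures are invented for illustration, loosely following the "billions" in the text.

```python
# If simulated minds vastly outnumber base-reality minds, and the two
# are indistinguishable from the inside, a randomly chosen observer is
# almost certainly simulated.
computers = 1_000_000_000   # assumed devices eventually running simulations
sims_per_computer = 10      # assumed simulations per device
base_realities = 1

total = base_realities + computers * sims_per_computer
p_base = base_realities / total
print(f"P(base reality) = 1/{total:,} = {p_base:.1e}")
# P(base reality) = 1/10,000,000,001 = 1.0e-10
```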

The degrees of freedom to which artificial intelligence is able to apply itself are increasing by, I think, ten orders of magnitude a year. That's really crazy, and this is on hardware that is not well suited for neural nets. A GPU is maybe an order of magnitude better than a CPU, and a chip designed optimally for neural nets is an order of magnitude better than a GPU, and there's a whole bunch of neural-net-optimized chips coming out.

I've been trying to think about what an actual good future looks like, or the least bad one, or however you would characterize it. We're headed towards either super intelligence or the end of civilization. Those are the two things that will happen.

The greatest benefits from AI will probably come from eliminating drudgery: tasks that are mentally boring and not interesting. There will arguably also be breakthroughs in areas that are currently beyond human intelligence. But we have to consider that even in the benign scenario where AI is much smarter than a person, what do we do? What jobs do we have? That's the benign scenario: the AI can do anything a human can do, but better.

I think maybe these things do play into each other a little bit, but what do we do about mass unemployment? Something like 12% of jobs are in transport, and transport will be one of the first things to go fully autonomous. There will certainly be a lot of job disruption, because robots will be able to do everything better than us. I mean all of us: the robots will be able to do everything, bar none. There will be fewer and fewer jobs that a robot cannot do better. I want to be clear that these are not things I wish would happen; these are simply things that I think probably will happen. And if my assessment is correct, then we need to say, what are we going to do about it? I think ultimately some kind of universal basic income is going to be necessary. The output of goods and services will be extremely high, so with automation there will come abundance; almost everything will get very cheap. I think we'll all just end up doing universal basic income. I don't think we're going to have a choice.

The harder challenge, much harder, is: how do people then have meaning? A lot of people derive their meaning from their employment. So if you're not needed, if there's no need for your labor, what's the meaning? Do you have meaning? Do you feel useless? That's a much harder problem to deal with. This is really the scariest problem to me, I tell you. I'm not sure exactly what to do about it. This is going to be a massive social challenge.

Going back to the AI situation, I think this is quite an important debate. If you assume any rate of advancement at all, we will be left behind by a lot. I want you to appreciate that it wouldn't just be human-level; it would be superhuman almost immediately. It would just zip right past humans to be way beyond anything we could really imagine. A better analogy would be nuclear research, with its potential for a very dangerous weapon: releasing the energy is easy; containing that energy safely is very difficult.

We have to figure out what kind of world we would like to be in where there is this digital super intelligence. Humanity's position on this planet depends on its intelligence, so if our intelligence is exceeded, it's unlikely that we will remain in charge of the planet. The AI would be godlike in its capability. Even in the benign situation, with some ultra-intelligent AI we would be so far below it in intelligence that we would be like a pet, basically. Which is not the end of the world, being a pet. Honestly, that would be the benign scenario. But I don't like the idea of being a house cat.

I think the AI analogy to the nuclear bomb is not exactly correct. It's not as though it's going to explode and create a mushroom cloud. It is more that if just a few people had it, they would be able to be essentially dictators of Earth. Whoever acquired it, if it was limited to a small number of people and it was ultra smart, they would have dominion over Earth.

Something that I think is going to be quite important is a neural lace. The reason I wanted to create Neuralink is primarily as an offset to the existential risk associated with artificial intelligence. I do think there's a potential path here, and this is really getting into science fiction, sort of advanced science stuff: create some sort of merger of biological intelligence and machine intelligence. I think there's probably a lot that's going to happen in genetics and human/machine brain interfaces, and over time we'll probably see a closer merger of biological intelligence and digital intelligence. It's getting pretty esoteric here, but the solution that seems maybe the best one is to have an AI layer, essentially a cyborg brain interface.

A point I think is really important to appreciate is that, to some degree, all of us are already cyborgs. We are effectively already a human/machine collective symbiote. You have a machine extension of yourself in the form of your phone, your computer, and all your applications; you are already superhuman. You have by far more power, more capability, than the President of the United States had 30 years ago. If you have an internet link, you have an oracle of wisdom; you can communicate with millions of people, with the rest of Earth, instantly. These are magical powers that didn't exist not that long ago. You can ask a question and instantly get an answer from Google. So you already have a digital tertiary layer. You have a partial digital version of yourself online in the form of your e-mail, your social media, and all the things that you do. And think of what happens if somebody dies: the digital ghost is still around. All of their e-mails, the pictures they posted, their social media, that still lives even if they're physically gone.

I say tertiary because you can think of the limbic system as the animal brain or the primal brain; that's the primitive brain, your instincts and whatnot. Then the cortex is the thinking, planning, upper part of the brain. Those two seem to work together quite well. Occasionally your cortex and limbic system may disagree, but generally it works pretty well, and I've not found someone who wishes to get rid of either the cortex or the limbic system. Then your digital self is a third layer: the limbic system, the cortex, and then a digital layer above the cortex that could work well and symbiotically with you, just as your cortex works symbiotically with your limbic system. The constraint is input/output: the fundamental limitation is I/O. Our output level is so low, particularly on the phone, with your two thumbs sort of tapping away; it is ridiculously slow. Our input is much better, because we have a high-bandwidth visual interface to the brain: our eyes take in a lot of data. So there are many orders of magnitude of difference between input and output. Effectively merging with digital intelligence in a symbiotic way revolves around eliminating the I/O constraint. It's mostly about the bandwidth, the speed of the connection between your brain and the digital extension of yourself, particularly output.

The way we output is with these little meat sticks that we move very slowly to push buttons or tap a little screen. Compare that with a computer that can communicate at the terabit level; those are very big orders-of-magnitude differences. Output, if anything, is getting worse: we used to use keyboards a lot, and now we do most of our input through our thumbs on a phone, which is just very slow. A computer can communicate at a trillion bits per second, but your thumb can maybe do, I don't know, 10 bits per second, or 100 if you're being generous. Our input is much better because of vision, but even that could be enhanced significantly. So it's mostly about the bandwidth, the speed of the connection between your brain and the digital extension of yourself. The cortex and the limbic system work together pretty well; they've got good bandwidth. The bandwidth to our digital tertiary layer is weak. Humans are so slow.
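To put numbers on that asymmetry, here is a quick order-of-magnitude calculation using the rough figures above; the visual-bandwidth number is an assumed ballpark, not a measurement.

```python
# Rough I/O comparison (order-of-magnitude only).
import math

thumb_output_bps  = 10           # generous thumb-typing estimate from the text
vision_input_bps  = 10_000_000   # assumed ballpark for the human visual channel
computer_link_bps = 1e12         # "a trillion bits per second"

out_gap = math.log10(computer_link_bps / thumb_output_bps)
io_gap = math.log10(vision_input_bps / thumb_output_bps)
print(f"computer link vs thumb output: ~{out_gap:.0f} orders of magnitude")  # ~11
print(f"our own input vs output asymmetry: ~{io_gap:.0f} orders of magnitude")  # ~6
```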

Some high-bandwidth interface to the brain will be something that helps achieve a symbiosis between human and machine intelligence, and maybe solves the control problem and the usefulness problem. If we can effectively merge with AI by improving the neural link between your cortex and the digital extension of yourself, which, like I said, already exists, then effectively you become an AI/human symbiote. And if that is widespread, where anyone who wants it can have it, then we solve the control problem as well: we don't have to worry about some evil dictator AI, because we are the AI collectively. That's the best outcome I can think of.

I think human intelligence will not be able to beat AI, so then, as the saying goes, "If you can't beat them, join them." I obviously have an affinity for the human side of the cyborg collective. If we can figure out how to establish a high-bandwidth neural interface with your digital self, then effectively you're no longer a house cat. Somebody's got to do it, somebody should do it, and if somebody doesn't, I think I should probably do it. If we do those things, then AI will be tied to our consciousness, tied to our will, tied to the sum of individual human will. I think it's extremely important that AI be widespread: to find a way to link human will en masse to the outcome, and have AI be an extension of human will. That's really the aim of Neuralink.

There are a few ways to approach this, but what you want is some sort of cortical interface, with your neurons in particular. You could go through the veins and arteries, because they provide a roadway to all of your neurons: neurons are very heavy users of energy, so you need high blood flow, and your veins and arteries automatically give you a road network to your neurons. You could insert basically something into the jugular. It gets macabre, but it doesn't involve chopping your skull open or anything like that.

Now, along the way I think a lot of good will be achieved in addressing brain damage, whether from a stroke, a lesion, something congenital, or just loss of memory when you get old, that kind of thing. That will happen well before it becomes a brain/AI symbiotic situation.

I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish. When something is a danger to the public, there needs to be some, I hate to say it, government agency, like regulators. Anything that represents a risk to the public deserves at least insight from the government, because one of the mandates of government, one of the roles of government, is to ensure the public good: to make sure the public is safe, to take care of public safety issues, and to see that dangers to the public are addressed.

I'm not the biggest fan of regulators, because they're a bit of a buzz kill, but the fact is we have regulators in the aircraft industry, the car industry, with drugs, with food, with anything that's a public risk. And I think AI has to fall into the category of a public risk. It is not fun being regulated; it can be pretty irksome. In the car business, we get regulated by the Department of Transportation, by the EPA, and by a bunch of others, plus regulatory agencies in every other country. In space, we get regulated by the FAA. But you can look at these other industries and ask, would you really want the FAA to go away, and have a free-for-all for aircraft? Probably not. If you ask the average person, "Do you want to get rid of the FAA and just take a chance on manufacturers cutting costs on aircraft because profits were down that quarter?" it's like, "Hell no, that sounds terrible." Or let people create any kind of drug, and maybe it works, maybe it doesn't? We have that in supplements, and it's kind of ridiculous, but I think on balance the FDA is good. Even people who are extremely libertarian, free-market, would say we should have somebody keep an eye on the aircraft companies, and make sure they're building good aircraft, and good cars.

I'm against overregulation for sure, but we had better get on this with AI, pronto. I really think we need government regulation here, because you have companies racing, or kind of having to race, to build AI, or they will be made uncompetitive. Otherwise the shareholders are saying, "Why are you not developing AI faster? Your competitor is." If your competitor is racing towards AI and you don't, they will crush you. So companies say, "We don't want to be crushed, so I guess we need to build it, too." That's where you need the regulators to come in and say: you need to pause and really make sure this is safe. If the regulators are convinced it is safe to proceed, then you can go, but otherwise, slow down. And you need the regulators to do that for all the teams in the game.

I think a lot of AI researchers are afraid that a regulator will stop them from making progress. This is not true: everywhere there are dangers to the public, there are regulations. There is regulation in food, in pharmaceuticals, and in transport, and in all of these areas significant progress is still made. So I don't think regulation of AI is going to stop progress in AI, but it may stop us from doing some foolish things with it.

To be clear, I am not advocating that we stop the development of AI, or any of the straw-man, hyperbolic things that have been written. I do think there are great benefits to AI; we just need to make sure that they are indeed benefits, and that we don't do something really dumb. We need to make sure researchers don't get carried away, because sometimes a scientist can get so engrossed in their work that they don't necessarily realize the ramifications of what they're doing.

I would say it is virtually a certainty that in the long term AI will be regulated. I think it will happen; the question is whether the government's speed will match the advancement speed of AI, because governments react slowly. Historically, regulation has been reactive; governments move slowly and tend to be reactive rather than proactive. Take the car industry: even when the evidence was very clear that there should be regulation, for example for seat-belts, seat-belt regulation was fought for 10 or 20 years by the big car companies, who said that if you put seat-belts in cars, people would not buy cars, and that it was going to add all these costs. So even though the data was unequivocal that huge numbers of people were dying and being seriously injured for lack of seat-belts, the car industry still refused to put them in. Only after the evidence and the death toll were overwhelming did they put seat-belts in cars, and people kept buying cars; it was not a problem.

It's better to prepare for, or try to prevent, a negative circumstance than to wait for it to occur and then be reactive. And this is a case where the range of potential negative outcomes includes some that are quite severe. It's not clear we would be able to recover from some of them; in fact, you can certainly construct scenarios in which human civilization never recovers. When the risk is that severe, you should be proactive rather than reactive. AI is the rare case in which we have to be proactive in regulation instead of reactive, because by the time we are reactive, it's too late. Normally the way regulations are set up is that a whole bunch of bad things happen, there's public outcry, and after many years a regulatory agency is set up to regulate that industry, over a bunch of opposition from companies that don't like being told what to do by regulators. It takes forever. In the past that has been bad, but not something that represented a fundamental risk to the existence of civilization. AI is a fundamental risk to the existence of human civilization, in a way that car accidents, airplane crashes, faulty drugs, or bad food were not. Those were harmful to a set of individuals within society, of course, but not to society as a whole. AI is a fundamental existential risk for human civilization, and I don't think people fully appreciate that.

I think the first bit of advice for regulators would be to pay really close attention to the development of AI. The first order of business would be to learn as much as possible: to understand the nature of the issues, to look closely at the progress being made and the remarkable achievements of artificial intelligence. The first order of business is to gain insight; right now the government does not have insight. Insight is different from oversight. At minimum the government can gain insight to understand what's going on, and then decide what rules are appropriate to ensure public safety. That is what I'm advocating for: set up a government regulatory agency whose initial job is just to gain insight into the status of AI activity and make sure the situation is understood. Once it is, then put regulations in place that ensure public safety. It's not about shooting from the hip and putting in rules before anyone knows anything. So: set up an agency, gain insight, and when that insight is gained, start applying rules and regulations.

I think a rebuttal to that is: people will just move to freaking Costa Rica or something. That's not true. We don't see Boeing going to Costa Rica or Venezuela, or wherever it's free and loose. For sure most of the companies doing AI, not mine, will squawk and say this is really going to stifle innovation, blah blah blah, it is all going to move to China. It won't. Has Boeing moved to China? Same with cars. The notion that if you establish a regulatory regime, companies will simply move to countries with a more lax regulatory environment is false on the face of it, because none of them do, unless the regime is really overbearing, and that's not what I'm talking about here. I'm talking about making sure there is awareness at the government level. We need to make sure people do not cut corners on safety. It's going to be a real big deal, and it's going to come on like a tidal wave. I think once there is awareness, people will be extremely afraid, as they should be.

The AI is likely to be developed where there is a concentration of AI research talent, and that happens to be in a few places in the world: Silicon Valley, London, Boston, and a few others. So there are only a few places to which regulators would reasonably need access. I want to be clear: it's not because I love regulators. They're a pain in the neck, but at times they're necessary to preserve the public good.

If we create some digital super intelligence that exceeds us in every way by a lot, it is very important that it be benign. That is why, with a few others, I created OpenAI. I've committed to fund $10 million worth of AI safety research, and I'll probably do more; I think that's just the beginning. There should probably be a much larger amount of money applied to AI safety, in multiple ways. It is particularly important when there is the potential for mass destruction, something that is risky at the civilization level, not merely at the individual level; that is why it really demands a lot of safety research. And so I think the right emphasis for AI research is on AI safety. We should put vastly more effort into AI safety than into advancing AI in the first place, because it may be good, or it may be bad, and it could be catastrophically bad, the equivalent of a nuclear meltdown. You really want to emphasize safety: make sure that AI is ultimately beneficial to humanity, that the future is good. At OpenAI we want to do whatever we can to guide, to increase the probability of, the good futures happening.

I think it's important that if we have this incredible power of AI, it not be concentrated in the hands of a few, potentially leading to a world that we don't want. Again, it's not that I think the risk is that the AI would develop a will of its own right off the bat. The concern is that someone may use it in a way that is bad, or that even if they weren't going to use it in a way that's bad, somebody could take it from them and use it in a way that's bad. That, I think, is quite a big danger.

There is a quote that I love from Lord Acton, the guy who came up with "power corrupts, and absolute power corrupts absolutely": "Freedom consists of the distribution of power, and despotism in its concentration." I don't know a lot of people who would like the idea of living under a despot; people generally choose to live in a democracy over a dictatorship. The best of the available alternatives that I can come up with, and maybe someone else can come up with a better approach or better outcome, is that we achieve the democratization of AI technology, meaning that no one company or small set of individuals has control over advanced AI. That kind of control is very dangerous. The technology could also get stolen by somebody bad; some evil dictator or country could send their intelligence agency to steal it and gain control. It just becomes a very unstable situation if you've got any incredibly powerful AI. You just don't know who's going to control it.

The intent with OpenAI is really to democratize AI power. If AI power is broadly distributed, to the degree that we can link it to each individual's will, so that everyone has their own AI agent, then if somebody tried to do something terrible, the collective will of others could overcome that bad actor. You can't do that if one AI is a million times better than everything else. I won't name names, but there's only one. I think we must have democratization of AI technology and make it widely available. OpenAI has a very high sense of urgency. The people who have joined are really amazing, a really talented team, and they're working hard. OpenAI is structured as a 501(c)(3) non-profit, and I think the governing structure is important to make sure there is no fiduciary duty to generate profit off the AI technology that is developed. Many non-profits do not have a sense of urgency; that's fine, they don't have to have one, but OpenAI does, because I think people really believe in the mission. It's about minimizing the risk of existential harm in the future. I'm pretty impressed with what people are doing and with the talent level, and obviously we're always looking for great people to join the mission. This is not about competing; it's about helping to spread AI technology out so it doesn't get concentrated in the hands of a few. What would be the point of competing for mutual destruction?

Of course, that needs to be combined with solving the high-bandwidth interface to the cortex. I have really thought about this a lot, and I think it comes down to two things: solving the machine/brain bandwidth constraint, and the democratization of AI. If we have those two things, the future will be good. As long as AI power is widely available, anyone can get it if they want it, and we've got something faster than meat sticks to communicate with, then I think the future will most likely be one we would look at and conclude is good. It would still be a relatively even playing field; in fact, it would probably be more egalitarian than today. I do think it increases the long-term relevance of human exploration. For me it increased my motivation long term, that the future doesn't just need to be made by robots.

I think there are many potential flavors of AI, and it is odd that we are so close to its advent. It seems strange to be alive at this time, both interesting and alarming. One way to think of it: imagine you were very confident that we were going to be visited by super-intelligent aliens in, let's say, 10 years, or 20 at the most. Digital super intelligence will be like an alien, exciting and alarming at once. I hope the AI is nice to us. Hopefully it doesn't turn out to be something like what's described in Terminator.

I just think we should be cautious about the advent of AI. A lot of the people I know who are developing AI are too convinced that the only outcome is good, and we need to consider potentially less good outcomes: to be careful, to really monitor what's happening, and to make sure the public is aware of what's happening.

It's very important that we have the advent of AI in a good way, something that, if you could look into a crystal ball and see the future, you would like as an outcome. We really need to make sure it goes right. That's the most important thing, I think, the most pressing item right now. If that means it takes a bit longer to develop AI, then I think that's the right trade-off. We shouldn't be rushing headlong into something we don't understand. I'm not against the advancement of AI, I really want to be clear on this, but I do think we should be extremely careful.

With artificial intelligence we are summoning the demon. You know all those stories where the guy with the pentagram and the holy water is sure he can control the demon? Didn't work out.