AI is the best thing since sliced bread. We should embrace it wholeheartedly and understand the secrets of unlocking the human brain by embracing AI. We can’t do it by ourselves.
ENTREPRENEUR; FOUNDER, KERNEL & OS FUND
Bryan Johnson is the founder of Kernel, OS Fund, and Braintree. After the sale of Braintree to PayPal in 2013 for $800m, Johnson founded OS Fund in 2014 with $100m of those funds. His objective was to invest in entrepreneurs and companies that develop breakthrough discoveries in hard science to address our most pressing global problems. In 2016, Johnson founded Kernel with another $100m of his funds. Kernel is building brain-machine interfaces with the intention of providing humans with the option to radically enhance their cognition.
MARTIN FORD: Could you explain what Kernel is? How did it get started, and what is the long-term vision?
BRYAN JOHNSON: Most people start companies with a product in mind, and they build that given product. I started Kernel with a problem identified—we need to build better tools to read and write our neural code, to address disease and malfunction, to illuminate the mechanisms of intelligence, and to extend our cognition. Look at the tools we have to interface with our brain right now—we can get an image of our brain via an MRI scan, we can do crude recordings via EEG outside the scalp that don’t really give us much, and we can implant an electrode to address a disease. Outside of that, our brain is largely inaccessible to the world beyond our five senses. I started Kernel with $100 million with the objective of figuring out what tools we can build. We’ve been on this quest for two years, and we remain in stealth mode on purpose. We have a team of 30 people, and we feel very good about where we’re at. We’re working very hard to build the next breakthroughs. I wish I could give you more details about where we are. We will share that in time, but right now we’re not ready.
MARTIN FORD: The articles I’ve read suggest that you’re beginning with medical applications to help with conditions like epilepsy. My understanding is that you initially want to try an invasive approach that involves brain surgery, and you then want to leverage what you learn to eventually move to something that will enhance cognition, while hopefully being less invasive. Is that the case, or are you imagining that we’re all going to have chips inserted into our brain at some point?
BRYAN JOHNSON: Having chips in our brain is one avenue that we’ve contemplated, but we’ve also started looking at every possible entry point in neuroscience because the key in this game is figuring out how to create a profitable business. Figuring out how to create an implantable chip is one option, but there are many other options, and we’re looking at all of them.
MARTIN FORD: How did you come to the idea of starting Kernel and OS Fund? What route did your early career take to bring you to that point?
BRYAN JOHNSON: The starting point for my career was when I was 21, when I had just returned from my Mormon mission to Ecuador, where I lived among and witnessed extreme poverty and suffering. During those two years, the one question weighing on my mind was: what could I do that would create the most value for the greatest number of people in the world? I wasn’t motivated by fame or money, I just wanted to do good in the world. I looked at all the options I could find, and none of them satisfied me. Because of that, I determined to become an entrepreneur, build a business, and retire by the age of 30. In my 21-year-old mind that made sense. I got lucky, and fourteen years later, in 2013, I sold my company Braintree to PayPal for $800 million in cash.
By that point, I had also left Mormonism, which had defined my entire reality of what life was about, and when I left I had to recreate myself from scratch. I was 35, fourteen years on from my initial life decisions, and that drive to benefit humanity hadn’t left me. I asked myself the question: what’s the one single thing I can do that will maximize the probability that the human race will survive? In that moment, it wasn’t clear to me that humans have what we need to survive ourselves and the challenges we face. I saw two answers to that question, and they were Kernel and the OS Fund.
The idea behind OS Fund is that most people in the world who manage or have money do not have scientific expertise, and therefore they typically invest in things that they are more comfortable with, such as finance or transportation. That means that there is insufficient capital going to science-based endeavors. My observation was that if I could demonstrate as a non-scientist that I could invest in some of the hardest science in the world and be successful in doing it, I would create a model that others could follow. So, I invested $100 million in my OS Fund to do that, and five years in, we are in the top decile of performance among US firms. We’ve made 28 investments, and we’ve been able to demonstrate that we can successfully invest in science-based entrepreneurs who are building world-changing technology.
The second thing was Kernel. In the beginning, I talked to over 200 really smart people, asking them what they were doing in the world and why. From there, I’d ask them follow-on questions to understand the entire stack of assumptions behind how they think, and the one thing I walked away with is that the brain is the originator of all things; everything we do as humans stems from our brains. Everything we build, everything we’re trying to become, and every problem we’re trying to solve. It sits upstream of everything else, yet it was absent from anybody’s focus. There were efforts, for example from DARPA and the Allen Institute for Brain Science, but most were focused on specific medical applications or basic research in neuroscience. There was nobody in the world that I could identify who basically said: the brain is the most important thing in existence, because everything sits downstream from the brain. It’s a really simple observation, but it was a blind spot everywhere.
Our brain sits right behind our eyes, yet we focus on everything downstream from it. There is no endeavor at a relevant scale that lets us read and write our neural code, and with it our cognition. So, with Kernel, I set out to do for the brain what we did for the genome: sequence it, and then create tools to write it. In 2018, we can read and edit the DNA—the software—that makes us humans, and I wanted to do the same thing for the brain: read and write our code.
There’s a bunch of reasons why I want to be able to read and write the human brain. My fundamental belief behind all of this is that we need to radically up-level ourselves as a species. AI is moving very quickly, and what the future of AI holds is anyone’s guess. The expert opinions are across the board. We don’t know if AI is growing on a linear curve, an S curve, an exponential curve, or a punctuated equilibrium, but we do know that the promise of AI is up and to the right.
The rate of our improvement as humans is flat. People hear this and say that we’re hugely improved over people 500 years ago, but we’re not. Yes, we understand greater complexity, for example, more complex concepts in physics and mathematics, but our species generally is exactly the same as it was thousands of years ago. We have the same proclivities and we make the same mistakes. Even if you were to make the case that we are improving as a species, if you compare it to AI, humans are flatlining. If you simply look at the graph, AI is up and to the right, while humans might be a little bit to the right. So the question is, how big does that delta between AI and ourselves get before we begin to feel incredibly uncomfortable? It’s going to just run right by us, and then what are we as a species? It is an important question to ask.
Another reason is based on the concept that we have this impending job crisis with AI. The most creative thing people are coming up with is universal basic income, which is basically waving the white flag and saying we can’t cope and we need some money from the government. Nowhere in the conversation is radical human improvement discussed. We need to figure out how to not just nudge ourselves forward, but to make a radical transformation. We need to acknowledge that the reason we must improve ourselves radically is that we cannot imagine the future. We are constrained in our imagination to what we are familiar with.
If you were to take humans and put them back with Gutenberg and the printing press, and say, paint me a miraculous vision of what’s possible, they wouldn’t be able to do it. They would never have guessed at what would evolve, like the internet or computers. The same is true of radical human enhancement. We don’t know what’s on the other side. What we do know is that if we are to be relevant as a species, we must advance ourselves significantly.
One more reason is the idea that somehow AI became the biggest threat that we should all care about, which in my mind is just silly. The biggest thing I’m worried about is humans. We have always been our own biggest threat. Look at all of history: we have done awful things to each other. Yes, we’ve done remarkable things with our technology, but we have also inflicted tremendous harm on each other. So, when it comes to asking whether AI is a risk and whether we should prioritize it, I would say AI is the best thing since sliced bread. We should embrace it wholeheartedly and understand the secrets of unlocking the human brain by embracing AI. We can’t do it by ourselves.
MARTIN FORD: There are a number of other companies in the same general space as Kernel. Elon Musk has Neuralink and I think both Facebook and DARPA are also working on something. Do you feel that there are direct competitors out there, or is Kernel unique in its approach?
BRYAN JOHNSON: DARPA has done a wonderful job. They have been looking at the brain for quite some time now, and they’ve been a galvanizer of success. Another visionary in the field is Paul Allen and the Allen Institute for Brain Science. The gap that I identified was not in understanding that the brain matters; it was in identifying the brain as the primary entry point to everything in existence we care about, and then, through that frame, creating the tools to read and write neural code. To read and write human.
I started Kernel, and then less than a year later both Elon Musk and Mark Zuckerberg did similar things. Elon started a company that was roughly in a similar vein to mine, on a similar trajectory of trying to figure out how to re-write human to play well with AI, and Facebook focused theirs on further engagement with their users within the Facebook experience. Though it’s still to be determined whether Neuralink, Facebook, and Kernel will be successful over the next couple of years, at least there are a few of us going at it, which I think is an encouraging situation for the entire industry.
MARTIN FORD: Do you have a sense of how long all this could take? When do you imagine that there will be some sort of device, or chip that is readily available that will enhance human intelligence?
BRYAN JOHNSON: It really depends upon the modality. If it’s implantable, there is a longer time frame, but if it’s not invasive, then that is a shorter time frame. My guess on the time frame is that within 15 years neural interfaces will be as common as smartphones are today.
MARTIN FORD: That seems pretty aggressive.
BRYAN JOHNSON: When I say neural interfaces, I am not specifying the type. I am not saying that people have a chip implanted in their brain. I’m just saying that the user will be able to bring the brain online.
MARTIN FORD: What about the specific idea that you might be able to download information or knowledge directly into your brain? A simple interface is one thing. But to actually download information seems especially challenging because I don’t believe we have any real understanding of how information is stored in the brain. So, the idea that you could take information from another source and inject it directly into your brain really seems like a strictly science-fiction concept.
BRYAN JOHNSON: I agree with that, I don’t think anybody could intelligently speculate on that ability. We have demonstrated methods for enhanced learning or enhanced memory, but the ability to decode thoughts in the brain has not been demonstrated. It’s impossible to give a date because we are inventing the technology as we speak.
MARTIN FORD: One of the things that I have written a lot about is the potential for a lot of jobs to be automated and the potential for rising unemployment and workforce inequality. I have advocated the idea of a basic income, but you’re saying the problem would be better solved by enhancing the cognitive capabilities of people. I think there are a number of problems that come up there.
One is that it wouldn’t address the issue that a large fraction of jobs are routine and predictable, and they will eventually be automated by specialized machines. Increasing the cognition of workers won’t help them keep those jobs. Also, everyone has different levels of ability to begin with, and if you add some technology that enhances cognition, that might raise the floor, but it probably wouldn’t make everyone equal. Therefore, many people might still fall below the threshold that would make them competitive.
Another point that is often raised with this kind of technology is that access to it is not going to be equal. Initially, it’s going to only be accessible to wealthy people. Even if the devices get cheaper and more people can afford them, it seems certain that there would be different versions of this technology, with the better models only accessible to the wealthy. Is it possible that this technology could actually increase inequality, and maybe add to the problem rather than address it?
BRYAN JOHNSON: Two points about this. At the top of everybody’s minds are questions around inequality, around the government owning your brain, around people hacking your brain, and around people controlling your thoughts. The moment people contemplate the possibility of interfacing with their brain, they immediately jump into loss mitigation mode—what’s going to go wrong?
Then, different scenarios come to mind: Will things go wrong? Yes. Will people do bad things? Yes. That’s part of the problem, humans always do those things. Will there be unintended consequences? Yes. Once you get past all these conversations, it opens up another area of contemplation. When we ask those questions, we assume that humans are somehow in an undisputed, secure position on this planet and that we can set aside all other considerations as a species so that we can optimize for equality and other things.
My fundamental premise is that we are at risk of going extinct, both through harm we do to ourselves and through external factors. I’m coming to this conversation with the belief that whether we enhance ourselves is not a question of luxury. It’s not like should we, or shouldn’t we? Or what are the pros and cons? I’m saying that if humans do not enhance themselves, we will go extinct. By saying that, though, I’m not saying that we should be reckless, or not thoughtful, or that we should embrace inequality.
What I’m suggesting is that the first principle of this conversation is that enhancement is an absolute necessity. Once we acknowledge that, then we can contemplate and say, “Now given this constraint, how do we best accommodate everyone’s interests within society? How do we make sure that we march forward at a steady pace together? How do we ensure that we design the system knowing that people are going to abuse it?” There is a famous quote that the internet was designed with criminals in mind, so the question is, how do we design neural interfaces knowing that people are going to abuse them? How do we design them knowing that the government is going to want to get into your brain? How do we do all of those things? That is a conversation that is not currently happening. People stop at the luxury argument, which I think is short-sighted, and one of the reasons why we’re in trouble as a species.
MARTIN FORD: It sounds like you’re making a practical argument that realistically we may have to accept more radical inequality. We may have to enhance a group of people so that they can solve the problems we face. Then after the problems are solved, we can turn our attention to making the system work for everyone. Is that what you’re saying?
BRYAN JOHNSON: No, what I am suggesting is that we need to develop the technology. As a species, we need to upgrade ourselves to be relevant in the face of artificial intelligence, and to avoid destroying ourselves. We already possess the weaponry to destroy ourselves today, and we’ve been on the verge of doing that for decades.
Let me put it in a new frame. I think it’s possible that in 2050, humans will look back and say, “oh my goodness, can you believe that humans in 2017 thought it was acceptable to maintain weapons that could annihilate the entire planet?” What I am suggesting is that there’s a future of human existence that is more remarkable than we can even imagine. Right now, we’re stuck in our current conception of reality, and we can’t even reach the contemplation that we might be able to create a future based on harmoniousness instead of competition, and that we might somehow have sufficient resources and the mindset for all of us to thrive together.
We immediately jump to the fact that we always strive to hurt one another. What I am suggesting is that this is why we need enhancement: to get past these limits and cognitive biases that we have. So, I am in favor of enhancing everybody at the same time. That puts a burden on the development of the technology, but that’s what the burden needs to be.
MARTIN FORD: When you describe this, I get the sense that you’re thinking in terms of not just enhancing intelligence, but also morality and ethical behavior and decision making. Do you think that there’s potential for technology to make us more ethical and altruistic as well?
BRYAN JOHNSON: To be clear, I find that intelligence is such a limiting word in its conception. People associate intelligence with IQ, and I’m not doing that at all. I don’t want to suggest only intelligence. When I talk about humans radically improving themselves, I mean in every possible realm. For example, let me paint a picture of what I think could happen with AI. AI is extremely good at performing the logistical components of our society; for example, it will be a lot better at driving cars than humans are. Give AI enough time, and it will be substantially better, and there will be fewer deaths on the road. We’ll look back and say, “can you believe humans used to drive?” AI is a lot better at flying an airplane on autopilot; it’s a lot better at playing Go and chess.
Imagine a scenario where we can develop AI to a point where AI largely runs the logistical aspects of everyone’s lives: transportation, clothing, personal care, health—everything is automated. In that world, our brain is now freed from doing what it does for 80% of the day. It’s free to pursue higher-order complexities. The question now is, what will we do? For example, what if studying physics and quantum theory produced the same reward system that watching the Kardashians does today? What if we found out that our brains could extend to four, five, or ten dimensions? What would we create? What would we do?
What I’m suggesting is the hardest concept in the entire world to grasp, because our brain convinces us that we are an all-seeing eye, that we understand all of the things around us, and that current reality is the only reality. What I am suggesting is that there is a future in cognitive enhancement that we can’t even see, and that our imaginations are too limited to contemplate it. It’s like going back in time and asking Gutenberg to imagine all the kinds of books that would be written; since then, the literary world has flourished for centuries. The same thing is true for neural enhancement, and so you start to get a sense of how gigantic a topic this is.
In traveling through this topic, we’ll get into the constraints of our imagination and into human enhancement; people will have to address all their fears even to get to a point where they’d be open to thinking about this. They have to reconcile with AI; they have to figure out if AI is a good thing or a bad thing. If we did enhance ourselves, what would it look like? To squeeze this all into one topic is really hard, and that’s why this stuff is so complex, but also so important. Getting to a level where we can talk about this as a society is very hard, because you have to scaffold your way through all the different pieces, and finding someone who is willing to scaffold through those different layers is the hardest part of this.
MARTIN FORD: Assuming you could actually build this technology, then how as a society do we talk about it and really wrestle with the implications, particularly in a democracy? Just look at what’s happened with social media, where a lot of unintended and unanticipated problems have clearly developed. What we’re talking about here could be an entirely new level of social interaction and interconnection, perhaps similar to today’s social media, but greatly amplified. What would address that? How should we prepare for that problem?
BRYAN JOHNSON: The first question is, why would we expect anything different than what’s happened with social media? It’s entirely predictable that humans will use the tools they are given to pursue their own self-interests along the lines of making money, gaining status, respect, and an advantage over others. That’s what humans do, it’s how we’re wired, and it’s how we’ve always done it. That’s what I am saying: we haven’t improved ourselves. We’re the same.
We would expect this to happen just like it did with social media; after all, humans are humans. We’ll always be humans. What I’m suggesting is that this is the reason why we enhance ourselves. We know what humans do with stuff; it’s a very proven model. We have thousands and thousands of years of data to know what humans do with stuff. We need to go beyond humans, to something akin to humanity 3.0 or 4.0. We need to radically improve ourselves as a species beyond what we can imagine, but the issue is that we don’t have the tools to do that right now.
MARTIN FORD: Are you suggesting that all of this in some sense would have to be regulated? There’s a possibility that as an individual, I might not want my morality to be enhanced. Perhaps I just want to enhance my intelligence, my speed, or something similar, so that I can profit from that without buying in to the other beneficial stuff that you perceive happening. Wouldn’t you need some overall regulation or control of this to be sure that it’s used in a way that benefits everyone?
BRYAN JOHNSON: May I adjust the framing of your question in two ways? First, your statement about regulation implicitly assumes that our government is the only group that can arbitrate interests. I do not agree with that assumption. The government is not the only group in the entire world that can regulate interests. We could potentially create self-sustaining communities of regulation; we do not have to rely on government. New regulatory bodies or self-regulating bodies can emerge that keep the government from being the sole keeper of that role.
Second, your statement on morals and ethics assumes that you as a human have the luxury to decide what morals and ethics you want. What I’m suggesting is that if you look back through history, almost every biological species that has ever existed on this earth over its four-plus billion years has gone extinct. Humans are in a tough spot, and we need to realize we’re in a tough spot, because we were not born into an inherent position of luxury. We need to make very serious contemplations, which does not mean that we’re not going to have morals and ethics; we will. It just means that they need to be balanced against the realization that we are in a tough spot.
For example, there’s a couple of books that have come out, like Factfulness: Ten Reasons We’re Wrong About the World—and Why Things Are Better Than You Think, by Hans Rosling, and Steven Pinker’s The Better Angels of Our Nature: Why Violence Has Declined. Those books basically say that the world’s not bad, and that although everyone says how terrible it is, all the data says it’s getting better, and getting better faster. What they’re not contemplating is that the future is dramatically different to the past. We’ve never had a form of intelligence like AI that has progressed this fast. Humans have never had tools this destructive. We have not experienced this future before, and it’s our very first time going through this.
That’s why I don’t buy the historical determinism argument that somehow, because we’ve done well in the past, we’re guaranteed to do well in the future. I would say that I’m equal parts optimistic about what the future can bring and equal parts cautious. I’m cautious in the sense of acknowledging that in order for us to be successful in the future, we must achieve future literacy: we must be able to start planning for, thinking about, and creating models for the future.
If you look at us as a species now, we fly by the seat of our pants. We pay attention to things only when they become a crisis; we can’t plan ahead, and as humans, we know this. We typically do not get ahead in life if we don’t plan for it, and as a species, we have no plan. So again, if we are hoping to survive in the future, what gives us confidence that we can do that? We don’t plan for it, we don’t think about it, and we don’t look at anything beyond individuals, individual states, companies, or countries. We’ve never done it before. How do we deal with that in a thoughtful way so that we can maintain the things we care about?
MARTIN FORD: Let’s talk more generally about artificial intelligence. First of all, is there anything that you can talk about in terms of your portfolio companies and what they are doing?
BRYAN JOHNSON: The companies that I invested in are using AI to push scientific discovery forward. That’s the one thing they all have in common, whether they’re developing new drugs to cure disease, or finding new proteins to serve as inputs into agriculture, food, drugs, pharmaceuticals, or physical products. Whether these companies are designing microorganisms, as in synthetic biology, or designing new materials, as in true nanotech, they’re all using some form of machine learning.
Machine learning is a tool that is enabling discovery faster and better than anything we’ve ever had before. A couple of months ago, Henry Kissinger wrote an article in The Atlantic saying that when he became aware of what AlphaGo and its successors did in Go and chess, he was worried about “strategically unprecedented moves.” He literally sees the world as a board game, because he was in politics in the Cold War era when the US and Russia were arch rivals, and we literally were, both in chess and as nation states. He saw that when you apply AI to chess and Go, games that human geniuses have been playing for thousands of years, the AI came up with genius moves within a matter of days, moves that we had never seen before.
So, sitting underneath our nose the entire time was undiscovered genius. We didn’t know it was there, and we couldn’t see it ourselves, but AI showed it to us. Henry Kissinger saw that, and he said it makes him scared. I see that, and I say it’s the best thing in the entire world, because AI has the ability to show us what we cannot see ourselves. This speaks to the limitation I described earlier: humans cannot imagine the future. We cannot imagine what radically enhancing ourselves means, we can’t imagine what the possibilities are, but AI can fill this gap. That’s why I think it’s the best thing that could ever happen to us; it is absolutely critical for us to survive. The issue is that most people, of course, have accepted this narrative of fear from the outspoken people who have talked about it, and I think it’s terribly damaging to our society that the narrative is ongoing.
MARTIN FORD: There is a concern expressed by people like Elon Musk and Nick Bostrom, where they talk about the fast take-off scenario, and the control problem related to superintelligence. Their focus is on the fear that AI could get away from us. Is that something we should worry about? I have heard the case made that by enhancing cognitive capability we will be in a better position to control the AI. Is that a realistic view?
BRYAN JOHNSON: I’m appreciative of Nick Bostrom for being as thoughtful as he has been about the risks that AI presents. He started this whole discussion, and he’s been fantastic in framing it. It is a good use of time to contemplate how we might anticipate undesired outcomes and work to fend those off, and I am very appreciative that he allocated his brain to do that.
Regarding Elon, I think the fear mongering that he has done is a negative for society, because in comparison it has not been as thorough and thoughtful as Nick’s work. Elon has basically just taken it out to the world, and both created and inflicted fear among a class of people who can’t comment intelligently on the topic, which I think is unfortunate. I also think we would be well served as a species to be humbler in acknowledging our cognitive limitations and in contemplating how we might improve ourselves in every imaginable way. The fact that this is not our number one priority as a species demonstrates just how much we need that humility.
MARTIN FORD: The other thing I wanted to ask you about is that there is a perceived race with other countries, and in particular China both in terms of AI, and potentially with the kind of neural interface technology you’re working on with Kernel. What’s your view on that? Could competition be positive since it will result in more knowledge? Is it a security issue? Should we pursue some sort of industrial policy to make sure that we don’t fall behind?
BRYAN JOHNSON: It’s how the world works currently. People are competitive, nation states are competitive, and everybody pursues their self-interest above that of others. This is exactly how humans will behave, and I come back to the same observation every single time.
The future that I imagine for humans, one that paves the way for our success, is one in which we are radically improved. Could it mean we live in harmoniousness, instead of a competition-based society? Maybe. Could it mean something else? Maybe. Could it mean a rewiring of our ethics and morals so profound that we won’t even be able to recognize it from our viewpoint today? Maybe. What I am suggesting is that we may need a level of imagination about our own potential, and the potential of the entire human race, to change this game, and I don’t think the game we’re playing now is going to end well.
MARTIN FORD: You’ve acknowledged that if the kinds of technologies that you are thinking about fell into the wrong hands, then that could pose a great risk. We’d need to address that globally, and that seems to present a coordination problem.
BRYAN JOHNSON: I totally agree; I think we absolutely need to focus on that possibility with the utmost attention and care. That’s how humans and nation states are going to behave, based on historical data.
An equal part to that is that we need to extend our imagination to a point where we can alter that fundamental reality, so that we may not have to assume that everyone is just going to work in their own interests and that people will do whatever they can to other people to achieve what they want. What I am suggesting is that calling those fundamentals into question is something we are not doing as a society. Our brain keeps us trapped in our current perception of what reality is, because it’s very hard to imagine that the future would be different to what we currently live in.
MARTIN FORD: You have discussed your concern that we might all become extinct, but overall, are you an optimist? Do you think that as a race we will rise to these challenges?
BRYAN JOHNSON: Yes, I would definitely say I’m an optimist. I’m absolutely bullish on humanity. The statements I make about the difficulties that we face are intended to create a proper assessment of our risk. I don’t want us to have our heads in the sand. We have some very serious challenges as a species, and I think we need to reconsider how we approach these problems. That’s one of the reasons why I founded OS Fund—we need to invent new ways to solve the problems at hand.
As you’ve heard me say many times now, I think we need to rethink the first principles of our existence as humans, and what we can become as a species. To that end, we need to prioritize our own improvement above everything else, and AI is absolutely essential for that. If we can get to a point where we prioritize our improvement and fully engage with AI, in a way that we both progress together, I think we can solve all the problems that we are facing, and I think we can create an existence that’s far more magical and fantastic than anything we can imagine.
BRYAN JOHNSON is founder of Kernel, OS Fund and Braintree.
In 2016, he founded Kernel, investing $100M to build advanced neural interfaces to treat disease and dysfunction, illuminate the mechanisms of intelligence, and extend cognition. Kernel is on a mission to dramatically increase our quality of life as healthy lifespans extend. He believes that the future of humanity will be defined by the combination of human and artificial intelligence (HI+AI).
In 2014, Bryan invested $100M to start OS Fund, which invests in entrepreneurs commercializing breakthrough discoveries in genomics, synthetic biology, artificial intelligence, precision automation, and the development of new materials.
In 2007, Bryan founded Braintree (and acquired Venmo), which he sold to PayPal in 2013 for $800M. Bryan is an outdoor-adventure enthusiast, pilot, and the author of a children’s book, Code 7.