René Descartes revolutionized philosophy by systematically doubting all previously established knowledge, going so far as to challenge something as immediate, though by no means obvious, as the existence of reality. To revisit his age-old query: How do we know that everything we see around us isn’t just a colossal deception whipped up by an evil genius? The impossibility of simulating the extremely complex interactions we have with the external world and, as Wittgenstein argued, the impossibility of acquiring even the tiniest bit of knowledge without someone to interact with lead us to conclude that reality exists. Reality, however, isn’t necessarily what it appears to be, since we don’t have access to reality itself (in Kant’s phrase, die Dinge an sich, the things in themselves) but only to our subjective interpretation of it. In other words, the perception we have of reality is a construction of the brain.
We are currently experiencing a revolution similar to the one Descartes unleashed, one that is once again shaking the foundations of philosophy. Descartes posited a division between mind and matter, but scientists have largely set aside Cartesian dualism in favor of materialism. We no longer see the mind as an entity separate from the brain, but as the latter’s activity. Mind and brain are one and the same, just like temperature and the kinetic energy of the microscopic movement of molecules: two different ways of describing the same phenomenon. However, adopting materialism gives rise to other, apparently unsolvable issues that have kept both philosophers and neuroscientists up at night for quite a while. Perhaps the biggest of all is the “hard problem” of consciousness: if the mind is nothing more than the firing of neurons, how can electrical impulses in my brain make me feel, and be aware of, for example, the smell of jasmine?
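The temperature analogy can be made precise; this is a standard result of kinetic theory rather than a claim of this book. For an ideal monatomic gas, the temperature and the mean kinetic energy of the molecules are linked by the identity

\[
\langle E_k \rangle \;=\; \tfrac{1}{2}\, m\, \langle v^2 \rangle \;=\; \tfrac{3}{2}\, k_B\, T,
\]

where m is the molecular mass, v the molecular speed, and k_B Boltzmann’s constant. Talking about the gas’s temperature and talking about the average motion of its molecules are two descriptions of one and the same physical fact, which is exactly the relation materialism posits between mind and brain.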
The sci-fi movies we have discussed were, to a degree, a springboard that led us to raise these questions—questions that great thinkers such as Aristotle, Descartes, Locke, and Hume, among others, have been asking for centuries; questions whose answers we are now beginning to unlock. The key is always the same: understanding that mental processes are a construction of the brain.
Does consciousness exist? Of course it does, though not as an ethereal and mysterious entity, but as the coordinated activity of groups of neurons. How, then, can we explain the feeling of smelling jasmine? Easy. When we smell jasmine, we activate groups of neurons that identify this particular smell, which in turn activate other groups of neurons encoding related feelings, experiences, and emotions: the color and texture of its petals, what my grandmother’s jasmine plant looked like, the delight she felt when smelling its fragrance, and many other things that, when evoked, consciously or unconsciously, generate the subjective experience I have when I smell jasmine; in other words, qualia.
But how do we explain the feeling of pain? It is clear that pain is a defense mechanism, prompting us, for example, to pull our hand away from a fire. Still, how does the firing of certain groups of neurons generate the feeling that something hurts? Why does the activation of those neurons make us tremble with pain, while the activation of others gives us pleasure? The answer likely lies in the connectivity of those neurons and the cascade of activations they trigger, but we are still far from understanding how these processes actually work. However, departing from Leibniz’s view (he famously argued that if the machinery of thought were like a large windmill, upon entering it we would see only pieces interacting with one another, but nothing that could explain cognitive processes), we currently believe that by learning how the brain’s different mechanisms work we can, in fact, begin to explain cognitive processes and, in particular, something as elusive as consciousness.
Another big issue that keeps philosophers (and Western religious scholars) up at night is free will, especially after the scientific revolution of the seventeenth century and the rise of determinism. How can we be free if everything we are about to do is already predetermined? Scholastic philosophers, notably Thomas Aquinas, asked themselves how we could be free if God knows beforehand every one of our actions (and, on the other hand, what’s left of our freedom if we can’t choose to sin).
The idea of an autonomous mind that makes decisions of its own free will becomes unsustainable from a scientific point of view. Free will is another construction of the brain, generated by consciously anticipating actions that are about to be executed. But these actions are generated by the firing of neurons and are therefore predetermined. This has devastating consequences, prompting us to reconsider the responsibility we bear for our actions. Furthermore, a lack of free will seems to render our lives meaningless, since everything we might decide in the future, every eventual success and failure, has already been predetermined. However, chaos theory, and the resulting unpredictability of the universe and, specifically, of the activity of our brain (and therefore of our behavior), saves us from this great philosophical dilemma. Our actions are unpredictable and, in practice, it doesn’t really matter that the future is predetermined if no one can see it.
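A minimal sketch of this point, borrowed from textbook chaos theory rather than from neuroscience: the logistic map below is completely deterministic, yet two trajectories whose starting points differ by one part in a billion become utterly uncorrelated within a few dozen steps. Determinism without predictability.

```python
# Deterministic chaos in one line of dynamics: the logistic map x -> r*x*(1-x).
# Two starting points differing by one part in a billion soon diverge completely.
r = 4.0                                  # parameter value in the chaotic regime
x_a, x_b = 0.400000000, 0.400000001      # almost identical initial conditions

for step in range(60):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)

# After 60 steps the two "futures" bear no resemblance to each other,
# even though every step was exactly determined by the previous one.
print(x_a, x_b, abs(x_a - x_b))
```

No observer with finite precision could tell the two starting points apart, so no observer could predict which future unfolds, even with the full deterministic rule in hand.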
We are currently experiencing a revolution in artificial intelligence, and sophisticated algorithms already surpass humans at countless tasks. How, then, do we know whether a machine is intelligent or capable of thinking? The question is far from trivial. We assume that other people think as we do because they show similar behaviors (the basis of theory of mind: “putting ourselves in someone else’s shoes” in order to interact socially). But could we say the same of a computer? What would a computer or an android have to do to show us that it is intelligent? These are the questions Turing asked himself, giving rise to his famous test, which, however, proves quite controversial when put into practice.
Nevertheless, we are still far from being surpassed, or even paralleled, by machines; they can excel at specific tasks, but they lack general intelligence. And I think this is key to determining whether a machine can think: what matters is not that it can solve a certain problem, but whether it can solve problems it hasn’t been trained for, showing understanding and the ability to transfer knowledge through analogies and inferences. This is something we do constantly when we face situations we have never experienced before: scenarios in which we know how to react using common sense, making inferences from past experiences. We don’t need, as a machine does, a zillion examples and specific practice to successfully deal with the new problems we face so often in our lives. And on many occasions we can’t really judge whether something is right or wrong; there’s no rule for what to do, and not even a clear outcome, like winning or losing a chess game, that we could use to tune our behavior. Rather, we have a sense of a given situation; we understand what’s going on and somehow decide what’s best. But our ability to understand is based on extracting meaning from things: processing what is essential and discarding an infinity of irrelevant information. This is precisely the basis of brain function, the result of millions of years of evolution. And it wouldn’t surprise me if the two greatest challenges of AI, general intelligence and consciousness, turned out to be linked; if the ability to develop general intelligence, a capacity for making sense of things, were to lead to the development of a sense of self, the so-far elusive problem of how to make a machine aware of its own existence.
The key to brain function is that it stores very little information, because it mainly uses its resources to understand. This is the big difference from how a computer works. A computer can reliably store an enormous amount of information, but it doesn’t understand its content. The brain, in contrast, handles little information because it focuses on the essential and infers the rest in order to attribute meaning to things. Information is not simply stored in the brain; it is constructed. And it is tempting to argue that this active construction process, so far lacking in machines, is key to general intelligence and may also be the crucial feature that gives rise to consciousness. However, we do want machines to store and process information accurately; we don’t want them to construct this information and potentially fail by making incorrect assumptions, as we do, for example, when we experience visual illusions or false memories. This is not the way we build our computers, and who knows what will happen if one day we do.
Delving further into this discussion, we could also ask ourselves: Why don’t other animals have our intelligence? A chimpanzee’s brain isn’t that different from a human’s. There is, however, a noticeable difference that stems from the use of language and everything it facilitates. Language allows us to communicate, transmit knowledge, discuss past experiences, and plan the future. But beyond these unique possibilities, I think the greatest benefit of language is the development of a more conceptual and abstract kind of thought, one that sets aside the immediate details and peculiarities of things. As Borges said: “Every noun is an abbreviation,” and “to think is to forget differences, to generalize, to abstract.” Furthermore, Borges argued (in “Tlön, Uqbar, Orbis Tertius”) that in a world without nouns, science and even knowledge would be impossible. So I dare speculate that, after tens of thousands of years of evolution, one consequence of language has been the development of concept cells, like those I discovered a few years ago in experiments with humans: neurons that, incidentally, have yet to be found in other species and may very well be the cornerstone of our capacity for conceptual and abstract thought.
The representations provided by concept cells are the basis of our memory, since the memory of our experiences derives from storing relatively few associated concepts (for example, remembering that I was at a café with a friend talking about a particular subject). The rest is mostly lost to oblivion, allowing us to focus on the essential in order to generate associations and contextualize each new piece of knowledge, something we usually do in our sleep, when we probe even the most bizarre associations. This ability to forget details and focus on the meaning of things is precisely the basis of our intelligence and creativity: our capacity to make associations between initially disparate concepts and understand, for example, that an apple falling to the ground and the moon orbiting in the firmament respond to the same phenomenon, the law of gravity.
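As a deliberately crude illustration of this economy (a sketch of the idea, not of actual neural coding; the concepts and links below are invented), one could picture an episodic memory as a handful of associated concepts from which the scene is reconstructed at recall time:

```python
# Crude sketch: an episode is stored as a few associated concepts;
# sensory detail is discarded and re-evoked at recall (illustration only).
episode = {"café", "friend", "conversation about gravity"}   # the stored gist
associations = {                                             # semantic links
    "café": {"smell of coffee", "background chatter"},
    "friend": {"their usual table"},
}

def recall(gist, links):
    """Rebuild the memory by evoking concepts associated with the stored gist."""
    evoked = set(gist)
    for concept in gist:
        evoked |= links.get(concept, set())
    return evoked

print(recall(episode, associations))   # far richer than what was stored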
Helmholtz maintained that perception is based on processing very little information and making unconscious inferences. We see very little and assume the rest. And like perception, memory is also a construction of the brain. Our feeling of seeing an object is based on identifying it through just a few features. Likewise, the feeling of remembering our past as if it were a movie is based on creating a coherent story from very few recalled events, filling in the blanks with inferences based on common sense. Such constructions of the brain underlie our sense of identity and the feeling that our bodies belong to us. The hand I see typing on the keyboard is my hand because it responds to commands dictated by my brain. If it didn’t, I would feel that it was someone else’s hand, as happens in alien hand syndrome. Conversely, I would consider as my own a synthetic hand that, using the latest achievements in neuroprosthetics, responded reliably to commands from my brain.
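Helmholtz’s “unconscious inference” is nowadays often recast in Bayesian terms: perception combines sparse sensory evidence with prior expectations. A toy sketch of the idea (all objects, features, and probabilities below are invented for illustration):

```python
# Toy "unconscious inference": guessing the object from a single observed
# feature by combining prior expectations with sparse evidence (Bayes' rule).
priors = {"cup": 0.7, "vase": 0.3}            # what I expect to see on a desk
likelihood = {"cup": 0.9, "vase": 0.1}        # P(see a handle | object)

# Posterior: P(object | handle) is proportional to P(handle | object) * P(object)
unnormalized = {obj: likelihood[obj] * priors[obj] for obj in priors}
total = sum(unnormalized.values())
posterior = {obj: p / total for obj, p in unnormalized.items()}

print(posterior)   # {'cup': 0.95..., 'vase': 0.04...}: one feature settles it
```

A single feature, weighted by expectation, is enough to “see” the whole object; the rest is filled in, which is also why expectations sometimes fill in wrongly, as in visual illusions.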
The sense of identity, of continuing to be the same person, is a major philosophical dilemma and, as Locke argued, does not come from our body but from memory continuity. I am the person who not so long ago decided to write this book, the one who studied physics at the University of Buenos Aires, and was blown away by movies such as 2001, Blade Runner, The Matrix, and Until the End of the World. But, surprisingly, this story we create—our story—is a dynamic process that is constantly evolving, because each time we consciously access a memory we are changing it.
So, what’s left of the self? How can what I am most certain about, my own existence, be generated by something as unstable and malleable as my memory? The notion of the self is perhaps one of the most elusive and controversial concepts in philosophy. To Hume it was nothing more than a bundle of sensations; to others, the self, the feeling of being a person, is undeniable. In my view, the self exists, and it is the most elaborate construction of the brain, one that gives entity and identity to that bundle of sensations. (Moreover, we saw that there is a representation of the self in the brain when we found neurons in patients that responded to pictures of themselves.)
By concluding that the construction of the self is based on our memory, and knowing that memory is, in turn, generated by the activity of neurons and their connections, we then wonder whether we could preserve our identity (in a clone or a supercomputer) and somehow become immortal. That’s twenty-first-century philosophy. Therein lies the revolution: the most challenging topics in philosophy, among them identity, consciousness, free will, and animal and machine intelligence, now seem approachable, and we can even treat something as elusive as immortality as a biological, and not just a metaphysical, problem.
Defeating death is perhaps our most compelling challenge; however, we saw that even if we could make an exact copy of our brain, we wouldn’t be the person thus generated. We would create someone with our same memories and our same feeling of identity, of having a self, but that person wouldn’t be us. To a third person, we and our clone would be indistinguishable, but we wouldn’t see through the clone’s eyes or feel what he or she feels. When we conceive the possibility of cloning or teleportation, as in science fiction, we realize that the self doesn’t necessarily have to be unique.
Fear of death is the most natural instinct and comes from the construction of the self, because if there is no self that is going to die, the fear is lost. That, in my opinion, is the fundamental consequence of our brain’s creation of a self: Spinoza’s conatus, the drive to preserve ourselves. Computers and other complex systems like the internet are not aware of themselves (at least to our knowledge). The internet lacks a purpose, a sense of self-preservation; its raison d’être comes from the desires of billions of users, just as if each of the billions of neurons in our brain advocated for its own well-being and existence without fostering the well-being of the self, the person. The internet could disappear tomorrow without a problem if its users found another, more efficient way to share and exchange information, just as the cells in my liver wouldn’t mind (or be conscious of) their extinction if I had a transplant. But it would be a very different situation if the neurons in my brain somehow managed to reconnect themselves and create another person, because that would mean my death, the disappearance of the greater purpose, and I would certainly do everything in my power to prevent it from happening.
Consciousness therefore provides us with an instinct for self-preservation. Returning to the discussion of Turing’s test: a machine could prove that it is intelligent if, like HAL 9000 in 2001 or Skynet in Terminator, it tried at all costs to avoid being disconnected (assuming, of course, that this behavior has not been preprogrammed but arises instead as a consequence of its self-awareness). But how could we induce this higher purpose in a machine? How could we trigger the awakening of its consciousness and its fear of death? Would inducing general intelligence do it? This is without a doubt one of the most fascinating and recurring questions of science fiction and of twenty-first-century philosophy.
We have begun to see how the brain generates a construction of the self, but what determines its continuity? What makes me the same when I wake up each morning but not so if I’m revived after years of being frozen? Are we perhaps constantly dying and resuscitating as another self that believes it is the same as always? Is our dreaded death nothing more than a disruption in this chain of rebirths?
In recent years science has made enormous advances, but questions like these still stand unanswered. However frustrating that may seem, it is the Holy Grail for scientists: what leads us to work tirelessly to satisfy our curiosity, enjoying every moment of the search for answers we may never find. What is undeniable, though, is that, like never before in the history of humanity, we now have methods and tools that we could not even imagine a few years ago, placing us in a privileged position to discuss these questions one-on-one with the best thinkers of all time and to tackle scientifically the greatest of philosophical challenges.