Scientific and Buddhist Views
FOR CENTURIES, theologians, philosophers, and scientists have expressed diverse beliefs about what makes us distinctively human. Theologians have emphasized our ability to distinguish between good and evil, with great ramifications for our immortal souls. Philosophers commonly cite our ability to think rationally, and biologists look to our genetic makeup and enlarged brains. Buddhist thinkers value human communication and reasoning very highly; nevertheless, their attention is focused on our capabilities of seeking genuine happiness, deliberately applying ourselves to leading ethical lives, cultivating our minds through meditation, and realizing freedom from suffering and its causes by directly understanding the nature of reality. In short, Buddhists emphasize the rarity and great value of human existence because it provides unparalleled opportunities for realizing our highest potential.
The conceptual framework that dominates biology and the cognitive sciences today is neo-Darwinism, which combines Darwin’s theory of natural selection with Mendelian genetics. Many of its adherents, drawing also on neuroscience, embrace it as an ideological framework capable of answering all questions about the nature of life and human existence. In this belief system, human nature is explained in terms of biological evolution and genetic influences on the brain.
HUMAN NATURE AND NATURAL SELECTION
Alfred Russel Wallace was one of the most prominent nineteenth-century scientists to publicly challenge the adequacy of the theory of natural selection to explain certain aspects of human existence. In particular, he cited three occasions in the history of life that cannot be explained by natural selection: the initial origin of organic life; the emergence of sensation or consciousness in living organisms; and the existence of humanity’s most characteristic and noblest faculties, including mathematical reasoning, aesthetic appreciation, abstract thinking, morality, and cultural intelligence. He commented in this regard:
Neither natural selection or the more general theory of evolution can give any account whatever of the origin of sensational or conscious life. . . . But the moral and higher intellectual nature of man is as unique a phenomenon as was conscious life on its first appearance in the world, and the one is almost as difficult to conceive as originating by any law of evolution as the other.1
Nevertheless, under the increasingly dominant influence of scientific materialism through the twentieth century, mainstream biologists instead followed the lead of Soviet biologist Aleksandr Oparin (1894–1980), who proposed in 1924 that the first living organisms formed spontaneously out of nonliving substances, implying a smooth continuum from inorganic to organic matter.2 Although this hypothesis has received no better empirical confirmation than Darwin’s proposal of divine intervention, scientific acceptance of Oparin’s materialistic hypothesis gained great momentum in 1953, when American biologist Stanley Miller (1930–2007) exposed gases simulating Earth’s early atmosphere to an electric discharge and produced amino acids, suggesting a chemical basis for the emergence of living organisms.3
Many scientists at that time were confident that this discovery would swiftly open the way to creating life in the laboratory, but the transition from an amino acid to a living cell turned out to be far more challenging than biologists imagined, for the two are vastly different entities. The amino acids that Miller created may be likened to individual screws in a functioning watch, which corresponds to a living cell. Individual screws are needed to make a watch, just as amino acids are needed in the formation of a cell; nevertheless, isolated amino acids are dead matter incapable of the interactions that characterize a living cell. A watchmaker knows how to assemble a watch from screws and other mechanical components, but biologists have made little progress in artificially transforming amino acids into a living organism. Biologists understand the evolution of species by mutation and natural selection, but this process presupposes the existence of a living, reproducing organism. Despite the media hype to the contrary, they have not succeeded in creating a living organism “from scratch,” using inorganic chemicals. Although biologists have imagined a variety of circumstances under which life may have originally emerged on our planet, they have come to no consensus on this matter, and none of their hypotheses has lent itself to scientific validation or rejection.
The science of genetics has thus far provided some surprising results that belie assumptions about the uniqueness of human nature. Prior to the Human Genome Project, many geneticists believed that the number of human genes, which they estimated to be as many as 100,000, would clearly set our species apart from all other living organisms. The completion of this project was celebrated in April 2003, and it is now estimated that the number of human protein-coding genes is between 20,000 and 25,000. This lower figure came as a shock to scientists who had assumed that the number of genes in a specific organism would correspond to its complexity, providing a genetic explanation for the unique characteristics of human existence. Instead, the human gene count is barely greater than that of the simple roundworm, with about 20,000 genes. Even more unsettling was the discovery that the genome of rice, the first to be sequenced of the great grains, has around 50,000 genes, roughly the same as the genomes of soybean and apple!4
HUMAN NATURE AND THE ORIGINS OF CONSCIOUSNESS
Contemplating the origins of sensation and consciousness in living organisms, Wallace found no way for materialist theories to bridge the gap from molecules to subjective experience. Even if molecules are organized in configurations of increasing complexity, nothing in their physical makeup suggests that they might give rise to sensation or consciousness.5 Despite his objections, over the past century, cognitive scientists have overwhelmingly embraced the notion that the brain is solely responsible for the generation of all states of consciousness. Such confidence is remarkable in light of the fact that 1) there has been no agreement on a scientific definition of consciousness; 2) there are no objective means for detecting the presence of consciousness or mental phenomena in any living organism (let alone in computers or robots); 3) scientific knowledge of the necessary and sufficient causes for consciousness is very limited; 4) the neural correlates of consciousness are only beginning to be understood; and 5) there is no explanation for how the brain generates and influences mental phenomena and no understanding of how subjective states, such as belief, desire, and confidence, influence the brain and the rest of the body. Most brain scientists remain hopeful that they will eventually solve these problems within the context of the materialist paradigm, but nonbelievers have grounds to be deeply skeptical.
For Alfred Wallace, a major limitation of the theory of evolution was the failure of natural selection to explain the increased size of the human brain relative to the brains of our ancestors. According to Darwin and Wallace’s theory, species adapt to their changing environments on the basis of two fundamental processes—survival and procreation—and natural selection enables them to acquire traits that support those two endeavors. But Wallace found no answer in evolutionary theory to explain how the human brain developed so far beyond the needs of human survival and procreation. Seeing no natural explanation for the size and complexity of the human brain, he attributed them to an overruling intelligence that organized the universe to support or even encourage the indefinite advancement of humanity’s mental and moral nature.6 Charles Darwin disagreed vociferously, insisting that the evolution of the human brain would eventually be explained through a combination of natural selection and sexual selection, the latter defined as the effects of the struggle by the individuals of one sex, generally male, for sexual access to the other sex.
Since then, biologists have proposed various hypotheses to explain the evolutionary development of the extraordinary capacities of the human mind, but they have yet to agree upon or empirically validate these theories. Some have proposed that our advanced mental abilities are “spandrels,” a term coined by Harvard paleontologist Stephen Jay Gould and population geneticist Richard Lewontin to describe a phenotypic characteristic that is a byproduct of the evolution of some other feature and not necessarily adaptive in itself. Although the term may indeed be useful for understanding evolution as a whole, it has little explanatory power concerning the specific emergence of the unique qualities of the human mind.
Recent extraordinary advances in cognitive neuroscience have demonstrated an ever-increasing array of close correlations between specific parts and functions of the brain and specific mental traits and characteristics; this includes the defining human abilities highlighted by Wallace, such as mathematical reasoning, aesthetic appreciation, abstract thinking, morality, and cultural intelligence. Biological processes in the cerebral cortex are closely correlated with a number of these abilities.
Such mind-brain correlations were dramatically raised to scientific attention in 1848, when a foreman named Phineas Gage, who was helping to build a railway in Vermont, tamped down an explosive charge with a three-foot iron bar; the charge detonated prematurely, driving the rod through his head. Remarkably, Gage survived this gruesome accident, but his personality underwent a radical change. Before his injury, he had been a sober, well-respected, industrious individual, but afterward, he became a foul-mouthed, alcoholic drifter. The popular conclusion was that his personality had been altered in specific ways by damage to specific parts of his brain.
Thirteen years later, French neurologist Paul Broca (1824–80) systematized the study of the effects of brain damage on the mind with the discovery that certain speech defects result from damage to the left frontal lobe of the brain. Since that time, neuroscientists have been studying the correlations between mind and brain. The closeness of these correlations has led many cognitive scientists to claim that a human being is essentially a brain, implicating this organ as the master key to understanding human nature.7
For many years, psychiatrists distinguished between psychiatric and neurological illnesses, based on the assumption that the psyche could malfunction independently of physical processes in the brain. That idea has been abandoned by some professional therapists due to the perceived effectiveness of pharmaceuticals, such as antidepressants, in treating psychiatric illness. In 1989, Lewis L. Judd, then Director of the National Institute of Mental Health, acknowledged that clinical depression may be catalyzed by psychological processes, such as excessive rumination on a miserable experience. But he insisted that the primary causes of major depression can be traced to interactions between genes and the physical environment. At that time, roughly thirty types of medication were available to treat depression, and he believed that the vast majority of cases could be successfully managed through such pharmaceutical interventions.8
More recent studies, however, have shown that popular antidepressants, including Prozac and Paxil, have little impact on most patients; only those who are diagnosed “at the upper end of the very severely depressed category” receive meaningful benefit from these widely prescribed drugs. For less severely affected patients, antidepressants are barely more effective than a placebo. Judd’s belief that most cases of depression can be managed with drugs has proven to be overly optimistic.
There are clear indications of a trend toward higher rates of depression, especially in Western countries, and at increasingly younger ages. There is evidence suggesting that the occurrence of clinical depression has doubled in the past sixty years. The value of talk therapy in combination with drug therapy—especially when administered by a compassionate caregiver—has been demonstrated by numerous studies. Thus, there remain compelling pragmatic reasons for distinguishing between psychological and physical influences on the human mind, regardless of one’s metaphysical beliefs.9
With the rise of cognitive psychology and neuroscience over the past few decades, many people have come to think of the brain as analogous to a computer, with the mind being like the software that runs on that computer. One eminent scientist who has promoted this view is Stephen Hawking, who recently remarked, “I think the brain is essentially a computer and consciousness is like a computer program. It will cease to run when the computer is turned off. Theoretically, it could be re-created on a neural network, but that would be very difficult, as it would require all one’s memories.”10 This is a case of seeking to explain something scientists do not understand—namely, consciousness—simply by likening it to things they do understand—namely, a computer and its program.
For the moment, let’s assume that the brain is indeed essentially a computer. The information in a computer is fundamentally like information in a book. In both cases, the information exists only relative to a conscious agent who stored it there and relative to those conscious agents who are able to retrieve it. Apart from consciousness, there is no semantic information anywhere in the universe. Programs are installed in computers by conscious agents in much the same way that chapters are inserted into a book. They don’t emerge by themselves from the internal components of computers, and they have no existence of their own independent of those conscious agents. The silicon chips and electrons in a computer do not know, remember, or learn anything, nor do they talk among themselves or make decisions. Such notions are just as silly as the idea that the ink marks in a book could know the book’s plot or that the components of a thermostat could know a room’s temperature.
The elementary particles, atoms, molecules, and complex networks in a computer do give rise to emergent properties, but all of them are physically detectable and understandable in terms of the laws of physics. There is nothing mysterious about the workings of computers or their programs, for they are all man-made. The same holds for the brain: the emergent properties of its particles, atoms, molecules, and complex networks of neurons, such as heat and electromagnetic fields, are likewise physical, physically detectable, and comprehensible in terms of the laws of physics. But consciousness is not physical or physically detectable, and its alleged emergence from chemical and electromagnetic activity in the brain—no matter how complex the interactions—is not comprehensible in terms of the laws of physics. The parallel between a computer and its programs and the brain and consciousness simply collapses; such an analogy merely obscures the actual nature and origins of consciousness rather than illuminating anything about it.
Consciousness no more resides in the brain than it does in silicon-based computers or between the covers of books. It is not an emergent property or function of matter, and the unquestioned belief that it must be is the greatest superstition promoted by scientists today. Consciousness, its origins, and its role in nature remain unknown to modern science. Until scientists adopt suitable methods for directly investigating consciousness—instead of confining their inquiries to its neural correlates and behavioral expressions—it will continue to remain a mystery.
On purely scientific grounds, some scientists reject the analogy between the brain and a computer, observing that even the subtlest damage to a computer will generally cause errors or failure. In contrast, many neuroscientific studies have shown that a human being can survive, albeit changed, despite quite severe brain damage. A unique and dramatic example of neuroplasticity was reported in 2009. A ten-year-old girl from Germany was born with only the left hemisphere of her brain; the development of the right hemisphere had terminated prior to the seventh week of gestation. Dr. Lars Muckli, a researcher from the Center for Cognitive Neuroimaging at the University of Glasgow, commented, “The brain has amazing plasticity but we were quite astonished to see just how well the single hemisphere of the brain in this girl has adapted to compensate for the missing half. . . . Despite lacking one hemisphere, the girl has normal psychological function and is perfectly capable of living a normal and fulfilling life. She is witty, charming and intelligent.”11
Thirty years ago, British pediatrician John Lorber (1915–96) brought several extraordinary case studies to the attention of the scientific community.12 Since the mid-1960s, he had treated patients with spina bifida, a congenital defect involving incomplete closure of the spinal canal. These patients usually suffer from hydrocephalus, commonly known as water on the brain, an abnormal buildup of cerebrospinal fluid that can result in severe retardation and death if not treated. In 1980, Lorber addressed a conference of pediatricians, giving accounts of patients with hydrocephalus who nevertheless exhibited normal mental development. In one remarkable case, a university student with a slightly larger than normal head was referred to Lorber. Although the young man had an IQ of 126 and a first-class honors degree in mathematics, a CAT scan revealed only a thin layer of cortical tissue; most of his cranium was filled with cerebrospinal fluid. Lorber documented over 600 scans of patients with hydrocephalus, which he divided into four groups: those with nearly normal brains; those with 50–70 percent of the cranium filled with cerebrospinal fluid; those with 70–90 percent filled; and those with 95 percent filled.
Many patients in the last group, which comprised less than 10 percent of the study, were severely disabled, but half of them had IQs greater than 100. Lorber concluded that there must be “a tremendous amount of redundancy or spare capacity” in the brain, and that the cortex is “responsible for a great deal less than most people imagine.”13 These conclusions were contested by neurologists at the time, but Lorber’s admittedly provocative question, “Is your brain really necessary?” garnered much public attention; it may be partly responsible for the myth that we only use 10 percent of our brains.14 Some researchers now believe that the normally functioning individuals studied by Lorber had an approximately normal number of brain cells confined in a smaller space, but this unconfirmed hypothesis hardly lays to rest Lorber’s question.15
Evidence in support of materialistic interpretations of the mind-brain relationship dominates scientific journals and the public media, while evidence to the contrary is often overlooked or downplayed. The close correlations between specific brain processes and mental abilities are undeniable, but it is less clear that all states of consciousness and mental functions are always contingent upon brain activity. Despite sharing the predominant conviction among cognitive scientists that the brain produces mental activity, behavioral psychologist William Uttal calls the use of imaging technology to locate specific traits in the brain “the new phrenology.”16 Reviewing current neural theories of mind, he concludes that “we do not yet have the barest glimmerings of how the brain makes the mind.”17 Perhaps this assumption should be seriously questioned.
GULLIBILITY OR SCIENTIFIC INTEGRITY?
Alfred Wallace was not only concerned with the inadequacy of the theory of evolution for explaining the origins of life and consciousness and the distinctive features of the human mind within a materialistic framework; he also developed a keen interest in reports of paranormal mental phenomena that seemed to undermine the materialist view of human existence altogether. By the age of fourteen, he had abandoned the Christian doctrine of his youth, becoming a confirmed materialist. At the age of thirty-nine, he still believed, like most of his scientific peers, that reports of clairvoyance, precognition, and evidence of life after death were either fraudulent or delusional. But his scientific curiosity was insatiable, and he began to explore psychic phenomena with all the rigor he had applied to biological research. After three years of investigating the available evidence, he was compelled by the empirical facts to conclude that some such phenomena withstood rigorous scientific scrutiny and could not be dismissed.
Charles Darwin, in contrast, chose to simply dismiss all such reports out of hand. Likewise, Thomas Henry Huxley showed no interest in evidence of psychic phenomena. He deemed the evidence to be worthless and distrusted even his fellow scientists’ abilities to test its validity, claiming that this was the work of detectives rather than scientists. Darwin and Huxley’s attitude has been adopted by mainstream cognitive scientists throughout and since the twentieth century. Evidence that contradicts the fundamental assumptions of neo-Darwinism is dismissed; approaching it with critical, open-minded scientific curiosity engenders ridicule or silence. This was the attitude of the scholastics who refused to look through Galileo’s telescope, confident that anything they might see that contradicted their beliefs could only be an optical illusion. The empirical sciences continue to wear the blinders of dogmatic scientific materialism.
Wallace was convinced that psychic phenomena were not miraculous in the sense of violating the laws of nature; he believed that they accorded with natural laws yet undiscovered. Given the scientifically undetectable nature of all mental phenomena—including consciousness itself—he rightly concluded:
There is nothing self-contradictory, nothing absolutely inconceivable, in the idea of intelligences uncognizable directly by our senses, and yet capable of acting more or less powerfully on matter. Let direct proof be forthcoming, and there seems no reason why the most skeptical philosopher should refuse to accept it. It would be simply a matter to be investigated and tested like any other question of science. The evidence would have to be collected and examined. The results of the enquiries of different observers would have to be compared.18
In response to the contempt and derision he received from his scientific peers, he countered, “The day will assuredly come when this will be quoted as the most striking instance on record of blind prejudice and unreasoning credulity.”19
Since Wallace’s day, medical advances have provided new and provocative evidence suggestive of paranormal phenomena, such as near-death experiences, which have now been studied both prospectively and longitudinally, generating diverse responses in the scientific and popular literature.20 One of the most striking cases of such an experience occurred in 1991, when Pam Reynolds, a professional singer and songwriter from Atlanta, Georgia, was diagnosed with an aneurysm in her brain stem, one of the most inaccessible regions of her brain. She subsequently underwent a radical procedure called hypothermic cardiac standstill. During this complex operation led by neurosurgeon Robert Spetzler, Pam was placed under general anesthesia and her skull was opened to assess the aneurysm. Finding it to be as large as expected, doctors placed her on cardiopulmonary bypass and reduced her core body temperature to 60 degrees Fahrenheit, eventually completely stopping her heart, respiration, and brain activity. With her circulation stopped, the blood was drained from Pam’s brain so that the aneurysm could be repaired, after which the bypass machine restarted circulation, warming her body; finally, she was successfully resuscitated. During an operation that lasted seven hours, Pam’s heart was stopped for about an hour, and her brain activity was undetectable for a short part of that time.
The day following her surgery, she reported a near-death experience that began soon after she was anesthetized. She recalled that during the operation, she had a sense of being sucked out the top of her head and of looking down at her body, seeing the room with crystal-clear vision, even though her eyes were taped shut, and hearing sounds in the room with unprecedented clarity, even though plugs had been inserted in her ears. She reported seeing the drill used to penetrate her skull and, being pitch-perfect, she recognized the sound it produced as a natural D. She also heard a female voice saying “something about my veins and arteries being very small.” Pam went on to describe her experiences of moving through a dark shaft toward a light, where she encountered her grandmother and other deceased relatives, and of finally returning to her body. According to reports of her verbal account, at least some of her auditory and visual memories of events in the operating room during her surgery were accurate.21
Some researchers in the field of near-death experiences have concluded that in some cases consciousness may persist without dependence on brain functions, and they cite this case as compelling evidence to support their view.22 However, others have concluded that even though she was paralyzed and sedated, Pam’s accurate auditory memories at the beginning of her out-of-body experience can be explained by “anesthesia awareness.”23 Such partial awakenings occur in about 0.1 to 0.3 percent of general surgical procedures, but, unlike Pam’s experience, they are often extremely unpleasant, frightening, and painful. Memories of such experiences are typically brief and fragmentary and primarily auditory or tactile, but never visual. Claims that adequately anesthetized patients retain significant capacity to be aware of their environment in more than rudimentary ways—let alone to hear and understand—have not been substantiated. In short, many of the contradictory interpretations of Pam’s experience demonstrate a strong “confirmation bias,” in which aspects of the case that support one’s preconceptions are highlighted while contrary evidence is ignored or explained away.
Despite the contradictory claims of researchers who base their conclusions on third-hand reports, it would be reasonable to place the greatest credence in those who were most directly involved in this event, namely Pam herself and the surgeons who operated on her. Thus, in 2008 the National Geographic Channel aired a documentary on her case entitled “Against All Odds: I Came Back from the Dead,” in which they interviewed Pam and her doctors. When she narrated her own experience on camera, it was with a sense of wonder rather than fear or anguish, and she claimed that her memory of this event had remained clear from year to year without fading or becoming foggy. Chief neurosurgeon Spetzler commented, “She really had a bird’s-eye view of what was going on. Now whether that image came from somewhere else that she then internalized somehow, I don’t think there’s any way to tell. But it was sort of intriguing with how well she described what she shouldn’t have been able to see.”24 This statement implies skepticism toward the conclusion that her experience was simply another well-documented case of anesthesia awareness. Dr. Karl Greene, another of the neurosurgeons at the Barrow Neurological Institute, concluded, “When it comes to the whole issue of consciousness and the brain, all bets are off.” In summary, those most intimately involved in this case appear to have kept an open mind regarding the source of her memories, acknowledging that they are not readily explained in terms of mainstream science, at least in a way that is compelling to all intelligent, interested people with diverse starting assumptions.25
The question of whether an individual continuum of consciousness may precede the formation of the brain was extensively researched by Ian Stevenson. He and his team carefully investigated thousands of accounts of alleged past-life recall in children, most of whom stop speaking about these memories between the ages of five and eight.26 Stevenson’s colleague Jim Tucker has also studied cases of children who allegedly remember an intermediate period between the end of their past life and their birth in the current life. These recollections were often associated with unpleasant experiences in the previous life. In at least one case, the subject reported that while in this intermediate state, he did not realize he was dead. Tucker concludes, “Since the children who report such memories tend to make more verified statements about the previous life they claim to remember than do other subjects, and tend to recall more names from that life, their reports of events from the intermission period seem to be part of a pattern of a stronger memory for items preceding their current lives.”27 Such evidence suggests that the factors contributing to the uniqueness of an individual human being are not confined to the brain, genetics, and the physical environment, but may also include influences from past lives that affect biological processes in this life.
Most biologists and cognitive scientists find such a hypothesis untenable, for they cannot imagine how anything immaterial, or nonphysical, could causally interact with the brain, the body, and the physical world. Neurobiologists search for brain mechanisms to explain the generation of mental processes and states of consciousness, but their understanding is generally based on a nineteenth-century view of physics. The simple reason for this is that professional training as a biologist or cognitive scientist rarely includes rigorous exposure to twentieth-century physics.
The history of physics reveals that the categories of “material” and “physical” have gradually evolved over the past four centuries, with profound changes in the past century due to the revolution in quantum mechanics and relativity theory.28 When Darwin and Wallace formulated their theory of evolution via natural selection, in the infancy of modern psychology, physicists endorsed a mechanistic view of electromagnetic fields rippling through the ether, interacting with substantial, material atoms embedded in absolute space and time. Since then, quantum physics has demonstrated that the elementary particles composing atoms have no real existence independent of measurement and do not simultaneously possess precise locations and velocities. Likewise, although the principle of the conservation of mass-energy remains a central pillar of modern physics, energy has been reduced to a mathematical principle rather than a concrete substance or mechanism, and no one today knows what energy actually is.29
Scientific materialists use the terms “natural” and “physical” interchangeably, but in today’s physics, definitions of “physical” have become increasingly diverse and abstract. Physicists regard many natural phenomena as physical even if they are not composed of matter, including energy fields, probability waves, relativistic space-time, the extra dimensions of superstring theory, Hilbert space, and dark energy.30 One practical definition of “physical” is anything that can causally interact with the material world, but this is obviously an open-ended category. Unless we presume to know all the laws of nature, we cannot claim to know the full range of natural phenomena that can influence matter. Moreover, quantum mechanics has shown that momentary violations of the principle of conservation of mass-energy do occur, and the briefer the violation, the greater it may be. This opens the door to the possibility of nonphysical influences in the material world, even though they are bound to be undetectable using our current systems of observation and measurement.
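The claim that conservation of mass-energy can be momentarily violated alludes to the energy-time uncertainty relation of quantum mechanics; the text gives no formula, so the following is an editorial gloss rather than the author’s own statement:

```latex
% Energy-time uncertainty relation (heuristic form):
\Delta E \,\Delta t \gtrsim \frac{\hbar}{2}
% A momentary energy fluctuation \Delta E can persist only for a
% duration \Delta t \lesssim \hbar / (2\,\Delta E): the larger the
% fluctuation, the briefer the interval over which it can occur.
```

On this reading, “the briefer the violation, the greater it may be” simply restates the inverse relation between the magnitude of the fluctuation and its duration.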
The possibility of a nonphysical continuum of consciousness contributing to the development of the mind of a human embryo therefore cannot be dismissed on purely scientific grounds. Moreover, Stevenson and his colleagues have studied a number of cases in which the manner of death in one lifetime is apparently correlated with birthmarks on the body in the next life, implying a causal influence on the physical body of the embryo.31 In eighteen cases, they identified birthmarks on children that corresponded to the entry and exit gunshot wounds of deceased persons whose lives the children reportedly recalled. Often a small birthmark corresponded to the entry wound and a larger and more irregularly shaped mark appeared at the site of the exit wound.32 To account for the transference of such birthmarks from one body to the next incarnation, Stevenson postulated the existence of a “field” that retains memories and dispositional characteristics of the deceased. He called this hypothetical field a “psychophore.”33 Just as anomalies in observations of the cosmos prompt theoretical physicists to propose new classes of phenomena to explain them, progress in our understanding of the mind-body problem requires cognitive scientists to do the same.34
Mainstream scientists in the twentieth century generally embraced the doctrine of neo-Darwinism to explain the central features of human existence. Most have rejected the existence of all kinds of psychic phenomena, concluding that there is a complete absence of compelling empirical evidence. Those who adopt this position commonly regard themselves as “skeptics” and dismiss anyone who rejects their conclusions as ignorant, delusional, or simply gullible.
Critics of scientific materialism have countered that scientific research has failed to produce any testable hypotheses regarding the origins of life or of consciousness, and that there is no consensually accepted scientific explanation for the unique characteristics of the human mind. The critical issues highlighted by Alfred Wallace have not been explained by twentieth-century science. To escape the ideological rut of scientific materialism, we may employ other systems of rational and experiential inquiry into the nature of consciousness, the human mind, and their relation to the natural world as a whole; Buddhism is one such system.
BUDDHIST VIEWS ON THE ORIGINS OF HUMAN EXISTENCE
In stark contrast with the scientific theory of evolution, the Buddha declared that human beings, rather than having evolved from more primitive organisms, have devolved from luminous, incorporeal, genderless beings that were sustained by bliss, without needing to partake of food or drink. Over the course of eons, due to the increase of craving and other mental afflictions, these beings gradually degenerated morally and physically, devolving to manifest our present human condition.35 The Buddhist texts contain no account of anything like natural selection or a Darwinian theory of evolution.
Modern Buddhists may be tempted to conclude that the Buddha simply adopted this account of the origins of human existence from beliefs that were current in India in his day. But this explanation is not warranted by the historical record. In a number of discourses, the Buddha expressly refuted various cosmological theories that were popular at the time, and he generally discouraged such speculations as being irrelevant and impractical, for they concern issues that transcend the limitations of the conceptual mind and cannot be explained within the limits of a single lifetime.36
This does not mean that the Buddha never addressed cosmological issues at all; in fact, over the past two millennia, the Buddhist tradition has produced multiple theories about the origins of the universe and life in our world, and there are significant differences among these accounts.37 This naturally leads us to ask: Which, if any, of these theories corresponds to objective reality? This is an important question for Buddhists who are metaphysical realists, believing that there is a single, real, objective universe that exists prior to and independent of all conceptual frameworks. However, according to the Middle Way philosophy, regarded by many as the pinnacle of Buddhist thought, no description of the universe is a simple representation of objective reality. Rather, all the stories we tell ourselves about the origins, nature, and future of the universe are at best true relative to a conceptual framework. The only universal, invariant truth is that all phenomena are empty of an inherent nature of their own, independent of all conceptual frameworks.38
If this is so, then no theory about the origins and evolution of life on earth is absolutely correct. Before Darwin set out on his expedition around the world and Wallace began his research in the Amazon and Malaysia, they both believed in evolution and were searching for empirical evidence to support and clarify their views. The theory of evolution thus arose in response to the evidence they found, within this conceptual framework. According to the Middle Way perspective, their theory is true relative to the questions they were posing and the evidence they discovered, but it is not a unique or absolutely true account of life on earth. The Buddhist view echoes physicist Anton Zeilinger’s claim that any statements about an objective universe existing independently of what can be said about it are meaningless.39
BUDDHIST VIEWS ON THE ORIGINS OF CONSCIOUSNESS AND THE HUMAN MIND
Regarding the origins of life, the Buddha stated that some primitive life forms emerge from inorganic matter combined with heat. However, the whole Buddhist tradition agrees with the view of Alfred Wallace that consciousness never emerges from matter, however complex its organization. Just as emergent properties of matter arise from primitive material constituents in complex configurations, so do complex configurations of consciousness emerge from a primitive continuum of awareness, which was known in early Buddhism as the ground of becoming (Pali bhavanga). This bears a strong resemblance to what later schools of Buddhism referred to as the substrate consciousness (Skt. alaya-vijñana).
Theravadin Buddhist commentators identify this ground of becoming with a luminous continuum of consciousness that manifests in dreamless sleep and at death.40 During the waking state, ordinary consciousness illuminates all appearances, sensory and mental, but this can be diminished or extinguished due to damage to the physical senses or the brain. However, the innate luminosity of the ground of becoming remains unchanged, even if it is obscured by mental or physical influences.41
This subtle continuum of awareness is conjoined with the egg and sperm at conception, and it gradually takes on the distinctive characteristics of a specific human psyche as it is conditioned and structured by a wide range of physiological and cultural influences. Since the substrate consciousness carries innumerable imprints from past-life experiences, it also influences human physical and mental development during gestation and throughout life. Specifically, there are mental imprints derived from one’s conduct in past lives that lead to rebirth as a human, as opposed to an animal or another being. Since all sentient beings are believed to have imprints from past lives as animals and other forms of life, our distinctive qualities as humans are latent in all other species as well. Moreover, human beings carry imprints for rebirth in all realms, from infernal to celestial, which accounts in part for our capacity to engage in all kinds of conduct, from diabolical to sublime. This theory of karma is presented in great detail in many Buddhist treatises.42
The ancient commentaries describe an intermingling of physical and nonphysical influences from past lives, from the genes of a particular organism, and from the environment. Buddhist tradition has never confined its understanding of the natural world to a particular human conceptual construct, such as mind or matter, and has always acknowledged that natural causality includes both physical and nonphysical influences. It has never adopted a thoroughly mechanistic view of the universe, which has severely limited the scientific imagination over the past few centuries.
According to the Buddha, three things are necessary for the emergence of a human psyche and the formation of a human embryo: ovulation on the part of the mother, the parents’ sexual intercourse, and the presence of a being in the intermediate state (between lives) who has the karma to be reborn to specific parents at a particular time.43 Such beings in the intermediate state are certainly influenced by their karma, or actions in their past lives, but their own inclinations also influence the selection of the parents to whom they will be reborn. The general principle in Buddhism is that sentient beings are driven by desire to take rebirth, and when desire is accompanied by craving, hostility, and delusion, the results are painful.
In describing the capability to see the subtle connections between the actions of beings and the results in this life and future lives, the Buddha said it was like having the perspective of a man with good eyesight who stands in a lofty building above a crossroads. He might readily see “people entering or leaving a house, walking in the street, or sitting in the middle of the crossroads.”44 Similarly, the Buddha was able to clearly see the results of good and bad conduct of body, speech, and mind leading to good and bad outcomes in an endless cycle of rebirths, or cyclic existence (Skt. samsara).
The Buddha characterized the intermediate state between death and one’s next embodiment as a three-phased period of “wandering and wavering” and “coming and going,” during which beings are “seeking to be.” The first phase consists of leaving the body with a desire for a further rebirth, like a man leaving a house, or like a metal fragment flying from a hot, beaten piece of iron. The second phase is one of wandering back and forth seeking a rebirth, like a man wandering on a road or between houses, or like a hot iron fragment that flies up in the air. The third phase entails falling from one’s previous state into a new rebirth, like a man settling down in a square or entering a house, or like a hot iron fragment falling and penetrating the earth.45 The early Buddhist schools differed in their descriptions of the karmic linkage between death and rebirth, but all agreed on the lack of an independent self or immutable soul that reincarnates in one body after another.46
Although the ground of becoming, or substrate consciousness, is immaterial and nonphysical, some Buddhist schools propose that it is indivisible from a subtle continuum of energy, sometimes called a life force (Skt. jiva). One early Buddhist account rejects the possibility that such a life force could be measured objectively, although it doesn’t reject the possibility that this force could be detected in terms of one’s own subjective experience, like consciousness itself.47 Other early Buddhist sources refer to a mind-made body that survives death and has form, including limbs and parts. Even while one is alive, it can be meditatively drawn forth from and return to the coarse physical body. The life force is partly dependent upon the body, but can leave it by means of the mind-made body, which occupies space but does not impinge upon matter. The coarse mind, in contrast, arises together with the formation of the fetus and is dependent upon the physical body.48 According to Tibetan Buddhism, the subtle continuum of consciousness that carries on from one life to the next is conjoined with a subtle continuum of physical energy that is considered to be the actual repository of memories, mental traits, behavioral patterns, and even physical marks from one life to the next.49 This is remarkably similar to the “psychophore” proposed by Ian Stevenson to explain the transference of physical marks from a former to a later life.
This union of the substrate consciousness and the life force is therefore prior to and more fundamental than the ordinary conceptual categories of mind and matter. So this hypothesis rejects both Cartesian dualism and materialistic monism. Moreover, it may be put to the test of experience, regardless of one’s ideological commitments and theoretical assumptions.50
As noted earlier, Alfred Wallace also looked to the evidence for psychic phenomena to explain the origins of consciousness and the human mind. The existence of extrasensory perception and other paranormal abilities has long been accepted on experiential grounds in the Buddhist contemplative tradition and many others. Moreover, practical instructions on how to develop a wide range of such abilities are presented in various Buddhist meditation manuals, which have allegedly been validated, including in recent times.51
BUDDHIST VIEWS ON HUMAN NATURE
According to Buddhist tradition, one of the most important features that distinguish human from animal existence is our ability to differentiate hedonic pleasures from genuine happiness. The former refers to those pleasures we experience in response to pleasurable stimuli, both sensory and mental, while the latter refers to the well-being experienced as a result of leading ethical lives, cultivating mental balance, and gaining experiential insight into the nature of our own existence and of the world around us. In short, hedonic pleasure is something we seem to get from the world, whereas genuine happiness results from what we bring to the world.
In making this distinction, we humans can examine our own experience to see which forms of conduct of body, speech, and mind lead to our own and others’ genuine happiness and which forms of conduct undermine it. Ethical conduct provides us with the possibility of social and environmental flourishing, enabling us to live in harmony with each other and the natural world at large. Ethical values may at times supersede our impulses for survival and procreation.
A second, uniquely human dimension of human flourishing is the genuine happiness that stems from the cultivation of exceptional mental health and balance. In part II, we will examine four kinds of mental balance: conative, attentional, cognitive, and affective. Buddhism claims that by making full use of our metacognitive abilities to monitor our mental processes and states of consciousness, we can experience very high degrees of psychological flourishing that persist both in solitude and in a socially active way of life. In terms of our attitude toward others, one of humankind’s distinctive potentials is the deliberate cultivation of a literally boundless sense of loving-kindness and compassion for all sentient beings, both human and nonhuman. Moreover, our capacity for cultivating and refining metacognitive awareness also allows us the possibility to exercise free will in terms of making wise choices about which mental impulses to act upon and which to release.
The pinnacle of genuine happiness is a spiritual flourishing that is experienced as a result of gaining insight into fundamental aspects of reality and thereby dispelling suffering’s fundamental cause, which the Buddha identified as ignorance. Such understanding, or wisdom, is traditionally gained initially through the use of human faculties of communication, by which we share in the knowledge of past generations. Language skills are thus essential to the transmission of such knowledge. A second level of understanding employs our distinctively human capacities for rational thought, memory, and imagination, including the ability to envision future consequences of current events—including our own mortality. This intelligence far exceeds the needs of individual survival and procreation, but it is essential for realizing genuine happiness. The Mahayana tradition of Buddhism calls the ideal of the cultivation of intelligence the “perfection of wisdom,” in which we fully realize our deepest potential as sentient beings.
The development of mental balance and its resultant psychological flourishing focuses on training the distinctively human mind, or psyche. In developing deep states of samadhi, the psyche is deliberately dissolved into the substrate consciousness, thus transcending the unique characteristics of the human mind. Through the cultivation of contemplative insight, or vipashyana, one transcends even the substrate consciousness and breaks through to the ultimate dimension of awareness, which transcends all concepts, including individuality, time, and space.
In the earliest records of the Buddha’s teachings, he spoke of an ultimate state of awareness experienced by those who realize nirvana, which he called “consciousness without characteristics,” for it is undetectable by all ordinary states of perception. It can be nondually known only by itself, and not by any other means of observation; it cannot be accurately conceived by anyone who has not experienced it. It persists even after one who has achieved nirvana has died, and this unconditioned, timeless dimension of consciousness is imbued with immutable bliss. In this inconceivable, timeless, radiant state of awareness, one is completely freed from physical embodiment. The ordinary mind and body have been transcended, and they vanish without a trace.52 The Buddha declared that such consciousness is “unsupported,” for it has no physical basis; he likened it to a ray of sunlight that never contacts a physical object and so does not “alight” anywhere.53
CONCLUDING REFLECTIONS
During the past century of domination by the principles of materialism, scientific inquiry has exhibited an ongoing tendency toward what psychologists call “confirmation bias,” which occurs when we seek and find confirmatory evidence for what we already believe, while ignoring contrary evidence.54 Research into evidence that is compatible with materialism has been well funded and receives great attention in the media, whereas research into phenomena that do not easily lend themselves to materialistic explanations has been underfunded and is generally ignored or marginalized by the media.
Science, since it is conducted by human beings, is always firmly embedded in human culture, which shapes scientific methods and findings, which in turn reinforce the culture’s dominant values in a feedback loop. Twentieth-century scientific education was dominated by the unquestioned principles of scientific materialism, and scholarship has become increasingly specialized; this narrows students’ fields of interest and deprives them of exposure to methods and theories of disciplines outside their focused interests. The Buddhist education that is offered in traditional monasteries, modern Buddhist centers, and Buddhist studies programs in secular academic institutions also tends to be relatively narrow in scope.
Ernst Mayr (1904–2005), one of the leading evolutionary biologists of the twentieth century, commented that one “characteristic of successful scientists is flexibility—a willingness to abandon a theory or assumption when the evidence indicates that it is not valid. . . . Almost all great scientists . . . have a considerable breadth of interest. They are able to make use of concepts, facts, and ideas of adjacent fields in the elaboration of theories in their own fields. They make good use of analogies and favor comparative studies.”55 To educate successful biologists, cognitive scientists, and physicists and empower them with flexible, open minds, exposure to the theories and methods of inquiry found in Buddhism may prove to be of great value. Likewise, the education of successful Buddhists, freeing them from unconscious confirmation biases, may be enhanced by including training in modern scientific and philosophical ideas along with methods for exploring Buddhism’s central issues. Such multidisciplinary, cross-cultural education can expand the horizons of scientifically and spiritually minded people alike, and may lead to unprecedented collaboration between science and Buddhism.