THE IMPENDING FOURTH TRANSITION ALONG WITH THE FUTURE PROSPECTS OF SCIENCE
Having described the third revolution, the challenge now is to assess what physics portends for the future. In the last decades of the twentieth century the main problems confronting physicists were the unification of the weak and electromagnetic forces, forging a more definitive model of the atom, and confronting the revisions in physics owing to the discovery of the strange quantum of energy. Along with other outstanding contributors, the two most influential physicists of the century, Einstein and Bohr, offered two contrasting worldviews as solutions.
Despite being the originator of the idea that light and matter consist of swarms of quanta, the very idea that raised all the perplexing questions as to whether and how they compose physical reality, Einstein was adamant in his defense of an eventual interpretation based on the discoveries of the underlying objective properties of an independently existing physical world. As he wrote: “The belief in an external world independent of the perceiving subject is the basis of all natural science.”122
In contrast, Bohr defended the view that, given the impossibility of observing quanta directly owing to their infinitesimal size, it was necessary to rely on measurements based on the interactions of the measuring instruments with the quanta, despite realizing that as a result the measured data were not inherent properties of independently existing particles, as normally believed, but “complementary” because they could not exist separately. It also meant that the measurements were uncertain and probabilistic, and that the quantum itself did not exist until measured! As Bohr stated: “There is no quantum world. There is only an abstract quantum mechanical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature.”123
This limitation to the measurable data led to what has been called the “Copenhagen Interpretation” of quantum mechanics, named after Bohr’s institute, whose extremely brilliant visiting students and scholars, usually with his guidance, played the leading role in the development of quantum mechanics. The interpretation was that since all the quantum evidence was based on and limited to the mathematical measurements, the resultant experiential phenomena composing the theory consisted solely of the system of measurements, without representing or depending on any independent underlying physical reality as its cause! This in turn led to fantastic interpretations by devotees of psychedelics, Eastern mystics, and Christians who claimed that it was the direct word of God.
Despite its having no empirical content, it nonetheless helped create many of the outstanding technological innovations of the twentieth and twenty-first centuries, such as radios, televisions, computers, mobile phones, iPods, and quantum cryptography, leading most physicists to accept it despite its obscurity and tenuousness. Although Bohr’s Copenhagen Interpretation claimed that there was no possibility of a credible theoretical interpretation, several new theories were introduced, such as a “Cosmic Multiverse,” an “Inflationary Universe,” “Parallel Worlds,” and “String Theory.” The first three present different explanations of the inflationary universe, or of the view that in addition to our known universe there are multiple universes, each with its own unique laws. Since it is not practical to discuss and evaluate all four theories, I will restrict my discussion to String Theory, which is perhaps the best known. It has been the subject of intensive research for about four decades and yet still has not produced any supporting experimental evidence, even though nearly all the predicted (though probabilistic) experimental evidence for quantum mechanics, as bewildering as it is, has been confirmed, which is the reason so many physicists accept it as final.
Yet confirmations of other new theories continue to be announced, sustaining a certain confidence in the continuing progress of physics. The Washington Post (July 5, 2012) published an article subtitled, “Subatomic Particle Is Breakthrough in Understanding [the] Basics of the Universe” declaring that “scientists in Europe [announced] that they’d found the Higgs boson or something remarkably Higgs-like, [that] was a stunning triumph of both theory and experiment” (brackets added).124 The particle was so crucial that Leon Lederman called it the “God particle,” and its discovery was hailed in the article as “a tremendous breakthrough with enormous explanatory significance.”
Yet the particle itself has never been directly detected; the evidence for its existence was inferred from the confirmation of what had been mathematically predicted, characteristic of quantum mechanical inquiries. As the authors state:
Regardless of whether it’s the Higgs or a Higgs imposter, it’s a very real particle, and newly known to science, and apparently fundamental to the texture of the universe. . . . It was also the most important, because it is thought to give rise to the “Higgs field,” a sort of force field that permeates everything. (pp. A1, A2)
Whether this is true or not is one of the crucial questions facing physicists today. Still, many have retained their confidence in their ability to solve such problems, as implied in the statement by Michael S. Turner, a professor of physics at the University of Chicago, at the end of the Higgs article: “Okay, the particle physicists got their Number 1 wish—the Higgs. Now we cosmologists want ours—the dark-matter particles.”
Although it is undeniable that extraordinary progress has been made since the seventeenth century in understanding those domains of the universe to which we have attained access, is it realistic to assume, given the duration, extent, and complexity of the universe, especially in contrast to the finitude of human existence, that we are capable of attaining a final or complete understanding of the cosmos, as most physicists once aspired to and many still do? A number of diverse views regarding the future prospects of physics and cosmology have been published, such as The Shaky Game: Einstein, Realism and the Quantum Theory, by Arthur Fine (1986); The End of Physics: The Myth of a Unified Theory, by David Lindley (1993); Dreams of a Final Theory, by Steven Weinberg (1993); The End of Science: Facing the Limits of Knowledge in the Twilight of the Scientific Age, by John Horgan (1996); The Closing of the Western Mind: The Rise of Faith and the Fall of Reason, by Charles Freeman (2003); The Beginning of Infinity: Explanations that Transform the World, by David Deutsch (2011); and Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100, by Michio Kaku (2012).
However, as discussed in the previous chapter, although the existence of the observed world, along with the subatomic realm and especially the even more basic quantum domain that many scientists had accepted as the “standard model,” is now believed to depend solely upon the instruments of detection and the type of mathematics used, this does not necessarily mean that they have no “objective” or “real” status independently of the method of inquiry. However bewildering quantum mechanics seems to be, it represents the only credible view of physical reality at that level of inquiry, but not the final truth.
How could the Periodic Table have been organized; contagious epidemics eliminated by inoculations; the telescope at the Mount Wilson Observatory built, with which Hubble disclosed the existence of additional galaxies and the expanding universe; or the later Hubble Space Telescope, followed by the James Webb Space Telescope, launched, revealing trillions of stars and exoplanets in about 125 billion galaxies and enabling astronomers eventually to examine the exoplanets more closely to determine whether intelligent beings exist on those other planets, if nothing we have learned about the universe were true? In addition, the new technology includes nuclear reactors; computers and artificial intelligence; an enormous particle collider constructed to simulate the early physical conditions of the big bang; and the decoding of the human genome. Any prognosis of the future of science must acknowledge our past and prospective achievements, even if attaining a conclusive, all-encompassing final theory may be beyond our reach.
Since quantum mechanics especially has risen in significance in scientific inquiry, with an increasing reliance on a mathematics with inherent uncertainties and probabilities, the traditional definiteness, firmness, and pictorial nature of theories and explanations has been reduced, making them more conditional, contingent, and obscure. It is this that has led to the disbelief expressed by David Lindley and also by Gregory Chaitin, who, as quoted by John Horgan in The End of Science, denounced real numbers as “nonsense,” declaring that “Physicists know that every equation is a lie . . . .”125 But surely this is not true!
How could terrestrial motions be predicted with accuracy if Newton’s formula F = ma were a lie; how could the atomic and hydrogen bombs have been constructed if Lise Meitner’s explanation of uranium fission or Einstein’s equation E = mc² were false; or the rover Curiosity have landed successfully on Mars if all the precise calculations were erroneous? Like Kant’s assertion that to know the independent “noumenal world” we would have to know “things as they are in themselves,” which he declared impossible, Lindley also denied that we have knowledge of “an objective, real world,” in opposition to Einstein’s assertion, quoted previously, that “[the] belief in an external world independent of the perceiving subject is the basis of all natural science.”
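The two formulas just cited are simple enough to check numerically. A minimal sketch of both (the mass and acceleration values are illustrative, not taken from the text):

```python
# Newton's second law, F = m * a, and Einstein's mass-energy relation, E = m * c**2.
# Illustrative inputs: a 1,000 kg object accelerating at roughly Earth's surface
# gravity, and one gram of matter fully converted to energy.

c = 2.998e8          # speed of light, in meters per second

m_obj = 1000.0       # mass of the object, in kg
a = 9.8              # acceleration, in m/s^2
F = m_obj * a        # force, in newtons
print(f"F = {F:.1f} N")      # 9800.0 N

m_conv = 0.001       # one gram of matter, in kg
E = m_conv * c**2    # energy released, in joules
print(f"E = {E:.3e} J")      # about 9.0e13 J, an enormous yield from a tiny mass
```

The last line is the point of the bomb example: even a gram of converted mass corresponds to tens of terajoules of energy.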
It seems to me that there is considerable evidence that Einstein’s assertion has been vindicated, as in our theory of electromagnetism; light waves and photons; particles such as the electron, proton, neutron, and positron; the strong and weak nuclear forces; gluons, gravitons, and quarks; the structure of biological cells; evolutionary theory; the discovery of the double-helical structure of DNA; and the Big Bang theory of the universe. Among the things we still do not understand is what caused the massive concentration of energy known as the “big bang” or what preceded it. Was it just an offshoot of multiple universes, perhaps having inherently different laws? If the universe, as some claim, is composed of 23% dark matter (including black holes), about which little is known, and 72% equally mysterious dark energy (considered the expansive force that counteracts gravity, causing distant galaxies to recede at an accelerating rate), totaling 95%, that leaves only 5% of the universe of which we have any understanding on which to base a definitive conclusion.
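The arithmetic of that cosmic budget can be tallied directly, using the percentages cited above:

```python
# The cosmic energy budget cited in the text: dark matter plus dark energy
# leave only a small remainder of ordinary, understood matter.
dark_matter = 0.23
dark_energy = 0.72
dark_total = dark_matter + dark_energy
ordinary = 1.0 - dark_total

print(f"dark total: {dark_total:.0%}")  # 95%
print(f"ordinary:   {ordinary:.0%}")    # 5%
```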
Yet as is usual in scientific inquiry, there now is an attempt to discover the nature of the dark matter believed to surround every galaxy, surmising that it is composed of subatomic particles called WIMPs, standing for “weakly interacting massive particles.” As described in an article in the Washington Post:
The idea is to hang densely packed strands of DNA — quadrillions per layer — from thin sheets of gold foil. When a dark matter particle smacks into a gold atom, it would knock the nucleus through the DNA, shearing strands as it goes. Researchers could figure out the path the particle traveled by seeing where the strands were cut.126
Katherine Freese, one of the theoretical physicists involved in the investigation, said that “if the detector works, finding evidence of just 30 WIMPs will be enough to prove that these elusive particles do, in fact, exist. . . . An 80-year cosmic mystery solved” (E-5). Yet some cosmologists, hoping to achieve a greater unification along with a simpler and more elegant theory, perhaps compressible into a single equation as Einstein aspired to, have created string or superstring theory as the solution. Michio Kaku’s own involvement in working out the theory is evident: his dissertation topic was on problems in the string theory introduced in 1968 by Gabriele Veneziano and Mahiko Suzuki, and his collaboration with Keiji Kikkawa at Osaka University enabled him to “successfully extract” from “the field theory of strings . . . an equation barely an inch and a half long” that “summarized all the information contained within string theory.”127
Attempting to explain all the diverse particles and forces of recent physics in terms of a deeper level of reality, they have returned, ironically, to the ancient Greek philosophy of Pythagoras, who conceived of the universe as a musical harmony. As Kaku states:
If string theory is correct . . . the link between music and science was forged as early as the fifth century B.C., when the Greek Pythagoreans discovered the laws of harmony and reduced them to mathematics. They found that the tone of a plucked lyre string corresponded to its length. If one doubled the length of a lyre string, then the note went down by a full octave. If the length of a string was reduced by two-thirds, then the tone changed by a fifth. Hence, the laws of music and harmony could be reduced to precise relations between numbers. . . . Originally, they were so pleased with this result that they dared to apply these laws of harmony to the entire universe. Their effort failed because of the enormous complexity of matter. However, in some sense, with string theory, physicists are going back to the Pythagorean dream. (p. 198)
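The Pythagorean relations Kaku describes follow from the fact that a string’s pitch (frequency) varies inversely with its length. A short illustration (the 440 Hz reference tone is an assumption for the example, not from the text):

```python
# Pythagorean harmony: frequency is inversely proportional to string length.
def frequency(length_ratio, base_freq=440.0):
    """Pitch of a string shortened to length_ratio of its full length."""
    return base_freq / length_ratio

print(frequency(1.0))    # 440.0 Hz: the full-length reference tone
print(frequency(2.0))    # 220.0 Hz: doubled length, an octave lower
print(frequency(0.5))    # 880.0 Hz: half the length, an octave higher
print(frequency(2 / 3))  # 660.0 Hz: two-thirds the length, a perfect fifth up (3:2)
```

These whole-number ratios (2:1 for the octave, 3:2 for the fifth) are precisely the “laws of harmony . . . reduced to mathematics” in the passage above.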
As there is no point in my trying to paraphrase Kaku’s excellent description of string theory, I will quote his own account.
According to string theory, if you had a supermicroscope and could peer into the heart of an electron, you would see not a point particle but a vibrating string. (The string is extremely tiny, at the Planck length of 10^-33 cm, a billion billion times smaller than a proton, so all subatomic particles appear pointlike.) If we were to pluck this string, the vibration would change; the electron might turn into a neutrino. Pluck it again and it might turn into a quark. In fact, if you plucked it hard enough, it could turn into any of the known subatomic particles. In this way, string theory can effortlessly explain why there are so many subatomic particles. They are nothing but different “notes” that one can play on a superstring. . . . The “harmonies” of the strings are the laws of physics. (pp. 196–97)
Moreover, he believes the theory also can explain their interactions and most of the problems of theoretical physics.
Strings can interact by splitting and rejoining, thus creating the interactions we see among electrons and protons in atoms. In this way, through string theory, we can reproduce all the laws of atomic and nuclear physics. The “melodies” that can be written on strings correspond to the laws of chemistry. The universe can now be viewed as a vast symphony of strings. (p. 197)
He goes on to show how string field theory can explain Einstein’s special and general theories of relativity and even provide a possible explanation of the “riddle of dark matter” and “black holes.” But while he asserts that string theory can “effortlessly explain” the creation and interaction of the basic physical particles and many other puzzles confronting physics, one can understand why none of it has ever been confirmed. To me it reads like scripture in which declarations are presented with a kind of doctrinal authority, but based on mathematics rather than revelation.
Not only does it seem unlikely that minuscule vibrating strings can be the source of the universe; the theory also requires a multidimensional hyperspace for their existence.
Only in ten- or eleven-dimensional hyperspace do we have “enough room” to unify all the forces of nature in a single elegant theory. Such a fabulous theory would be able to answer the eternal questions: What happened before the beginning? Can time be reversed? Can dimensional gateways take us across the universe? (Although its critics correctly point out that testing this theory is beyond our present experimental ability, there are a number of experiments currently being planned that may change this situation. . . .) (p. 185; italics added)
He goes on to discuss refinements in the theory, such as “supersymmetry,” “M-theory,” “heterotic string theory,” and the “Brane World,” along with the possible experiments being planned to confirm it, though no supporting empirical evidence that I know of has been announced since 2005 when Kaku’s book was published.
Yet despite his somewhat optimistic assessment of the theory, he seems to have the same reservations about it that I have expressed, as the following quotation indicates. Recalling Pauli’s version of the unified field theory that he developed with Werner Heisenberg, which Niels Bohr described as “crazy,” but not “crazy enough,” Kaku states:
One theory that clearly is “crazy enough” to be the unified field theory is string theory, or M-theory. String theory has perhaps the most bizarre history in the annals of physics. It was discovered quite by accident, applied to the wrong problem, relegated to obscurity, and suddenly resurrected as a theory of everything. And in the final analysis, because it is impossible to make small adjustments without destroying the theory, it will either be a “theory of everything” or a “theory of nothing.” (pp. 187–88)
Given how much is still unknown or conjectured about the universe, how likely is it that we are close to a “final theory of everything” that would resemble string theory, or that such a theory is even attainable? I have recently read Marcelo Gleiser’s work titled The Island of Knowledge, published in 2014 after I had written my book, so I did not have the benefit of reading his exceedingly informed and, in my opinion, correct interpretation of the current controversy in physics as to whether quantum mechanics represents a realistic and final account of physical reality. His conclusion is that “Unless you are intellectually numb, you can’t escape the awe-inspiring feeling that the essence of reality is unknowable” (p. 193), although there is no sounder method of inquiry now than science. While one can concede that quantum mechanics is in a sense correct, in that it mainly agrees with the current experimental evidence, this does not mean that it is true and is thus a final theory of reality. I strongly recommend reading Gleiser’s book if one is interested in the prospects of quantum mechanics.
In addition to the question of whether it is presumptuous or realistic to suppose that finite creatures living in this infinitesimal speck and moment of the universe will ever arrive at a final theory, there is the additional problem of whether we can afford the tremendous costs of further research. The discovery of the Higgs or Higgs-like boson cost 10 billion dollars, involved 6,000 researchers, and required the creation of a 17-mile circular tunnel under the border of France and Switzerland filled with thousands of magnets. The international fusion mega-project now under construction in southern France is estimated to cost 23 billion dollars, and its completion is projected to take a decade. Even continuing research on whether WIMPs exist will ultimately depend upon the costs, as well as the experimental and theoretical ingenuity.
As an example of how difficult it has become to finance such projects, in 1993 the US Congress discontinued the financing of the Superconducting Super Collider, on the grounds that the cost of completing the project was too great, after 2 billion dollars had already been spent digging a tunnel 15 miles long in Texas in which to house it. Given the current economy, President Obama has requested a budget cut for our fusion research of 16%, to 248 million dollars, a foreboding sign of the future.
I am not suggesting that funding scientific research has not paid off; one need only consider all the technological, economic, social, medical, and intellectual benefits derived from scientific inquiry to acknowledge the contrary. Everything we know about the universe and human existence, and all the economic, educational, social, and medical improvements and advances in our standard of living, we owe entirely to the genius and dedication of scientists. But one can’t help wondering whether the cost of delving further into the universe will at some point overreach our financial assets and/or capacities. Thus, though I admire most of what he says in his very stimulating book, I question Alex Rosenberg’s confident assertion that “Physics is causally closed and causally complete. The only causes in the universe are physical. . . . In fact, we can go further and confidently assert that the physical facts fix all the facts.”128 If true, this would confirm Einstein’s worldview. I wish I could be so confident.
Having expressed my reservations that attaining a final theory of the universe is within reach or even possible, I will conclude this study of the major transitions in our conceptions of reality and way of life by citing the amazing scientific and technological advances that are predicted to take place by the end of this century or the next based on the knowledge already attained or anticipated. This, fortunately, has also been comprehensively described by Michio Kaku in his prophetic book previously cited, Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100. As described on the back cover of the book:
Renowned theoretical physicist Michio Kaku details the developments in computer technology, artificial intelligence, medicine, and space travel that are poised to happen over the next hundred years . . . interview[ing] three hundred of the world’s top scientists—already working in their labs on astonishing prototypes. He also takes into account the rigorous scientific principles that regulate how quickly, how safely, and how far technologies can advance . . . forecast[ing] a century of earthshaking advances in [science and] technology that could make even the last centuries’ leaps and bounds seem insignificant.129 (brackets added)
An unexpected and exceptional added attraction of the book is his occasional indication of how the extraordinary modern scientific and technological achievements have often replicated the divine exploits attributed to the gods in ancient mythologies and current religions, such as creating miraculous cures and performing marvelous feats like conferring on humans supernatural powers and eternal life.
The challenge is to present as briefly, clearly, and objectively as possible the range of these incredible developments, greater than those of the Industrial Revolution, that are predicted to radically change the conditions and nature of human existence in this century or the next, and to try to discriminate between the fanciful and the possible, along with the beneficial and harmful outcomes. According to Kaku, one of the basic factors driving this process is the rapidity of the development of computers and how this has altered our lives owing to what is called Moore’s law.
The driving source behind . . . [these] prophetic dreams is something called Moore’s law, a rule of thumb that has driven the computer industry for fifty or more years, setting the pace for modern civilization like clockwork. Moore’s law simply says that computer power doubles about every eighteen months. First stated in 1965 by Gordon Moore . . . this simple law has helped to revolutionize the world economy, generated fabulous new wealth, and irreversibly altered our way of life. (p. 22; brackets added)
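The doubling rule quoted above compounds quickly. A back-of-the-envelope projection (the time spans chosen are illustrative):

```python
# Moore's law as quoted: computer power doubles about every eighteen months.
def growth_factor(years, doubling_months=18):
    """How many times more powerful computers become over the given span."""
    return 2 ** (years * 12 / doubling_months)

print(f"{growth_factor(3):.0f}x after 3 years")    # 4x (two doublings)
print(f"{growth_factor(15):.0f}x after 15 years")  # 1024x (ten doublings)
```

Ten doublings in fifteen years means a thousandfold increase, which is why Kaku treats the law as the pacesetter for the predictions that follow.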
The technological developments that were most instrumental in creating the computer revolution apparently were the following: (1) by relying on electrical circuits, computers could operate at close to the speed of light, permitting nearly instantaneous transmissions and communication with the rest of the world; (2) these electrical conductions were further enhanced by the development of miniaturized transistors or switches; and (3) the creation of the computer chip, a silicon wafer the size of one’s fingernail that can be etched with millions of tiny transistors to form integrated units, made it possible to carry out almost instantaneously enormously intricate calculations that otherwise would have taken years, decades, or even centuries.
Turning now to Kaku’s account of the various conceptions and predictions of the future developments that will be brought about by the computer revolution, the one I find the most startling and threatening is based on computerized artificial intelligence and the creation of robots that, in the most extreme case, could, it is predicted, replace human beings or convert them into computerized robots, as indicated in the title of the initial section of chapter 2 of his book, “The End of Humanity?” (p. 75).
As of now the most advanced robot is ASIMO, created in Japan, “that can walk, run, climb stairs, dance, and even serve coffee” and “is so lifelike that when it talked, I half expected the robot to take off its helmet and reveal the boy who was cleverly hidden inside” (p. 77). In addition, there “are also robot security guards patrolling buildings at night, robot guides, and robot factory workers. In 2006, it was estimated that there were 950,000 industrial robots and 3,540,000 service robots working in homes and buildings” (pp. 87–88). But while these are remarkable achievements, they are not indications that the robot has attained any control over, or initiates any of, its behavior. Everything ASIMO does has been preprogrammed, so that its actions are entirely beyond its control. It of course has no conscious awareness of its surroundings or any feelings, since every action it performs is computerized. In some cases it is controlled by a person who directs its actions from the images on a computer thousands of miles away, similar to controlling a drone.
More remarkable was the event in 1997 when “IBM’s Deep Blue accomplished a historic breakthrough by decisively beating world chess champion Garry Kasparov. Deep Blue was an engineering marvel, computing 11 billion operations per second” (p. 80). Nonetheless, Deep Blue cannot take credit for the achievement, which has to be attributed to the intelligence of the gifted programmers who devised all the correct moves to beat Kasparov.
This fact was not lost on the artificial intelligence (AI) researchers, who then began attempting to “simulate” conscious awareness by installing object recognition, the expression of inner emotional states and feelings by facial expressions, and the initiation of intelligent actions. Thus, instead of the top-down approach of treating robots like digital computers with all the rules of intelligence preprogrammed from the very beginning, they began imitating the brain’s bottom-up approach. They tried to create an artificial neural network with the capacity to learn from experience, which would require conscious awareness of the environment, along with emotions and affective feelings that are the source of value judgments, such as whether things are beneficial or harmful.
In addition to attempting to replicate the learning process of human beings, they would have had to install such mental capacities as memory, conceptualizing, imagining, speaking, learning languages, and reasoning, all of which exceed merely following electronic rules. Given that the brain is an organ with unique neuronal and synaptic connections, composed of biomolecular components directed by numerous chemicals that produce a great deal of flexibility, the challenge of trying to duplicate this with just an electrical, digital network proved formidable.
Unlike a computer program, the brain has evolved into various areas representing evolutionary transitions, responsible for less or more advanced anatomical structures and functions. These include the reptilian area near the base of the brain that is the source of basic instincts, automatic bodily processes, and behavioral functions; the limbic system or midbrain, comprising the amygdala, hippocampus, and hypothalamus, which together are responsible for memory, emotions, and learning, including much of the hormonal activity of the more highly socialized mammals and primates; and the newest, most important convoluted gray matter called the cerebral cortex or cerebrum, divided into the frontal, parietal, temporal, and occipital lobes, which produces such human capacities as language acquisition, learning, reasoning, and creativity.
That Kaku is aware of these differences between computers and human capabilities is indicated in the following statement.
Given the glaring limitations of computers compared to the human brain, one can appreciate why computers have not been able to accomplish two key tasks that humans perform effortlessly: pattern recognition and common sense. These two problems have defied solution for the past half century. This is the main reason why we do not have robot maids, butlers, and secretaries. (pp. 82–83)
But, as he adds, programmers have been able to overcome these obstacles to some extent. One robot developed at MIT scored higher on object recognition tests than humans, even performing as well as or better than Kaku himself. Another robot, named STAIR, developed at Stanford University and still relying on the top-down approach, was able to pick out different kinds of fruit, such as an orange, from a mixed assortment, a task that seems simple enough to us yet is very difficult for robots because of its dependence on object recognition. Yet the best result was achieved at New York University, where a robot named LAGR was programmed to follow the human bottom-up approach, enabling it to identify objects in its path and gradually “learn” to avoid them with increased skill (cf. p. 86).
Furthermore, an MIT robot named KISMET was programmed to respond to people in a lifelike way, with facial expressions that mimicked a variety of emotions (which have now been programmed into dolls), yet “scientists have no illusion that the robot actually feels emotions” (p. 98). While programmers are striving to overcome these differences, they still have a long way to go, as Kaku indicates.
On one hand, I was impressed by the enthusiasm and energy of these researchers. In their hearts, they believe that they are laying the foundation for artificial intelligence, and that their work will one day impact society in ways we can only begin to understand. But from a distance, I could also appreciate how far they have to go. Even cockroaches can identify objects and learn to go around them. We are still at the stage where Mother Nature’s lowliest creatures can outsmart our most intelligent robots. (p. 87)
Apparently there are two major approaches to resolving this problem. As indicated previously, Kaku identified two crucial capacities that robots lack that prevent their simulating human behavior: pattern recognition and common sense, both of which require the conscious awareness that humans possess and computers and robots entirely lack. One way of solving the problem is to try to endow a computer or robot with consciousness, using a method called “reverse engineering of the human brain.” Instead of attempting to “simulate” the function of the brain with an artificial intelligence, it involves trying to reproduce human intelligence by replicating the neuronal structure of the brain neuron by neuron and then installing the result in a robot.
This new method, “called optogenetics, combines optics and genetics to unravel specific neural pathways in animals” (p. 101). Determining by optical means the neural pathways in the human brain presumably would enable optogeneticists not only to detect which neural pathways determine specific bodily and mental functions, but also to duplicate them. At Oxford University Gero Miesenböck and his colleagues
have been able to identify the neural mechanisms of animals in this way. They can study not only the pathways for the escape reflex in fruit flies but also the reflexes involved in smelling odors. They have studied the pathways governing food-seeking in roundworms. They have studied the neurons involved in decision making in mice. They found that while as few as two neurons were involved in triggering behaviors in fruit flies, almost 300 neurons were activated in mice for decision making. (p. 102)
But the problem is that identifying a neuron’s function is not the same as reproducing it. The intended purpose was to model the entire human brain using two different approaches. The first approach was to “simulate” the vast number of neurons and their interconnections in the brain of a mouse with a supercomputer named Blue Gene constructed by IBM. Computing “at the blinding speed of 500 trillion operations per second . . . Blue Gene was simulating the thinking process of a mouse brain, which has about 2 million neurons (compared to the 100 billion neurons that we have)” (p. 104). But the question remains whether simulating is equivalent to reproducing.
This success was rivaled by another group in Livermore, California, who built a more powerful version of Blue Gene called “Dawn.” At first, in 2006, it “was able to simulate 40 percent of a mouse’s brain. In 2007, it could simulate 100 percent of a rat’s brain (which contains 55 million neurons, much more than the mouse brain)” (p. 105). Then, progressing very rapidly, in 2009 it “succeeded in simulating 1 percent of the human cerebral cortex . . . containing 1.6 billion neurons with 9 trillion connections” (p. 105).
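The scale of these milestones relative to a full human brain can be checked with simple arithmetic. The following sketch uses only the neuron counts quoted above (including the figure of roughly 100 billion neurons for a human brain); the code and its labels are mine, added purely for illustration.

```python
# Back-of-envelope comparison of the simulation milestones quoted above.
# Neuron counts are those cited in the text; everything else is arithmetic.

HUMAN_NEURONS = 100e9  # ~100 billion neurons in a human brain, as quoted

milestones = {
    "mouse brain (Blue Gene)": 2e6,            # ~2 million neurons
    "rat brain (Dawn, 2007)": 55e6,            # ~55 million neurons
    "1% of human cortex (Dawn, 2009)": 1.6e9,  # ~1.6 billion neurons
}

for name, neurons in milestones.items():
    fraction = neurons / HUMAN_NEURONS
    print(f"{name}: {neurons:.2e} neurons = {fraction:.4%} of a human brain")
```

Even the largest of these runs, impressive as it was, amounted to under 2 percent of the human total, which underscores how far “simulation” remained from the whole brain, let alone from reproduction.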
Although this convinced optogeneticists that simulating the human brain was not only possible but inevitable, again the crucial question is whether “simulating” is equivalent to “reconstructing” or “reproducing”; it seems to me that the distinction has been overlooked and the two assumed to be the same. Significantly, in addition to meaning “imitating,” the term “simulate” has the additional adverse connotations of feigning, pretending, and faking.
The second approach, perhaps adopted to avoid the above problem, is called “reverse engineering of the brain,” and it confronted problems of even greater magnitude, since it consisted of dissecting the entire system of neurons in the brain into minuscule slices no more than 50 nanometers wide (a nanometer is one billionth of a meter) in order to examine each of them under an electron microscope to “reconstruct” their function. Illustrating the enormity of the task, after producing a million slices
a scanning electron microscope takes a photograph of each, with a speed and resolution approaching a billion pixels per second. The amount of data spewing from the electron microscope is staggering, about 1,000 trillion bytes of data, enough to fill a storage room just for a single fruit fly brain. Processing this data, by tediously reconstructing the 3-D wiring of every single neuron of the fly brain, would take about five years. To get a more accurate picture of the fly brain, you then have to slice many more fly brains. (p. 107; italics added)
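The data figures in the passage just quoted can be put in perspective with a rough calculation. This is a minimal sketch, assuming one byte per pixel (an assumption of mine, not stated in the text) in order to convert the quoted scan rate into a data rate:

```python
# Scale of the slice-and-scan data described above, using the quoted figures.
# Assumption (mine, not in the text): one byte per pixel, so the quoted scan
# rate of ~1 billion pixels/second is treated as ~1 GB/second.

bytes_total = 1_000e12  # ~1,000 trillion bytes for a single fruit-fly brain
scan_rate = 1e9         # ~1 billion pixels/second, assumed one byte each

scan_seconds = bytes_total / scan_rate
scan_days = scan_seconds / 86_400  # seconds per day
print(f"Raw scanning alone: about {scan_days:.0f} days per fly brain")
```

On these assumptions the scanning itself takes on the order of days; the five-year figure quoted above reflects the far harder task of reconstructing the 3-D wiring of every neuron from that data.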
Although “the human brain has 1 billion neurons more than the fruit fly,” it was nevertheless assumed
that sometime by mid-century, we will have both the computer power to simulate the human brain and also crude maps of the brain’s neural architecture. But it may take until late in this century before we fully understand human thought or can create a machine that can duplicate the function of the human brain. (p. 108; italics added)
Here the distinction between ‘simulate’ and ‘duplicate’ seems to be recognized but its implications not considered. Moreover, since we still do not “understand” how the chemical-electrical neural processes of the human brain produce human awareness, perception, memory, emotions, thought, etc., it is questionable whether “constructing” the brain as a wholly electronic computer would actually create a “duplicate” of the brain that could function as the original brain.
Among those creating robots there is a consensus that, although in an entirely different way, they can be programmed to “exceed us in intelligence.” There is considerable disagreement as to how long this will take, but not that it can be done. According to Kaku a “large part of the problem with these scenarios is that there is no universal consensus as to the meaning of consciousness. . . . Nowhere in science have so many devoted so much to create so little” (pp. 110–11). He then offers what he believes are the three capacities essential for being conscious (p. 111):
1. sensing and recognizing the environment
2. self-awareness
3. planning for the future by setting goals and plans, that is, simulating the future and plotting strategy.
I would agree that these are essential aspects, but I do not see what the difficulty has been in attaining a consensus as to the nature of consciousness. When one considers the difference between being awake and being in a dreamless sleep or being conscious and then made unconscious by a sedative, blow, or death, we have distinct examples of being conscious and unconscious: in the former cases one is completely aware while in the latter cases one is entirely unaware and unconscious.
There is a difference between the awareness (as minimal as it is) of a fruit fly or a worm and the absence of any awareness in a rose or a rock, in that the former involves a sensory content while the latter does not. Of course there are degrees of consciousness, but if one is just sensing, smelling, or feeling, one is in a state of minimal awareness. Even dreaming is a kind of pseudoconsciousness in which one is aware that the dream is frightening or pleasant, but one has no control over it because it is entirely a product of the brain disconnected from one’s normal self-awareness and behavioral responses.
That an automaton can be programmed to respond as if it had feelings and conscious awareness, or to simulate either, is not sufficient to consider it actually having either. That Deep Blue could defeat Garry Kasparov in a chess match was not an indication that Deep Blue was conscious of the moves it took to defeat the world champion, as Kaku acknowledges. I am not sure whether “self-awareness is easier to achieve” as a condition of consciousness than his other two criteria, as he claims, but I certainly agree with his assessment of the current state of robotics.
Today, AI researchers are clueless about how to duplicate all these processes in a robot. Most throw up their hands and say that somehow huge networks of computers will show “emergent phenomena” in the same way that order sometimes spontaneously coalesces from chaos. When asked precisely how these emergent phenomena will create consciousness, most roll their eyes to the heavens. (p. 114)
As previously stated, since it is still a complete mystery how the neural discharges in the brain produce consciousness, it is not surprising that AI researchers do not know how electrical circuits, transistors, and computer chips, without any chemical components or regulators, can duplicate or create consciousness. Moreover, while we know that organic evolution produces “emergent phenomena,” there is no indication that purely electrical circuits do! When discussing the tremendous progress that has been made in computer science the word “evolution” has been used (or misused), although technical advances have none of the emergent features of evolution, which is a biological process.
One of the recent breakthroughs in genetics was the discovery of the crucial role the intercellular fluids, composing most of the genome, play in producing the proteins that direct the functions of the individual genes by turning their activities on and off. So is it plausible that even the extraordinary computational advances in artificial intelligence, utilizing only electrical circuits and transistors, can compensate for the lack of these biomolecular, physicochemical, and genetic processes to create robots that will exceed humans in conscious awareness, self-consciousness, intelligence, and creativity? I doubt it.
Roboticists describe the use of prostheses, inserting electronic devices into the body to restore hearing and vision or replacing amputated limbs with artificial ones, as heralding a new era in robotics that will integrate electronic devices with biology to create robots with more humanlike capabilities. Kaku cites the example of a twenty-two-year-old man who had his hand amputated and replaced by “four motors and forty sensors” connected to the nerves in his arm that relay the hand signals to his brain, enabling him to “feel” sensations in his hand and to control the movements of his fingers (cf. p. 126).
As remarkable as this is, it must not be overlooked that the sensations the young man was feeling in his fingers and his control of them ultimately depended on and occurred in his brain, not in the prosthetic devices that only transmitted the original nerve impulses. People who have had limbs amputated report that they feel sensations in the missing limb, indicating that the sensations are produced by the brain even though felt in the limb—a reminder that the brain is an organ, not a computer.
Yet computer scientists, in a kind of science-fictional world, worry about the day when robots will be produced that are smarter than we are as if computational power were the sole factor in human intelligence. They imagine a world in which the human brain can be duplicated in an electronic system consisting wholly of electrical currents, transistors, and computer chips which, when implanted into a robotic skull, can create a human robot. As reported by Kaku,
Robot pioneer Hans Moravec . . . explained to me how we might merge with our robot creations by undergoing a brain operation that replaces each neuron of our brain with a transistor inside a robot. The operation starts when we lie beside a robot body without a brain. A robotic surgeon takes every cluster of gray matter in our brain, duplicates it transistor by transistor, connects the neurons to the transistors, and puts the transistors into the empty robot skull. As each cluster of neurons is duplicated in the robot, it is discarded. (p. 130)
Although the tremendous advances in artificial intelligence that disproved the original skepticism about its future achievements should make one cautious about dismissing its present predictions, I still find this projection, based on dubious assumptions, difficult to understand and accept. Even the Oxford Dictionary defines a nerve impulse as just “a signal transmitted along a nerve fibre consisting of a wave of depolarization,” implying it is simply an electrical process. But that overlooks the role of axons and dendrites in connecting the nerve cells via synapses, and the neurophysiological and biochemical conditions that facilitate these processes, which hardly seem replicable merely by electric circuits and transistors. (For a description of how the various chemicals, hormones, and glands in our brains influence our brain processes and mental states see Joshua Reynolds’s excellent account in 20/20 Brain Power.)130
According to Kaku’s reported explanation by Hans Moravec, “[a] robotic surgeon takes every cluster of gray matter in our brain, duplicates it transistor by transistor, connects the neurons to the transistors, and puts the transistors into the empty robot skull” while “[we] are fully conscious as this delicate operation takes place.” But maintaining that “we are fully conscious during the process” makes the unlikely assumption that consciousness can be maintained solely by a set of transistors when we do not even know how the normal brain produces it! Also, if the neurons constituting the gray matter in our brains are duplicated by transistors, why must they be reconnected to neurons?
Continuing the description of the operation, Kaku states that following the operation, our brain has been entirely installed into the body of a robot. “Not only do we have a robotic body, we have also the benefits of a robot: immortality in superhuman bodies that are perfect in appearance. This will not be possible in the twenty-first century, but becomes an option in the twenty-second” (pp. 130–31). But this assumes that all life functions can be duplicated entirely by electrical transistors lacking all our neurophysiological functions. After all, a robotic body is simply made of metal and silicon completely lacking in the life-giving functions that evolution has endowed humans with!
Moreover, since robots are not capable of reproducing sexually, even if we were “perfect in appearance” what would it matter if we did not have to seek mates to reproduce or could not even have feelings of sexual attraction and love? Not only would we have lost the happiness of having loving parents and a loving wife, we could never experience the joy and tribulation of having children. And since all who became robots would be immortal, eventually there would be the same group of robotic humans existing forever, which, rather than being appealing, would seem rather boring, since it would lack any of the experiences that make human life meaningful, as well as, at times, despairing. He concludes with this bizarre description:
In the ultimate scenario, we discard our clumsy bodies entirely and eventually evolve into pure software programs that encode our personalities. We “download” our entire personalities into a computer . . . [that] behaves as if you are inside its memory, since it has encoded all your personality quirks inside its circuits. We become immortal, but spend our time trapped inside a computer, interacting with other “people” (that is other software programs) in some gigantic cyberspace/virtual reality. (p. 131; [brackets and italics added])
Can one imagine allowing oneself to be reduced to a software program so that one exists inside a computer? What would be the advantage of existing—I can’t say “living” because it would not be living in any recognizable way—under such conditions? As a computer program would one have any freedom or control over one’s existence? How could we react to other people as mutual “software programs”? Apparently some robots would remain in existence to do the downloading, but humans would be “trapped,” as Kaku states, in computers. It reads like pure fantasy, but roboticists are supposed to be scientists, not fabricators. Being immortal under such conditions would make it even more horrifying. I was relieved when I read Kaku’s conclusion: “Some science fiction writers have relished the idea that we will all become detached from our bodies and exist as immortal beings of pure intelligence living inside some computer, contemplating deep thoughts. But who would want to live like that?” (p. 132). No one I am sure; I couldn’t agree more!
Turning now to another recent development: throughout the past it was thought that nonphysical spiritual entities such as the “soul,” “life force,” Newton’s “subtle spirit,” and Henri Bergson’s “élan vital” were the creative powers in living organisms and thus the force behind evolution culminating in human consciousness, feelings, and thoughts. It wasn’t until 1953, when the biophysicist Francis Crick and the geneticist James Watson, aided by the biophysicist Maurice Wilkins and by Rosalind Franklin’s revealing X-ray crystallographic images, discovered the double helical structure of DNA, that such spiritual explanations could be eliminated. In an article, “A Structure for Deoxyribose Nucleic Acid,” they announced that it was this molecular compound, known as DNA, that was the secret of life. Watson, Crick, and Wilkins received the Nobel Prize in 1962 for their achievement (Franklin, who unfortunately had died four years earlier of ovarian cancer, apparently due to overexposure to the radiation involved in her X-ray crystallography, was ineligible to receive the prize).
It was not until the 1980s, when the theory was finally confirmed, that the earlier psychic or spiritual causes could be definitively rejected. Every molecule of DNA within our cells consists of twisted ladder-like strands forming a double helix, each rung of the helical ladder composed of the four nucleic acid bases or nucleotides: adenine (A), thymine (T), cytosine (C), and guanine (G), whose letters convey the genetic code of each individual, along with their ancestral history. Genes are specific sequences in the DNA and RNA that transmit our particular inherited traits and contain the instructions for making proteins. Due to the information-processing power of computers, the Human Genome Project, costing 3 billion dollars, succeeded by 2003 in discovering the anatomical blueprint of a human being: the complete sequencing of the genes in each human cell. Since genes are alleged to control about 50% of our neural, physiological, and mental abilities, along with recording our genetic inheritance, they should be considered the “God particles,” along with the Higgs boson.
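The four-letter code just described is concrete enough to sketch in a few lines. In the double helix each base pairs with a fixed partner across the rungs, adenine with thymine and cytosine with guanine, so either strand determines the other. A minimal illustration (the example sequences are arbitrary, chosen by me):

```python
# Watson-Crick base pairing: each base on one strand fixes its partner on
# the other, so a single strand encodes the whole double helix.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the complementary strand, read in the same direction."""
    return "".join(PAIR[base] for base in strand)

print(complement("ATCG"))  # -> TAGC
print(complement(complement("GATTACA")))  # complementing twice restores the strand
```

This determinacy is what makes both heredity and sequencing possible: reading one strand suffices to reconstruct the molecule.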
Again as a result of the extraordinary advances in computational power, it is now possible to encode one’s genes on a computer chip or CD. Reading our genetic code enables doctors to identify and forecast the liabilities due to the effects of our genes and, if detected early enough, make it possible to avert various physical and mental disorders. A dramatic discovery showing how genetic research, in addition to disclosing inherited traits, helps explain previously unexplained diseases and abnormalities that can be traced to malfunctioning genes, was announced recently by the National Human Genome Research Institute (NHGRI), a branch of the National Institutes of Health, but involving hundreds of researchers throughout the world.
What they discovered is that the vast amount of intercellular genetic fluids that previously were thought to play no part in the functioning of the genes, and were thus dismissed as “junk DNA,” actually serve a crucial role in directing and regulating the activities of the genes. Composing 98% of the cell substance, instead of being “junk DNA,” this fluid substance contains “micro-switches” that convey crucial instructions to the individual genes and their transmissions to other genes. Thus, by activating or deactivating a gene’s functions, these switches are now considered to be a major factor in producing genetic defects that cause cancer, diabetes, Parkinson’s disease, strokes, and heart failure, as well as mental disabilities such as loss of memory, bipolar disorder, schizophrenia, Alzheimer’s disease, dementia, and senility.
Now that about four million of these DNA switches have been located, physicians will be able to learn at a much earlier stage how to diagnose, treat, and eventually prevent the infirmities mentioned above. These findings clearly demonstrate the role of the genome containing the proteins that activate or deactivate the genes, and also determine how the chemical modifications of DNA affect gene functions and locate the various operative forms of RNA, another nucleic acid similar to DNA, that helps regulate the entire system. Supporting what I wrote earlier about the unlikelihood of the nervous system being replicable by transistors alone, Parkinson’s disease is caused by a deficiency in the brain of the chemical neurotransmitter dopamine, whose structure is C8H11NO2.
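The formula just cited for dopamine illustrates the point that a neurotransmitter is a specific molecule, not a current. As a small check, its molar mass can be computed from the formula; the atomic weights below are standard values that I have supplied for illustration:

```python
# Molar mass of dopamine from its formula C8H11NO2, cited above.
# Atomic weights are standard values in g/mol.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
FORMULA = {"C": 8, "H": 11, "N": 1, "O": 2}  # C8H11NO2

molar_mass = sum(ATOMIC_WEIGHT[element] * count for element, count in FORMULA.items())
print(f"Dopamine: {molar_mass:.2f} g/mol")  # ~153.18 g/mol
```

No arrangement of transistors produces such a substance; a deficiency of it must be remedied chemically, as the standard treatments for Parkinson’s disease attest.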
As a consequence of these enhanced computer investigations a new treatment has been developed, called “tissue engineering,” enabling physicians to “grow skin, blood, blood vessels, heart valves, cartilage, bone, noses, and ears in the lab from your own cells” (p. 144). An even more promising discovery was made that has aroused considerable controversy because it involves the destruction of human embryos, a process opposed by the Catholic Church on the grounds that they are living beings. Called “stem cells,” they are the earliest cells in the developing human embryo that have not yet differentiated into specialized cells (so they hardly can be considered human beings), but have the potential to develop into all the various cells of the body.
By injecting these cells into a person with defective organs or who has suffered certain injuries such as damage to the spinal cord, one will be able to replace the damaged tissue. Presently there are transplants using another person’s organs, but given the shortage of replacement organs, the capability of replacing or restoring the defective tissue or organ with engineered stem cells is much more promising. A “pixie dust” has even been created with the power of regrowing tissue.
This dust is created not from cells but from the extracellular matrix that exists between cells. This matrix is important because it contains the signals that tell the stem cells to grow in a particular fashion. When this pixie dust is applied to a fingertip that has been cut off, it will stimulate not just the fingertip but also the nail, leaving an almost perfect copy of the original finger. Up to one-third of an inch of tissue and nail has been grown in this fashion. (p. 149)
Such are the truly marvelous advances already achieved and others awaiting us!
Another area where there has been considerable medical progress is in reversing aging and increasing our longevity. Kaku reports that medical researchers “have now isolated a number of genes (age-1, age-2, daf-2) that control and regulate the aging process in lower organisms” (p. 168), and since there are counterparts in humans this “has allowed scientists to narrow the search for ‘age genes’ and look for ways to accelerate the gene repair inside the mitochondria to reverse the effect of aging” (p. 169). He predicts that by 2050,
it might be possible to slow down the aging process via a variety of therapies, for example, stem cells, the human body shop, and gene therapy to fix aging genes. We could live to be 150 or older. By 2100, it might be possible to reverse the effects of aging by accelerating cell repair mechanisms to live well beyond that. (p. 169)
Some optimists have suggested that when we fully understand the aging process we not only can reverse the process to prolong our lives but, like the robots previously discussed, achieve the religious promise of immortality. But whether or not that would be desirable is another question. In any event it seems unlikely, considering that the geological history of the earth indicates that our human species, like all previously dominant species, is doomed to extinction due to drastic climatic changes or the impact of massive meteors or asteroids on the earth, unless we can escape to an exoplanet.
Given these amazing neurophysiological, medical, and genetic advances there is now the attempt to explain the problem that has perplexed philosophers since ancient times, usually referred to as the “mind-body problem.” Having been so accustomed to directly experiencing thoughts, memories, feelings, emotions, intentions, fears, anger, hate, love, affection, etc., in the past these were simply considered endowments of consciousness so different from the body that they were attributed to a soul, vital spirit, or divine endowment. But the recent striking success in correlating these mental states with complex neurological structures in our brains has led some researchers to consider the underlying neurological processes to be not just correlated with conscious states but the conscious states themselves, and thus not needing a separate cause. As Alex Rosenberg, an advocate of this view, states:
Neuroscience is beginning to answer these questions. We can sketch some of the answer in the work that won Eric Kandel the Nobel Prize in Physiology or Medicine. The answer shows how completely wrong consciousness is when it comes to how the brain works. Indeed, it shows how wrong consciousness is when it comes to how consciousness works.131
Based on his studies of conditioning in the neurons of sea slugs, rats, and humans, Kandel concluded that all our learning and behavioral responses can be explained as due to evolutionary development producing more neurons with greater molecular complexity and synaptic connections. Continuing Rosenberg’s description:
A little training releases proteins that open up the channels, the synapses, between the neurons, so it is easier for molecules of calcium, potassium, sodium, and chloride to move through their gaps, carrying electrical charges between the neurons. . . . The genes in the nuclei of each cell that control its activities are called somatic genes, in contrast with the germ-line genes in sperm and eggs, which transmit hereditary information. Both kinds of genes contain the same information, since the order of DNA molecules in each of them is the same. Somatic genes are copied from germ-line genes during embryonic development. (p. 181)
Obviously this is a completely reductionistic conception of how we experience the world, assisted by the computer functioning as a model for the brain.
The brain is a computer whose “microprocessors”—its initial assemblies of neural circuits—are hardwired by a developmental process that starts before birth and goes on after it. Long before that process is over, the brain has already started to modify its hardwired operating system and acquired data fed through its sensory apparatus. . . . Beliefs, desires, wants, hopes, and fears are complex information storage states, vast packages of input/output circuits in the brain ready to deliver appropriate and sometimes inappropriate behavior when stimulated. (p. 189)
While my own loss of memory and hearing as I get older is explainable by the deterioration of the areas and functions of the brain correlated with the loss, I find it hard to believe that all the qualitative aspects and impacts of the world, and the interactions with other people, that constitute our experiences and thoughts are nothing more than brain processes. Yet Rosenberg states: “When consciousness convinces you that you, or your mind, or your brain has thoughts [or experiences] about things, it is wrong” (p. 172; brackets added). It seems to me that if that were true, we should be experiencing the brain processes themselves, which is quite different from what we normally are aware of.
Moreover, it would deny the referential functions of ordinary language, prose literature, poetry, opera, paintings, applied mathematics, and scientific explanations that are not experienced as neuronal structures of the brain, but mental states referring to or descriptive of the world and enabling us to communicate about it. It is these capacities that enrich our lives and they surely are not about the brain but what we are experiencing or thinking. What we see or think when we look at the moon and stars at night are not neuronal discharges but the night sky!
Also the argument that attributing experiences to consciousness is analogous to outmoded explanations in terms of souls, vital spirits, or divine endowments is mistaken because the former were never experienced as such, while the fact of having conscious experiences and thoughts of our surroundings can hardly be considered illusory. That our normal experiences are correlated with and depend upon the complex chemical-electrical neuronal processes in our brains cannot be denied, but how our brains produce our conscious experiences still remains a great mystery.
One of the most dramatic examples of how the brain produces extraordinary experiences is that of Joan of Arc, whose astonishing religious beliefs and achievements have recently been attributed to a “hyperreligiosity” caused by temporal lobe epilepsy, a condition that can also be induced by what is called “transcranial magnetic stimulation,” or TMS, along with epileptic lesions. As described by Kaku in his latest book, The Future of the Mind:
More recently, another theory has emerged about this exceptional woman [Joan of Arc]: perhaps she actually suffered from temporal lobe epilepsy. People who have this condition sometimes experience seizures, but some of them also experience a curious side effect that may shed some light on the structure of human beliefs. These patients suffer from “hyperreligiosity,” and can’t help thinking that there is a spirit or presence behind everything. Random events are never random, but have some deep religious significance. . . . The neuroscientist Dr. David Eagleman says, “Some fraction of history’s prophets, martyrs, and leaders appear to have had temporal lobe epilepsy. Consider Joan of Arc, the sixteen-year-old girl who managed to turn the tide of the Hundred Years’ War because she believed (and convinced the French soldiers) that she was hearing voices from Saint Michael the archangel, Saint Catherine of Alexandria, Saint Margaret, and Saint Gabriel.”132 (brackets added)
Kaku adds that “Some scientists have gone further and have speculated that there is a ‘God gene’ that predisposes the brain to be religious. Since most societies have created a religion of some sort, it seems plausible that our ability to respond to religious feelings might be genetically programmed into our genome” (p. 198). This might explain the universal historical appeal of religions, but not their truth.
Before closing this discussion of how recent science has achieved the major advances in our conception of reality, and thus largely replaced religion at least among philosophers and scientists, I think it would be appropriate to continue Kaku’s discussion in his previous book of the discovery of the genome and the structure and function of DNA. As previously indicated, DNA is the molecule deoxyribonucleic acid that controls heredity through the genes and, along with the ribonucleic acid RNA, conveys information to the proteins, directing their essential and ubiquitous functions. These discoveries have been the foremost contributions of genetics to demystifying human origins and revealing our genetic ancestry, disclosing our striking similarity (90% of our genetic makeup correlates with that of mice) to and common origin with other species.
Darwin’s evolutionary theory of “natural selection,” explaining how our adaptive traits evolved, confirmed by the discovery of fossil remains and reinforced by the hereditary evidence encoded in the genome in the nucleus of our cells, has provided an entirely naturalistic explanation of the origin and nature of human beings, completely refuting creationism. Moreover, knowing the genetic functions offers a futuristic means of improving human nature.
For the first time in history, due to the decoding of the genome and the structure and function of DNA, RNA, and more recently the proteome (the key to explaining the creation of proteins), we now have the means of relieving or remedying the greatest source of human misery. Even more than natural disasters, it is the tyrants, theocrats, terrorists, murderers, rapists, alcoholics, drug addicts, sadists, psychotics, paranoids, and deranged human beings, owing in large part to destructive genes (think of Hitler, Stalin, Hosni Mubarak, Bashar al-Assad, and Putin), who have been and are the major cause of the suffering in the world.
Recall the attack that set fire to the American Consulate in Benghazi and killed three American diplomats, along with the esteemed Ambassador J. Christopher Stevens who, ironically and tragically, had devoted his life to promoting better relations with Arabic and Muslim nations. Initially explained by US intelligence agents as a reaction to a video defaming Mohammad, the attack, after a thorough inquiry, was attributed by a State Department panel not to the video but to the increase in local militia assaults, and the panel concluded that it could have been prevented if two State Department bureaus had responded to requests by officials at the Benghazi Embassy for increased security. Interestingly, following the assaults most sheikhs and ayatollahs denounced the attacks, while thousands of Benghazi residents also demonstrated their opposition.
Of all the various accounts of international conflicts, such as strident nationalism, ethnic conflicts, border and territorial disputes, claims to colonial possessions, economic competition, and ideological or religious disagreements, the one I find most convincing in explaining the uprisings in the Middle East and the recent vicious assaults on Western embassies by Middle Eastern terrorists is David Deutsch’s explanation that they are due to the conflicting contrast between open dynamic and closed static societies, offered in his book The Beginning of Infinity: Explanations that Transform the World.133 He describes dynamic societies as those open to criticism of their traditional beliefs, institutions, ethnic discriminations, and social and economic discrepancies, often owing to changing conditions, and as acknowledging the right of people to express their convictions according to their individual assessments and dispositions, however unorthodox or erratic, as long as they do not pose a threat to others. It is this tolerance of the critical examination, revision, and rejection of religious doctrines and rituals, along with other repressive cultural and political institutions, while also recognizing the scientific method as the only known reliable method of inquiry, that has made many Western countries so progressive. As Carl Sagan, in his usual open-minded, astute manner, has stated:
Science is different from many another human enterprise—not, of course, in its practitioners being influenced by the culture they grew up in, nor in sometimes being right and sometimes wrong (which are common to every human activity), but in its passion for framing testable hypotheses, in its search for definitive experiments that confirm or deny ideas, in the vigor of its substantive debate, and in its willingness to abandon ideas that have been found wanting. If we were not aware of our own limitations, though, if we were not seeking further data, if we were unwilling to perform controlled experiments, if we did not respect the evidence, we would have very little leverage in our quest for the truth.134
Despite the opposition of the Catholic Church and some Protestant denominations, consider the progress that has been achieved in Western democratic societies by rejecting such false conceptions, explanations, and institutions as fixed species, geocentrism, creation ex nihilo, predestination, rigid hierarchical societies, the divine right of kings, papal or biblical infallibility, and racial or sexual discrimination. In contrast to dynamic societies, static societies (as the West was after the decline of Greek and Roman culture and the ascendance of Christianity during the medieval period, or dark ages) are closed, authoritarian societies opposed to change. A source of much social unrest and terrorism in the world today, most Islamic states still have theocratic rulers who traditionally repress the civil rights of women and reject practically all democratic reforms, owing to their acceptance of Muhammad as the prophet of their religion and of the Koran as the sacred book believed to be the word of God as dictated to him by the archangel Gabriel.
Thus Muslims generally have rejected scientific inquiries and explanations such as the Big Bang theory, the theory of evolution, and the decoding of the genome with its naturalistic explanation of the origin of human beings, along with the use of contraceptives to avoid unwanted or defective births and overpopulation, and they generally oppose homosexuality and the granting to women of greater freedom and access to higher education, although recently some of these countries underwent liberating changes referred to as the “Arab Spring,” which unfortunately has declined.
In dynamic societies, like many in the West today, the separation of church and state has been crucial because it has precluded religious institutions from exerting a controlling influence on society, notwithstanding the present opposition of the Catholic Church to same-sex marriage. In democratic or republican forms of government there are documents such as constitutions to define “the rights of man,” along with elected heads of state such as presidents or prime ministers and parliaments or congresses entrusted to pass laws to enforce those rights and promote social and economic equality, with law courts to adjudicate the law. Finally, they insist on access to a free press and uncensored televised news so the public can be well informed and compare their political and social systems to others, which has been a major motivation for change, such as the Arab Spring.
Static societies, in contrast, historically have been mainly religious (though they include secular dictatorships and fascist regimes) and do not accept the separation of church and state or the right of the people to rule. This in turn has justified the repression of free speech and the control of the news media to avoid challenges to their authority, along with the enforcement of their laws and restrictions through the fear of imprisonment, torture, and death. Thus the culture of most Islamic countries, excepting the United Arab Emirates, resembles that of the Western medieval period, which explains from the Western perspective why they appear so backward.
It is this schism between the static societies of the Middle East and the dynamic societies of the West that I believe, as does David Deutsch, to be the major cause of many of the political conflicts in the world today, and a deterrent to the democratic liberation and intellectual enlightenment of those regions. Yet in attributing the conflict to these contrasts, it should not be overlooked that religious and political institutions, along with cruel and repressive behaviors, are created by human beings, who therefore are ultimately responsible for the resultant conflicts and atrocities; thus any effective remedy must be traced to them.
Consider the imprisonment, torture, and assassination of opposition leaders in Russia, China, Iran, Liberia, and Syria for their advocacy of free speech, an open press, and democratic and legal rights. Or recall the murderous assaults on innocent people by deranged individuals in places in the United States such as Aurora, Fort Hood, Tucson, Columbine, and the Sandy Hook Elementary School in Newtown, Connecticut. Recall the two bombs that devastated Patriots’ Day during the Boston Marathon, detonated by two immigrant brothers, Tamerlan and Dzhokhar Tsarnaev, from the Russian republic of Chechnya, killing three people and causing severe injuries, including numerous amputations, to more than 250 others, and the more recent attacks in Paris on the newspaper Charlie Hebdo and later on a Jewish grocery shop.
Despite all the past changes brought about in the West to enhance its social, economic, democratic, intellectual, and technological development, no attempt has been made to improve human nature itself, because we lacked the knowledge and means to do so; instilling reasonable, benevolent, humane behavior was therefore entrusted to religious indoctrination, family instruction, ethical teachings, and governmental constraints. But as Kaku graphically describes, with the advances in genetic engineering and the ability to etch a person’s entire DNA on a computer chip or CD, it will soon be possible to identify malevolent genes or systems of genes and deactivate, remove, or replace them.
Instead of pouring so much of our financial and scientific resources into developing artificial intelligence and humanoid robots, we should devote more of our efforts to improving human nature by detecting and removing the genes causing the depraved behaviors and murderous assaults that pervade society. Just reading the newspaper accounts is extremely depressing. But just as geneticists have discovered the single gene or genes causing such previously fatal afflictions as Parkinson’s and Huntington’s diseases, diabetes, cystic fibrosis, and hemophilia, and now are on the verge of discovering the genetic causes of various forms of cancer, Alzheimer’s disease, schizophrenia, and other mental infirmities, they are acquiring the capability to identify the particular genes related to the mental derangements that are a primary cause of such abusive behaviors as deadly crimes, vicious rapes, child abuse, sexual trafficking, and torturous killings. There will be many ethical issues to address and concerns about individual privacy, but this is a social debate well worth having.
This application of genetic engineering to enhancing human nature has been disparaged as attempting to create “designer genes” or “recreate eugenics,” or rejected as “playing the role of God.” But the latter is precisely what scientific knowledge has achieved, as Kaku indicated in showing how scientific attainments have often mirrored the earlier alleged supernatural powers of the gods. In fact, as I write this, an analogous example of the contentiousness caused by the attempt to remove and replace certain genetically inheritable defects has just been reported in the Washington Post.135 The question of replacing defective genes to prevent harmful traits from being inherited has already arisen in the recent FDA debate over the possibility of creating three-parent babies, a procedure called “three-parent IVF.” The new procedure would enable mothers whose mitochondria carry mutated or defective DNA that would transmit such tragic inheritable defects as blindness, epilepsy, schizophrenia, and Down’s syndrome to their embryos to have the defective mitochondria surgically removed from their extracted egg cells and replaced by healthy mitochondria taken from the eggs of a normal woman. Then, after being fertilized in a laboratory, the mother’s repaired eggs would be implanted in her uterus so that the embryo would not carry the abnormal mitochondrial inheritance.
While this gene-replacement procedure would have to be very carefully supervised and regulated, it does not deserve the harsh moral criticism and rejection it has received from some moralists and religionists. Such a way of eliminating abominable human traits, analogous to removing the sources of horrible diseases and crippling physical defects, would seem to be another major benefit of genetic discoveries. Consider the tremendous advantages of replacing the inherited genes causing such innate human drives as sadism, pedophilia, hedonism, harmful addictions, avariciousness, vindictiveness, dishonesty, treachery, viciousness, and despotism. Improving human nature and conduct is what I would like to see as the primary achievement of the fourth transition in our revision of reality and way of life. In my view, this would do more to alleviate human suffering and enrich our lives than all other achievements and advances, and it is within our reach.