CONCLUSION:

What Is on the Other Side?

Accessing the Pleroma

Central to this book is the idea that the brain acts as a receiver and that consciousness is located somewhere else. In this respect the brain is analogous to a radio in that it processes and ‘embodies’ information from elsewhere. No one would think of taking the back off a radio to meet the musicians whose work they have just been listening to!

However, this mistake is made consistently when reviewing brain processes. The assumption is that the person who is speaking to you is located within the brain. Yet on taking the brain apart the actual location of the person cannot be found, nor can her memories, hopes, fears or even any kind of facsimile of her personality.

Most scientists accept that every discharge from the synapse of a neuron is an individual event. But what makes up you with regard to your hopes, your dreams, your memories, your experience of a beautiful sunset, a subtle red, a richly flavoured chocolate, a sharp pain, a moving passage of Mozart, a puzzling dream, the all-encompassing love you have for your children and/or partner, and the very source of your referential self-consciousness … all this is created by trillions of these individual events. How can a collection of individual electrical discharges create any of these sensations? How can these firings of electromagnetic energy (facilitated by neurochemicals called neurotransmitters), each one of which seemingly has no life, let alone awareness, create the perceived sensations that are presented, fully formed, to consciousness? And how can that self-consciousness spontaneously ‘appear’? Indeed, at what point does ‘consciousness’ appear? How many cell firings are needed to bring it about? Ten billion? A hundred billion? Is there a ‘tipping point’ whereby the firing of one more neuron brings up ‘consciousness’ from somewhere? Where was consciousness before that ‘tipping point’, and is there a point where consciousness disappears into non-consciousness when the increasing complexity of synaptic firing goes into reverse?

Can a single cell be said to be conscious in any way? Possibly not. However, there are many single-celled organisms that seem to function extremely effectively within their particular environment. They can swim and find food and seem to show a very rudimentary ability to learn things from the environment. From this comes the million-dollar question: how can something with one cell learn anything? Indeed, if a single cell shows motivational behaviours, then could it be that sentience can be individual or collective? Does this suggest that neurons are individual ‘receivers’ of information and each receiver contributes to a web of information that in turn creates consciousness? Is this analogous to individual cells of another variety: the ones that collectively make up a solar panel? Each solar panel is made up of thousands of individual solar cells. The source of the energy is not the cells themselves but the sun, 93 million miles (150 million km) away. I suggest that an analogous process is taking place in the brain. Each neuron is picking up ‘energy’ from within itself.

This ‘energy’ is actually information (or, as David Bohm termed it, ‘in-formation’) and is drawn up, to use Bohm’s term, from the zero-point field (Ervin László’s ‘Akashic Field’). I believe that the process is akin to that suggested by Stuart Hameroff and Roger Penrose in their Orch-OR model. As we discovered earlier (p18), within each neuron are billions of structures called microtubules. The internal walls of each microtubule are known to give off pulses of single-photon light. These are fired inwards and towards each other inside the cylindrical, hollow microtubule. These photons (collectively manifest as electromagnetic waves) ‘interfere’ with each other and in doing so create interference patterns. As many of you will know, holograms are created by interference patterns of coherent light. Now we know that holograms are odd, in that each part of a holographic image contains the whole image. David Bohm, in his book Wholeness and the Implicate Order, suggested that in this respect the universe itself may be holographic in nature. Coincidentally, at the same time, psychologist Karl Pribram was suggesting that memory location in the brain worked on holographic (distributed) principles.
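For readers who want the optics behind this, a minimal sketch: when two coherent waves of intensities $I_1$ and $I_2$ meet with a phase difference $\Delta\phi$, the standard two-beam interference formula gives the combined intensity as

\[
I = I_1 + I_2 + 2\sqrt{I_1 I_2}\,\cos\Delta\phi .
\]

The $\cos\Delta\phi$ term produces the alternating bright and dark fringes of an interference pattern. A hologram is made by recording the fringes formed between a reference beam and light scattered from an object; because every region of the recording receives light from the whole object, every region carries phase information about the whole scene – which is why a fragment of a hologram can reconstruct the entire image.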

If the above model has any validity, it would explain how a brain can ‘create’ sentience and consciousness. It is in fact not creating it, but is uploading it from a digital in-formation field, a field that fills everything and is, in effect, everything. Matter is created from in-formation. This means that it is matter that is the brain-generated hallucination. But, more importantly, the physical processes within the brain (the neurochemicals, the neurons themselves and all the other physical processes) are themselves created from digital information. This is the famous ‘it from bit’ concept we owe to quantum physicist John Archibald Wheeler. The phrase came from a paper that Wheeler wrote in 1989, wherein he stated:

I suggest that we may never understand this strange thing, the quantum, until we understand how information may underlie reality. Information may not just be what we ‘learn’ about the world. It may be what ‘makes’ the world. An example of the idea of it from bit: when a photon is absorbed, and thereby ‘measured’ – until its absorption, it had no true reality – an unsplittable bit of information is added to what we know about the world, ‘and’, at the same time, that bit of information determines the structure of one small part of the world. It ‘creates’ the reality of the time and place of that photon’s interaction.1

Of course materialist-reductionists will argue that it is self-evident that the brain processes information and that all we are is the ‘epiphenomenon’ of all the neurological interactions, the neurochemical exchanges and the electrical energy surging around our brains. They will argue that, working together, individual components can create something that is not evident in their individual parts. For example, the atoms of individual elements can come together and create something very different from their individuated reality. Oxygen and hydrogen come together to create water, which is very different from either of its ‘parent’ elements. The coming together of the atoms in certain configurations creates additional functionalities. However, water is not a brain, although interestingly enough the human brain is made up of around 78 per cent water, the remainder consisting of 10 per cent lipids, 8 per cent proteins, 1 per cent carbohydrates, 2 per cent soluble organic substances and 1 per cent inorganic salts.2

In recent years it has been discovered that the brain shows incredible abilities to repair itself after damage. In a continuous process known as neuronal plasticity, it can re-organize its internal structures to compensate for damage in one area by recreating the lost functionality in another.3

All this suggests that the brain has access to information fields that are not part of the structure evident from the present scientific model. Indeed, there is evidence that the brain uses quantum-field information in its processes of rejuvenation.

We know from research that the hippocampus is the place in the brain where memories are processed and recalled. This was powerfully proven by the famous case of ‘H M’, a young man whose hippocampi were removed in an attempt to cure him of his seizures. In that regard the operation was totally successful. However, there was an unfortunate side-effect. After the operation H M was unable to lay down any new memories. He existed in the ‘now’, believing that he was still 25 years old and consulting a surgeon about a possible operation to cure his epilepsy. He had no new memories after 1953. This famous case proved the role the hippocampus has in the initial processing of memories.

However, though the hippocampus may be able to reconstitute memories, it is not where memories are physically located. In an intriguing paper published in the late 1950s, James McConnell of the University of Michigan did a series of experiments with planarian worms. McConnell’s team taught a group of worms to react to light in a non-instinctive way. They were then killed, chopped up and fed to a new group of planarians. McConnell claimed that the new worms acquired the same non-instinctive behaviours. This caused a sensation and created something known as the ‘molecular theory’ of memory.

Three years later a group of Californian researchers had similar results with rats. This suggested that memories could be transferred, via molecules, from one animal to another.

In 1972 a paper by George Ungar of Baylor College of Medicine in Houston, Texas, was published in Nature. In this peer-reviewed article Ungar described how his research team had trained rats to fear the dark. The rats were then killed and their brain cells transferred to the brains of a new set of rats who had the natural affinity for darkness found in other rodents. Subsequent tests showed that fear of darkness had been transferred to the new group.

The implications of Ungar and McConnell’s discoveries caused a great deal of consternation in conservative scientific circles. In an attempt to discredit them, it was argued that if memories were carried in molecules, then on the basis of the known informational storage capacity of computers, a lifetime of memories would need 220 pounds of molecules to be encoded. However, from our present knowledge of holographic storage methods, and the potential of the microtubule model, we know that a much smaller amount of storage material would be needed. Indeed, if memory is stored non-locally, in a variation on our modern ‘cloud’ storage, the whole objection disappears and we are back looking at the raw evidence supplied by McConnell and Ungar. The storage-capacity objection dissolves entirely if the encoding mechanism is not weighty molecules but digital information encoded within microtubules.

Much to the consternation of those wedded to the ‘brain-entrapped mind’ model, the plucky little planarians raised their heads (or lack of them) above the parapet again recently. In July 2013 a paper by biologists Tal Shomrat and Michael Levin of Tufts University (Medford, Massachusetts) appeared in the Journal of Experimental Biology. Shomrat and Levin were aware of the original McConnell paper and were keen to revisit his findings using new protocols. Unlike McConnell, the Tufts researchers did not need to kill and feed the planarians to other planarians: they simply cut their heads off. This may sound rather drastic but planarians have an amazing ability to regrow a replacement head or tail if it is lost in an accident. Indeed, the head will grow a new body and the tail likewise. In 2011 it was discovered that a whole worm can be cloned from one single cell.4

Probably of less interest to most people is the fact that planarians do not like light. If light is shone on them, they will try to find shade. This is because they associate light with being visible to predators. Apparently they are also very partial to liver. The researchers placed a piece of liver on a plate located underneath a bright source of light. The worms were then placed next to the plate. They initially moved away from the light, but over a period of time the more adventurous learned that underneath the light was food and that there was no danger there. The worms that showed the most effective learning were subsequently beheaded and the researchers waited two weeks for new heads to grow. The newly headed worms ‘remembered’ the location of the liver and showed no fear of bright light.5 In an interview with National Geographic magazine Levin was asked how a worm can remember things after losing its head and brain. His response was very interesting: ‘We have no idea. What we do know is that memory can be stored outside the brain – presumably in other body cells – so that [memories] can get imprinted onto the new brain as it regenerates.’6

This suggests that despite the negative reactions at the time, McConnell was quite right in his conclusions. Of course, such findings raise huge questions for the present materialist-reductionist paradigm. If the ‘mind’ of a lowly worm can be totally destroyed only to reappear in a seemingly identical form in another brain, what does this suggest with regard to the human brain? After all, we are simply a more evolved version of the humble planarian. If we consider that the destruction of the brain equals death, and this is the criterion applied by modern medicine, then what we have here is clear evidence of life after death and the potential demise of the ‘brain-entrapped mind’ model.

The question here is whether the brain is actually needed for consciousness to exist within consensual reality. I would argue that just as a receiver is needed to convert broadcast signals into sound and vision – a TV set, say (the analogy can be adapted by those who prefer to talk about laptops, tablets or smartphones) – so it is with the brain. All that happened with the planarians was that they grew new ‘receivers’, in the same way that a new radio can be quickly tuned to pick up your favourite stations. So the real question here is, what is actually doing the signal modulation in the brain? Some have suggested that it may be the most curious structure in the brain, something we have touched on many times: the pineal gland.

Beach Barrett and ‘Metatonin’

For centuries the role of the pineal gland has been the subject of intense speculation. The philosopher René Descartes suggested, in all seriousness, that this tiny organ located at almost the direct centre of the brain was the seat of the soul. Others, including many mystical schools and even some religions, have believed that the pineal gland is the ‘third eye’ and that by developing it adepts could use it to ‘see’ the real universe, the hidden world denied to us by our everyday senses.

In 1958 Aaron Lerner discovered a new hormone which he called ‘melatonin’. This was of great interest because a few years earlier, two researchers at the Harvard Medical School, Mark Altschule and Julian Kitay, had published a landmark monograph which suggested that the pineal gland had a direct effect on the sexual maturation of mammals. They extracted the pineal glands from young rats and found that in these particular individuals sexual maturation came earlier. In 1960 Dr Virginia Fiske of Wellesley College made the related discovery that if rats were exposed to continual light, their pineal glands decreased in weight. This stimulated researcher Richard J Wurtman to try to discover what the substance responsible for this inhibition was. This led him to a collaboration with Dr Julius Axelrod, and together they discovered that the substance responsible for these mysterious effects was none other than Lerner’s melatonin.

The link between hallucinations and Alzheimer’s disease has long been known. Until recently the source of the effect was assumed to lie within the visual cortex of the brain. In July 2014 two groups of researchers, one Australian and one American, announced at the Alzheimer’s Association International Conference (AAIC) that a huge breakthrough had been made in this respect. Both sets of research results showed that these odd visual images may originate within the eye itself, not the brain. Of course, this is not that surprising. The retina is recognized as being a developmental outgrowth of the brain and may therefore be vulnerable to the same inflammatory injury that causes neurodegenerative disease.

The Australian team, from the Commonwealth Scientific and Industrial Research Organization (CSIRO), reported that all 40 subjects who had tested positive for high levels of plaque within the brain had also tested positive for β-amyloid (Aβ) within the retina.7 This protein has been associated with cell death in individuals experiencing schizophrenia. Later in the conference the American team, led by Paul D Hartung of Cognoptix Inc., reported results identical to those of their Australian associates.

You will recall that earlier in our discussions I suggested a link between Alzheimer’s disease and Charles Bonnet syndrome (CBS). The 2014 papers present us with direct experimental proof that Alzheimer’s involves retina-generated hallucinations. But there is more. It has been discovered that the retina has another role: the indirect generation of melatonin. The retina is responsible for signalling to the pineal gland that external light levels are dropping. This in turn stimulates the pineal to secrete sufficient amounts of melatonin to bring about drowsiness and sleep. So any damage to the efficacy of the retina will, it is reasonable to conclude, bring about disruption of melatonin production. And this seems to be the case. It is known that the creation of melatonin decreases with age8 and some researchers have linked this directly to the development of Alzheimer’s.9 Others have suggested that it is Alzheimer’s that brings about this decrease in the creation of melatonin. I would like to suggest that the observed link between Alzheimer’s and decreased melatonin production is related to the disruption of retinal–pineal communications brought about by the proliferation of β-amyloid within the retina. This would, in turn, explain a known Alzheimer’s-associated behaviour popularly known as ‘sundowning’, whereby patients with advanced cases of the disease develop extremely agitated behaviours during the evening hours.

What is potentially significant here is that we now seem to have a direct link between the effective functioning of the pineal gland and the development of Alzheimer’s. It has been discovered that melatonin and other structurally related indolic compounds, such as indole-3-propionic acid, are very effective in preventing the deposition of β-amyloid (Aβ) plaques. In effect, the decrease of melatonin production facilitates the development of the disease.

Other papers have shown that Alzheimer’s is also linked to the calcification of the pineal gland.10 So Alzheimer’s seems to damage the pineal gland’s ability to keep the communication channels open. However, long-term administration of externally generated melatonin seems to act as a therapeutic agent in the relief of some Alzheimer’s symptoms.

Here we have direct links between Alzheimer’s, the pineal gland and the production of melatonin. Related to this is the work of American researcher Beach Barrett. In a fascinating paper Barrett suggests that the pineal gland secretes both melatonin and a related substance that he terms ‘metatonin’.11 In effect, metatonin is his term for endogenously generated DMT. This works with the melatonin to bring about a liminal state of consciousness whereby alternative realities can be perceived. Of course, modern neurology has an alternative explanation, known technically as ‘REM intrusion’: the proposal that the subject is actually in a dream state which superimposes itself upon the waking mind. We have already encountered this surprisingly common state under the label of sleep paralysis (see p22).

Of further significance, Barrett also suggests that metatonin is found in high concentrations in the blood of embryos and children up to the age of three, adding that the concentration increases again at the point of death and in doing so brings about the near-death experience.

As we have already seen, during early childhood many youngsters report powerful hallucinations (p137). Many of these involve encounters with ‘imaginary friends’, including entities similar to those reported by adults during DMT ‘trips’, alien abductions and near-death experiences. I would like to suggest that Barrett’s ‘hypothetical’ metatonin may be responsible for the facilitation of similar experiences in adulthood. I would further argue that Alzheimer’s-induced metatonin release in old age creates the ‘hallucinations’ associated with CBS.

In fact, however, Barrett’s metatonin is far from hypothetical. You will recall that in 2009 Dominique Fontanilla at the University of Wisconsin-Madison published a paper in Science stating that the mysterious neurotransmitter that binds with the sigma-1 receptors in the brain, endopsychosin, is actually endogenous dimethyltryptamine.12 This was followed up with a paper published in 2014 showing that DMT and its close cousin 5-MeO-DMT work directly with the sigma-1 receptor sites of human immune cells.13 The final proof, if final proof were needed, came when Steven Barker and Jimo Borjigin reported the discovery of DMT in the brains of live rats.14 Barrett’s ‘metatonin’ actually exists.

So have we, at long last, discovered a crucial piece of evidence that all perceptions are, in a real sense, hallucinations? In which case, what is the actual source of these hallucinations? Could it be that everything we perceive is a form of brain-facilitated simulation?

Epilogue: Playing the Game

So far I have proposed that there is one single universe. This is the totality of all that is and it is created out of information. Information is non-physical, in the same way that the digital information that creates the illusion of a three-dimensional space in a computer graphic is non-physical. What processes the image is a conscious observer who, through various sensory organs, creates a representation of the information. The great physicist David Bohm used the term ‘in-formation’ to describe information that is processed in this way.

Imagine that you are playing a computer game in which you take the role of the central character – in computer terminology this in-game character is known as an ‘avatar’. Such games are designed as totally immersive experiences. To create this, the ‘game player’ is given a sense of embodiment within the avatar: he or she sees on screen a three-dimensional environment rendered as if it were being viewed through the eyes of the avatar. Stereo sounds from the computer speakers create a three-dimensional sound-field identical to how sounds are processed in the non-game world. Some modern games involve tactile body suits in which the game player’s movements are reproduced on screen as movements by the avatar. These movements can be seen on screen as the game player looks down at their avatar body. Tactile feedback software is also incorporated into the suit, together with ambient temperature responses and possibly even pain and pleasure simulations.

Incorporated into these suits is a headset with a pair of wide-angle goggles with a screen in each goggle. The images projected on these two screens are designed to create a three-dimensional reproduction of the game environment and a perfect facsimile of the visual field as seen in normal life. A pair of headphones reproduces a similar three-dimensional sound-field.

To make the game even more real, let us say you agree to take a short-acting amnesiac drug. This temporarily wipes clean all your memories. So when the game begins, you have no memories of who you are or, indeed, of the existence of an environment outside of the game. You are, in effect, born into the game; and for you, the game is all that there is. All your sensory feedbacks confirm that this is the case.

Now imagine a game that is designed to be a whole lifetime. Using exactly the same protocols as above, you are dropped into the game as a newly born avatar. The digital body that you find yourself in is a helpless infant. You have no memories of who you are and therefore no prior knowledge of anything. As your in-game visual systems get used to the new environment, so you begin to orientate yourself in this new world. Like any new-born, it will take time for you to control your body and to make sense of what you are seeing, hearing and feeling. Now remember you are no longer you, you are that baby, and future in-game experiences will develop and nurture you into a personality totally different from your ‘real world’ personality.

A few years ago a computer ‘life simulation’ called Second Life became hugely popular. This was a three-dimensional world that could be explored within game play. What made this interesting was that each game player shared the virtual environment with others who had created their own avatars. In effect, this meant that all entities with whom you engaged within the game were real people who could be located anywhere on Earth. Each onscreen avatar had its own out-of-game motivations sourced from the mind of the game player, and each game player interfaced with the Second Life environment from an avatar-embodied viewpoint.

To make my ‘life game’ even more powerful, imagine that my hypothetical designers have placed another active agent into the amnesiac drug. This is a time dilator. In effect, this expands the subjective in-game time so that a few hours in the external world becomes 70 or 80 years within the game. This is not such a weird idea. Time dilation effects are regularly reported by individuals who take DMT, ayahuasca and many other psychedelics. Indeed, many of us experience time dilation effects every night when we dream.

So here we have a scenario where a full life of 70 years can be experienced in a few hours. Now here is my twist. Imagine that the avatar that you are born into is you at the moment of your birth. Imagine that the environment in which the game is experienced is a re-creation of your actual real-life environment. You are born to the same parents in the same town on the day you were actually born. So now you are a version of yourself existing in a virtual-reality re-creation of your life. How would you ever know that this was a simulation? In which case, could your actual life be a simulation?

This is not as crazy as it seems. In 2003 Oxford University philosopher Nick Bostrom had an article published in Philosophical Quarterly. In it he suggested that it was almost certain that we are all living in computer-generated simulations of our lives. In simple terms, his argument goes as follows. We know from the famous ‘Moore’s law’ that computer processing power doubles roughly every two years. There is no reason to believe that this will not continue for some time to come, the only restriction being how small we can make the transistors on an integrated circuit. A new area of research called ‘quantum computing’ suggests that processing power may be almost limitless for future generations. So what will our descendants do with this processing power? Bostrom suggests that it is inevitable that they will create what he calls ‘ancestor simulations’.
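To get a feel for the numbers, a rough illustration (assuming the textbook form of Moore’s law, a doubling every two years): processing power after $t$ years is

\[
P(t) = P_0 \times 2^{t/2},
\]

so over a span of 50 years the growth factor is $2^{25} \approx 3.4 \times 10^{7}$ – some thirty million times today’s capacity, and that is before quantum computing is even considered.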

Bostrom argues that a single, planet-sized computer could, using less than a millionth of its capacity, ‘simulate the entire mental history of humankind’. By ‘mental history’ he means all the memories, hopes, dreams and anticipations of every single human being that has ever lived. He further suggests that when uploaded into an in-simulation ‘avatar’ (a digital facsimile of an individual existing within the simulation), this personality data would create self-awareness in that entity. In effect, the avatar will become sentient, aware of itself within the simulated environment. However, as the avatar is unaware that it exists within a simulation, it will believe that all that exists is the sensory information it receives from the program. Also, because it has no other knowledge base, it will have no option other than to believe that it is the person whose original perceptions and experiences populate its mind.15
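The orders of magnitude behind that ‘millionth of its capacity’ claim run roughly as follows – the figures are the illustrative ones used in the simulation-argument literature, not precise measurements:

\[
\underbrace{10^{11}}_{\text{humans ever}} \times \underbrace{10^{9}\,\text{s}}_{\text{per lifetime}} \times \underbrace{10^{16}\,\text{ops/s}}_{\text{per brain}} \approx 10^{36}\ \text{operations},
\]

set against an estimated $10^{42}$ operations per second for a planet-mass computer. On those assumptions, the whole of human mental history is about a microsecond’s work for such a machine.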

There is a raging argument in both philosophy and computing as to whether it would ever be possible to simulate self-referential consciousness within a computer program. This, of course, goes back to the David Chalmers debate discussed earlier (p17). But let us assume that Chalmers’ ‘hard problem’ (that is, that modern science has absolutely no idea how the inanimate matter that makes up the brain can spontaneously ‘create’ self-referential consciousness) can be overcome. This would mean that future scientists could use this knowledge to create self-awareness within the ‘avatars’ populating their ancestor-simulation. I am sensitive to the difficulties of grasping the enormity of what Bostrom and his supporters are suggesting. Space does not allow me to expound more on this intriguing idea. I therefore advise interested readers to invest some time in Bostrom’s original paper and the various online discussions of this hypothesis.

Physicist Tom Campbell has carried this argument forward to suggest that everything around us is made up of digital information, and in his My Big TOE trilogy he presents an all-encompassing theory that explains in great detail the workings of the simulation and effectively deals with every objection to the simulation argument.

Recently, some startling discoveries have been made with regard to the fine-structuring of the observed universe. These findings are the first to show that the perceptual cosmos may, indeed, be created out of digital information, and that it is, in effect, a huge hologram. The work is being done by a team of Fermilab scientists, in Batavia, Illinois, near Chicago, led by cosmologist Craig Hogan.

In physics there is something known as ‘entropy’. In effect, any system moves from a state of order to a state of disorder in a gradual but inexorable process. This only ever goes in one direction. It has never been observed that something in a state of disorder gradually changes to order. For example, an egg is initially in a state of total order: its shell is intact and the yolk and white are perfectly separated. If that egg is dropped onto the floor, it smashes. The shell is shattered and the yolk and white are mixed up. This process can never be reversed. We interpret this unidirectional process as a series of changes through time. It is correct to state that this process is time, or at least a visible and measurable aspect of time. And time, like entropy, only ever goes in one direction. As a system becomes more disordered, its state of entropy is said to be increasing.

States of entropy are described by information. An ordered, stable, low-entropy system needs less information to describe it than a disordered, high-entropy system. Put simply, more entropy needs more information to describe it. Information and entropy are, in this way, linked – a link that becomes particularly significant if we accept that the universe in which we exist is actually created from digital information rather than physical matter.
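The standard way of making this link precise is through the two classic entropy formulas – Boltzmann’s from physics and Shannon’s from information theory:

\[
S = k_B \ln W, \qquad H = -\sum_i p_i \log_2 p_i ,
\]

where $W$ is the number of microscopic arrangements consistent with what we observe, and $H$ is the average number of bits needed to specify one outcome among possibilities with probabilities $p_i$. A smashed egg is compatible with vastly more microscopic arrangements than an intact one, so both $W$ and the bit-count required to describe the egg exactly are larger: more entropy, more information.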

Of course, according to Einstein, physical objects are created out of energy. For example, a nuclear bomb’s power comes from the release of a huge amount of energy from a tiny amount of matter. From this a direct link can be made between energy and the information that describes that energy.
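The equation in question is, of course, $E = mc^2$, and a one-line worked example shows how a ‘tiny amount of matter’ translates into bomb-scale energy:

\[
E = mc^2 = (0.001\,\text{kg}) \times (3 \times 10^{8}\,\text{m/s})^2 = 9 \times 10^{13}\,\text{J}.
\]

A single gram of matter, fully converted, yields about $9 \times 10^{13}$ joules – roughly 21 kilotons of TNT, the order of the Nagasaki bomb.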

The law of the conservation of energy is an axiom of modern physics. In simple terms, it states that the total energy of any isolated system remains constant: its energy can be neither created nor destroyed. Of course, in our everyday world energy does seem to disappear. For example, your coffee gets colder, showing again the power of entropy. However, its heat energy is not lost; it is converted and spread over an ever-increasing area. A complication here is that the heat is lost because your kitchen is not an isolated system: it is contained within the universe itself. But the universe as a whole is such a system. By definition there is nothing outside of the universe. The universe simply cannot lose energy or the information that describes that energy. But it seems to.

It all comes down to black holes. A black hole is a star that has collapsed upon itself, becoming so dense that nothing can escape from its gravitational field – not even light. Let me explain this. Gravity is caused by the mass of an object. The more massive the object, the greater the gravitational force it exerts. You and I are held on the surface of the Earth because the Earth is much more massive than we are. However, at certain speeds – known as ‘escape velocities’ – an object can escape the gravitational pull of a much more massive object. To overcome Earth’s gravitational force an object needs to travel away from the surface at a speed of 25,020 miles per hour (40,270 km/h). As the moon is much less massive (and is, in fact, captured by the Earth’s gravitational field, which is why it is in orbit around us), the required escape velocity there is only 5,324 miles per hour (8,568 km/h). However, black holes are so dense that the escape velocity at their surface is greater than the speed of light. This is why it is a black hole: it gives off no electromagnetic energy (light) and therefore is totally black. And as nothing can travel faster than the speed of light, nothing can escape a black hole once it has been sucked in. But this is where things get weird. According to modern science, when anything is sucked into a black hole, it is totally destroyed: it ceases to be. However, this violates the law of the conservation of energy. The universe is an enclosed system and yet, in certain ‘areas’ within that system, energy and its accompanying information are totally destroyed – not converted into anything else.
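The figures above follow from the standard escape-velocity formula, and the same formula shows where the black hole’s point of no return comes from:

\[
v_{\text{esc}} = \sqrt{\frac{2GM}{r}} .
\]

For the Earth ($M \approx 5.97 \times 10^{24}$ kg, $r \approx 6{,}371$ km) this gives about 11.2 km/s – the 40,270 km/h quoted above. Setting $v_{\text{esc}} = c$ and solving for $r$ gives the Schwarzschild radius, $r_s = 2GM/c^2$: squeeze a mass inside that radius and not even light can climb back out.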

Theoretical physicist Juan Martín Maldacena has suggested a solution. This works in a similar way to holograms. Indeed, the process may be directly linked to holographic principles and may therefore be yet more evidence that what we believe to be a three-dimensional, physical universe is, in fact, created out of non-physical digital information. If we look at a holographic image we see a three-dimensional object located in the space in front of us. But this is simply a trick of the light. What is actually there is a two-dimensional image projected into three-dimensional space. Maldacena argues that the answer to the mystery of how energy is lost from an enclosed system is that our three-dimensional universe is actually a huge two-dimensional hologram. This universal ‘image’ is created by this no-longer-lost energy spilling back out from the ‘surfaces’ of trillions upon trillions of tiny black holes that pepper the inner surface of the ever-expanding universe. I am aware that this is a very difficult concept to grasp. For those readers interested in the details, I suggest that they check out an article written by Jacob D Bekenstein in the August 2003 edition of Scientific American. A link to a full pdf of the article can be found in the endnotes.16
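The quantitative heart of the idea Bekenstein’s article explores is that the maximum entropy – and hence information – a region of space can hold scales with the area of its boundary, not with its volume. This is captured by the Bekenstein–Hawking formula for the entropy of a black hole:

\[
S_{BH} = \frac{k_B c^3 A}{4 G \hbar},
\]

where $A$ is the area of the event horizon. If a region’s entire information budget lives on its two-dimensional surface, then a three-dimensional volume can in principle be completely described by data ‘written’ on its boundary – exactly the hologram-like property that Maldacena’s proposal exploits.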

In a later article published in July 2014, Hogan hypothesized that our macroscopic world is like a ‘four-dimensional video display’. He argued that if we stare deeply enough into the structure of matter we will discover the bitmap of our holographic universe, in the same way that if you look closely enough at a computer screen you will eventually spot the individual pixels.17 In 2013 the German GEO600 gravitational-wave detector found the ‘pixelation’ that Hogan had suggested would exist if the universe was a hologram.

In other words, there is strong reason to believe that this universe and everything in it is a super-hologram created out of digital information. If this is so, then Bostrom may be right: we may all be existing in a computer game of our own lives created by our descendants.

This has huge implications with regard to the life-as-a-computer-simulation model with which I started this chapter. If our descendants can program one life for each of us, why would they not program in all possible lives as well? One of the leading theories of modern quantum physics is the ‘many worlds’ or ‘many minds’ hypothesis. This suggests that the only way to explain certain observed quantum effects is that the outcome of each and every quantum event brings about an alternative universe. First suggested by American physicist Hugh Everett III in 1957, this has become the accepted theory of an ever-growing number of quantum physicists. It would explain, for example, why this particular universe has been seemingly hard-wired for the evolution of humanity since the first nanoseconds of the Big Bang. This would be extremely fortuitous if this were the only universe there has ever been; but following the many worlds model, we can see it as a totally logical outcome. Humanity, according to this view, has evolved in the one universe that contained all the elements needed for its evolution; it has not evolved in the trillions of others that were not right.

Interestingly, the ‘many worlds interpretation’ (MWI) is being accepted by more and more scientists as the best explanation we have for the way in which the observed universe presents itself. Bostrom’s ancestor-simulation similarly suggests that all possible outcomes of all decisions also exist in potentiality.

In 2006 Stephen Hawking, with his CERN-based associate Thomas Hertog, proposed a new variation on the Everett hypothesis that presents an even closer match to the ancestor simulation model. It is known as the ‘top-down interpretation’ (TDI). In simple terms, Hawking and Hertog argue that the universe did not have one single unique beginning but a countless number of them, each one following its own unique path. We exist in the universe that was fine-tuned for our own evolution. Over billions of years each quantum event brings about a change in the evolution of our particular universe and the alternate paths remain in potentiality with regard to our own point of observation.18

So if Everett’s MWI and Hawking and Hertog’s TDI are accepted as powerful explanatory models of how the universe actually works, then Bostrom’s ancestor-simulation hypothesis fits in very well within this overall structure. Just as MWI and TDI propose that the outcomes of all actions are contained as potentialities within the developing universe, Bostrom’s model locates all the potentialities as digital information encoded within a huge quantum computer. If we accept this as a possibility, then within Bostrom’s ancestor simulation can be found the data for each and every possible life a sentient ancestor-avatar can experience.

The ancient Greeks believed that when a person passed away and they had been taken to the land of the dead, they were presented with a choice. They were offered a drink from one of two chalices. One chalice contained water from the ‘Spring of Memory’ and the other held water from the ‘River of Forgetting’, the tributary of the Styx known as the Lethe. If the person chose to drink from the ‘Spring of Memory’, they walked along a path leading off to the right, to heaven. If they chose the waters of the Lethe, they took the left path and were reborn with all their past-life memories wiped clean.

The Gnostic Book of the Saviour says the same thing. It explains that the righteous person will be born into his next life without forgetting the wisdom he has learned in his last life. He will not be given the ‘draughts of oblivion’ before his next birth: rather, he will receive a ‘cup of intuition and wisdom’ which will cause the soul to ‘seek after the Mysteries of Light, until it hath found them’.19 Many esoteric texts throughout the centuries have proposed that enlightenment involves the realization that we are living in a simulated universe, unaware of the fact that all we perceive is maya, an illusion. When we ‘wake’ from this we experience ‘anamnesis’, a loss of forgetting. Of course, most modern people will recognize the connection with taking either the blue pill or the red pill in the hugely popular Matrix movies.

In my computer game analogy, this is exactly what I suggest. The game player takes an amnesiac and for the duration of the game forgets that they may have played the game before – possibly many, many times.

So how does this all relate to my Huxleyan spectrum? Well, as you recall, I have consistently maintained that conscious awareness has not one but two locations. I call these Eidolonic and Daemonic consciousness. In my model described above, the amnesiac in-game avatar is Eidolonic consciousness. It has no idea that it is experiencing its whole life in a simulation. It is embedded within maya, or, if you prefer, the Matrix. It can play the game many, many times, and at the start of each game it has its memory banks swept clean and a new life is started. A game can only come to an end with the in-game death of the avatar. This is exactly what happens in modern first-person role-playing computer games: the avatar is killed and the game player restarts the game, either at the point just before the sudden death or right back at the start of the game.

In this way, multiple in-game lives can be experienced, each one feeling totally unique as far as Eidolonic consciousness is concerned. As long as the ‘waters of the Lethe’ keep Eidolonic consciousness in a state of amnesia, there is no awareness of the true nature of reality, the place I have called the Pleroma.

In effect, the Pleroma is the location of the Nebuchadnezzar ship in the Matrix movies (where, not with any originality, it is termed the Real World). It is the place outside of the program, the reality that is seen when Blake’s ‘doors of perception’ are cleansed.

The word ‘Pleroma’ is taken from the Gnostics, a group of very early Christians who explained the existence of evil in the world by arguing that this world is an illusion created by a false god known as the Demiurge. The realm of the true God, outside of the illusion, is known as the Pleroma. The word is from the ancient Greek pleroma, ‘that which fills’. It is the totality of everything that is, including the illusory universes of the Demiurge.

In my model the Pleroma is simply the place outside of the Eidolonic program. It is the location of the game player who, you will recall from the start of this discussion, is the person actually doing the perceiving – from an exterior viewpoint. The game player is embodied within the game as Daemonic consciousness. Imagine playing a normal first-person role-playing game (RPG). As the game player you are always aware of who you are and you remember all the previous iterations of the game that you may have played. You plan the onscreen movements of your avatar based upon this knowledge. You know where to avoid the monsters and you remember the oncoming dangers before they are actually rendered on screen. In effect you are, with regard to the game, a precognitive.

I would argue that this is what may actually be happening. We are all Eidolons trapped within a Matrix-like program of our lives. Our Eidolonic consciousness suffers from amnesia with regard to any knowledge of previous games and of the Pleroma in general. However, we all have an element of the Pleroma within us: this is the Daemon which, through its state of anamnesis, can remember all the events that took place in previous lives (previous runs through the game).

And this is where the Huxleyan spectrum comes in. In this book I have given scores of examples of how certain individuals can break out of the program and, for a few seconds or longer, can glimpse a universe outside of the controls of Eidolonic consciousness. They can throw off William Blake’s ‘mind-forged manacles’ and see through the bars of Philip K Dick’s ‘Black Iron Prison’: they are like the prisoner in Plato’s Myth of the Cave who breaks free and realizes that everything he had, until that moment, thought to be real was just shadows on a wall.

Unfortunately, the rest of us are firmly trapped within the manacles and cannot move sufficiently to see through the prison bars. We firmly believe that what our senses tell us is real actually is real – and that is all there is. In Plato’s myth the escaped prisoner returns to his fellow inmates and tells them that by turning around he has seen the true nature of the universe. They simply do not believe him. They wish to be left alone to enjoy their illusions. To use another Matrix trope, they prefer to remain under the control of the blue pill. Meanwhile, the escaped prisoner, feeling the effects of the red pill, becomes more and more frustrated. In the end he is simply labelled as insane. In our modern world we would simply announce that he is hallucinating, and that the world he sees is just a creation of his addled mind.

I have discussed the many different ways that the Pleroma can be perceived by migraineurs, temporal lobe epileptics, schizophrenics, Alzheimer’s patients, autistics and people who have near-death experiences, out-of-body experiences and many other ‘altered states’. I also have given examples of how certain neurological states can open up everyday Eidolonic consciousness to Daemonic consciousness. The individuals who experience this can tune in to the information sources usually available only to the Daemon. In doing so, these Eidolons sense the presence of their silent partner.

Remember that the Daemon has lived its life many times – that is, it has ‘played the game’ many times. Therefore it has access to information from the previous games. To an Eidolon trapped in the linear nature of the game, such information will be interpreted as precognitive. It will seem to the Eidolon that they are glimpsing the future. In fact, they are remembering the past. Sometimes they will have sudden recognition of the circumstances in which they find themselves. Some will sense that they know what is about to happen next. This is interpreted as a déjà vu or déjà vécu sensation. All that is actually happening is a vague lifting of amnesia and a partial accessing of Daemonic consciousness.

The farther along the spectrum one goes, the greater the access to the Daemonic. You will recall the Japanese TLErs who believed that they had already lived this life many times. The simple truth is that they had. The sad thing is nobody believed them. But why would they? All we have here is another example of Plato’s escaped prisoner.

It is our changing from one game to another that is particularly fascinating, and this is why I placed the chapters on autism and Alzheimer’s back-to-back at the end of the book: these two conditions are, I suggest, the same ‘illness’ working in different temporal directions. Alzheimer’s happens at the end of a long, probably successful game – by which I mean that to survive to a great age suggests that other potential death situations have been avoided – while autism is the start of a new game.

The above model also explains near-death experiences. It accommodates the panoramic life review, the sensation of time slowing down, the encountering of one’s own Daemon. In fact, all the phenomena discussed in this book, with one exception, take their place within a comprehensible worldview. The one situation that does not is the encounters with seemingly sentient entities, human or otherwise, that those on the Huxleyan spectrum regularly experience. These inhabitants of Magonia are a mystery to me. They suggest that within the Pleroma there are places inhabited by all kinds of wonderful and possibly even dangerous creatures. Usually the barriers of Eidolonic consciousness protect the Eidolon from such encounters. However, when the doors of perception are opened, all kinds of entities can come through into this world.

I am intrigued by the consistency in the reports of aliens and other beings during DMT trips, CBS encounters and TLE auras and also in the fairy folk and cuddly creatures that fill childhood with such wonders. The location of Jacques Vallée’s Magonia and the motivations of its inhabitants are a great enigma to me, which I hope will become the subject of another book.

Thus far, I have discussed in passing what I think may be the neurochemical and neurophysiological facilitators of the Pleroma. I have argued that Daemonic consciousness resides in the non-dominant hemisphere and its Eidolonic counterpart in the dominant hemisphere. There is strong evidence in support of this case, and I still believe that this is the best model available. However, I am also of the opinion that there is another bicamerality of consciousness, which may mirror or even override the hemispheres model. It is to this that I now turn.

The Glutamate Connection

We have already discussed the role of glutamate with regard to the near-death experience (NDE) and its involvement in migraine-related cortical spreading depression (CSD) and epilepsy. Glutamate, probably the most fascinating of all neurotransmitters, is the one I suspect may be the major facilitator of my Huxleyan spectrum.

Technically speaking, glutamate is the ionized form of glutamic acid; its monoamide, glutamine, is the only amino acid that readily crosses the barrier between blood and brain, and together glutamine and glutamic acid are thought to account for about 80 per cent of the amino nitrogen of brain tissue. The majority of large neurons in the cerebral cortex use glutamate as their neurotransmitter. Glutamate is the key chemical messenger in the temporal and frontal lobes, and is central to the function of the hippocampus. It plays a vital role in the cognitive processes involving the cerebral cortex, including thinking and the formation and recall of memories, and is vital in perception.

Glutamate was discovered together with the three other amino acid neurotransmitters (aspartate, GABA and glycine). These four organic compounds were found in high concentrations in all cells and organs, and it was clear that they were involved in a great many metabolic pathways. As glutamate participates in virtually all mammalian brain functions, it has also been speculatively identified as the major nutrient within the ‘primordial soup’ in which life on Earth presumably originated. Glutamate – in the form of its sodium salt, monosodium glutamate – is essentially the same substance that adds flavour to food, particularly in oriental cooking. As well as being a neurotransmitter, it is also a precursor for the inhibitory neurotransmitter gamma-aminobutyric acid (GABA). It has a number of other roles as well, so it is impossible to tell exactly what role it is fulfilling when discovered in a synapse.

Glutamate may also have a major role in another element of the Huxleyan spectrum, schizophrenia. It has been discovered that a major site of changes in the schizophrenic brain is the dorsolateral prefrontal cortex (DLPC). The DLPC consists mostly of pyramidal cells, and these cells use glutamate as their neurotransmitter. This suspected linkage between glutamate and schizophrenia has been further reinforced by the way in which a drug popularly known as ‘angel dust’ (phencyclidine) can induce schizophrenia-like psychosis in otherwise normal people. It does this by acting upon glutamate receptors, and in doing so creates a feeling of euphoria, combined with hallucinations and paranoia. It has also been proposed that in some way glutamate and dopamine work together to bring about schizophrenia-like symptoms and, in doing so, open the doors of perception. This would certainly explain why phencyclidine is so effective in generating schizophrenic-like behaviours.20

A curious feature of glutamate is that it interacts with glial cells, or glia: little-understood brain cells that outnumber neurons by a ratio of at least 10 to one. There are around 100 billion neurons in the human brain and therefore a trillion glial cells. This accounts for approximately 90 per cent of the brain (this is where the much-discussed idea that we only use 10 per cent of our brains came from). Unlike neurons, glia lack axons and dendrites, and they do not directly participate in synaptic signalling: they were long considered to be simply the glue that held the brain together. Indeed, the word ‘glia’ is from the Greek for ‘glue’. In effect, they were considered to be simply the insulators for the much more important neurons. At least seven types of glial cell have been observed: Schwann cells, Müller cells, ependymal cells, oligodendrocytes, tanycytes, microglial cells and astrocytes. Of these, astrocytes are the most numerous and have been observed only in the brain and spinal cord. Many of them are star-like in appearance: hence the ‘astro’ part of the name.
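The 90 per cent figure follows from simple arithmetic on the text’s own numbers – taking the 10-to-1 ratio at face value, the glial share of the brain’s cells is

\[
\frac{10^{12}}{10^{12} + 10^{11}} = \frac{10}{11} \approx 91\% .
\]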

In 1963 Stephen Kuffler and David Potter at Harvard Medical School decided to test the idea that glial cells were neuronal insulators. They took some astrocytes from the brain of a leech and added potassium. They discovered that the cells responded to the potassium in a similar way to neurons, in that they exhibited an electrical potential.21 These results surprised many neurologists, and research began into the role of these hitherto neglected elements of the brain. It was discovered that the further up the evolutionary ladder one goes, the greater the number and size of the astrocytes within the cortex. Humans have the most numerous and the largest astrocytes of any animal. This suggested that in some way astrocytes are involved in increased intelligence and the development of self-aware consciousness.

It therefore came as no real surprise when, in 1989, a team of researchers at Yale School of Medicine discovered that astrocytes can communicate with each other and are capable of sending information to neurons.22 This is a finding of great importance. It suggests that glial cells have their own, exclusive communication system across the brain, and this web of cells contains at least 10 times as many cells as the neurological network. Some have termed this the ‘other brain’ network. Glial cells do not have synapses, but they do have structures called ‘gap junctions’. These work electrically rather than chemically, in that they release calcium ions. This is a far more effective method of cell communication than the neuron-to-neuron synaptic process. Astrocytes join together and in this way create an ‘intercellular calcium wave’23 (ICW) of information that spreads out from the source in all directions.

This form of communication may solve one of the big mysteries of neuroscience, the so-called ‘binding problem’. We touched upon the question earlier: how does everything seem to come together in the brain to create the illusion of simultaneity when information is being received from many sources and at different speeds? We perceive sensory information as all happening ‘now’, and with the personal sense of being a focal point, a unity. Could it be the astrocytes that create this sense of unity? Well, this certainly seems to be the opinion of Brazilian biologists Alfredo Pereira Jr and Fábio Augusto Furlan. They describe the ‘astroglial’ network as a ‘master hub’ that integrates conscious states by bringing together processing from various parts of the brain.24

It is significant that the neurotransmitter responsible for this release of calcium ions is glutamate. It was also discovered that astrocytes control blood flow within the brain, sending it to regions involved in brain activity. For ‘brain activity’ read thought and conscious awareness.

In his book The Root of Thought,25 neuroscientist Andrew Koob argued that glial cells and calcium waves are directly responsible for thought. Koob is fascinated by the generation of imagery in dreams and during periods of sensory deprivation. In an interview published in Scientific American in October 2009 he made this astounding claim:

Without input from our senses through neurons, how is it that we have such vivid thoughts? How is it that when we are deep in thought we seemingly shut off everything in the environment around us? In this theory, neurons are tied to our muscular action and external senses. We know astrocytes monitor neurons for this information. Similarly, they can induce neurons to fire. Therefore, astrocytes modulate neuron behavior. This could mean that calcium waves in astrocytes are our thinking mind. Neuronal activity without astrocyte processing is a simple reflex; anything more complicated might require astrocyte processing. The fact that humans have the most abundant and largest astrocytes of any animal and we are capable of creativity and imagination also lends credence to this speculation.26

I would like to add to this speculation that when we discuss imagery created during periods of sensory deprivation or dream sequences, what we are actually describing are hallucinations. From this it can be proposed that hallucinations of all kinds, all of which fall within Koob’s description of mental activity ‘without input from our senses’, are facilitated by ion exchanges between astrocytes stimulated by glutamate. This, in turn, suggests that glutamate may be the neurochemical facilitator of consciousness, the processor that assists the glial cells in downloading consciousness from the Pleroma.

What strikes me as potentially significant here is glutamate's double role. It is implicated both in the cortical spreading depression (CSD) that moves across the neuronal networks and in the ICW that fulfils a similar function within the astroglial network. Could it be that these are two separate streams of consciousness existing in parallel within the human brain? If so, then the consciousness created by the astroglial network would have at least 10 times the processing capacity of the neuronal network. Could the former be the source of Daemonic consciousness and the latter be responsible for Eidolonic consciousness? And could it be the accidental accessing of the information-processing capacity of the astroglial network by Eidolonic consciousness that brings about fleeting glimpses of the Pleroma?

I would like to suggest that the astroglial network is the neurological equivalent of dark matter, dark energy and junk DNA: three mysterious phenomena that have prompted a huge degree of speculation in recent years.

Dark energy and dark matter have been proposed to explain huge anomalies in how the universe functions. Ever since the concept of the Big Bang was first used to explain the expansion of the universe, it had been assumed that the rate of expansion must be slowing down: as with all explosions, the farther in time the ejected material travels from the source, the slower its movement away from that source. However, in 1998 astronomers observing very distant supernovae, with the help of the Hubble Space Telescope, discovered that many billions of years ago the universe was expanding more slowly than it is today; in other words, the expansion is accelerating. This discovery was totally unexpected. The favoured explanation is that roughly 68 per cent of the universe is made up of a totally unknown form of energy that drives the acceleration. Because it seems to be undetectable by our present measuring devices, it is known as 'dark' energy. A further 27 per cent is made up of a form of matter that, while betraying its presence through its gravity, is similarly invisible to direct detection. Not surprisingly, this is known as 'dark' matter. In effect, this means that everything we can observe using our senses and our best detection machines makes up only around 5 per cent of what there really is. One candidate for dark energy is something I have already discussed: the zero-point field.
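Expressed as a simple budget, using the rounded figures just quoted (the precise values vary slightly between surveys):

\[
\underbrace{68\%}_{\text{dark energy}} \;+\; \underbrace{27\%}_{\text{dark matter}} \;+\; \underbrace{5\%}_{\text{observable matter and energy}} \;=\; 100\%
\]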

DNA (deoxyribonucleic acid) is the molecule that carries the genetic instructions needed to create a living organism, storing biological information in a series of codes. However, one fact about DNA is not generally known: only around 2 per cent of human DNA actually carries protein-coding instructions. The other 98 per cent is technically known as 'non-coding' DNA. Scientists still have little idea what role most of this non-coding DNA fulfils, and its seeming uselessness is reflected in its dismissive nickname: 'junk' DNA.
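To give those percentages a sense of scale, take the commonly cited figure of roughly 3.2 billion base pairs for the human genome (my own illustrative figure; it does not appear in the sources quoted here):

\[
0.02 \times 3.2\times10^{9} \approx 6.4\times10^{7} \text{ coding base pairs}, \qquad
0.98 \times 3.2\times10^{9} \approx 3.1\times10^{9} \text{ non-coding base pairs}.
\]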

For centuries the defenders of materialist-reductionism have appealed to the 'law of parsimony', encapsulated in William of Occam's much-quoted maxim, 'Entities must not be multiplied beyond necessity.' In simple terms this advises that, among competing hypotheses, the one requiring the fewest assumptions should be preferred. This has long been argued to reflect the workings of nature itself. As Galileo famously wrote: 'Nature does not multiply things unnecessarily; she makes use of the easiest and simplest means for producing her effects; she does nothing in vain.'27 The idea was echoed a few decades later by Isaac Newton, who stated in his Principia Mathematica (1687) that 'Nature is pleased with simplicity, and affects not the pomp of superfluous causes.'

So the question must be asked: why does a seemingly parsimonious nature create such huge amounts of unnecessary and seemingly useless DNA? To have only 2 per cent of anything being of any use is, in my opinion, the total opposite of parsimony. The same logic can be applied to dark energy and dark matter. If our universe has any deep significance, why is it that everything we can observe and perceive amounts to less than 5 per cent of what really exists? Is the other 95 per cent similarly superfluous? In which case, why has nature created it?

The mystery of the astroglial network in the brain is similar in character. Why would evolution bother with such a huge number of apparently useless cells within a brain that clearly needs more space within the cranium? Why evolve sulci and gyri, the furrows and ridges found on the surface of the cerebral cortex, when additional space could have been made millions of years ago simply by not having a seemingly useless astroglial network?

Materialist-reductionism has been a wonderful tool that has helped humankind mould its environment and create the wonders of modern technology. Newtonian science and its powerful child, quantum mechanics, have created a model of understanding that can effectively explain most of what takes place in the consensual reality accessible to our collective senses. We now have telescopes that can extend our perceptions to the edge of the universe and microscopes that can show us individual atoms. We are within a few percentage points of understanding everything regarding the workings of our consensual reality. However, herein lies the problem. Our consensual reality seems to be but a tiny part of a much greater universe, of whose workings we have no real understanding. Dark matter and dark energy in cosmology; junk DNA in biology and the medical sciences; the astroglial network in neuroscience and the study of consciousness … all of these have given us fleeting glimpses of the Pleroma. Now is the time to open the doors of perception and glimpse the true universe.

In my last book for Watkins, The Infinite Mindfield, I referred to a story told by Israeli psychologist Benny Shanon regarding the location of the last great mystery. Please forgive me for repeating it here, but for me it sounds a note as true for this book as it was for that one.

On one of his many trips to Latin America, Shanon met up with an indigenous ice cream salesman deep in the Amazonian jungle. This most unlikely of sources shared with Shanon a popular myth known to his people. He explained:

God wanted to hide his secrets in a secure place. ‘Would I put them on the moon?’ He reflected. ‘But then, one day, human beings could get there, and it could be that those who would arrive there would not be worthy of the secret knowledge. Or perhaps I should hide them in the depths of the ocean?’ God entertained this as another possibility. But again, for the same reasons, he dismissed it. Then the solution occurred to Him: ‘I shall put my secrets in the inner sanctum of man’s own mind. Then only those who really deserve it will be able to get to it.’28

In my humble opinion the next great scientific frontier will be not outer space but inner space. We will break out of the confines of our present consensual reality and in doing so will take the first few tentative steps towards creating a new science to explain the wonders of the Pleroma. But to do that we must first open the doors of perception a little more.