4.

MAKING BRAINS ONE MODULE AT A TIME

“Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!”

—The Red Queen, in Lewis Carroll’s Through the Looking-Glass

OUR BRAINS LOOK as wonky as Frank Gehry’s Guggenheim Museum in Bilbao, Spain, but as Gehry is apt to point out, the museum doesn’t leak. It works! Gehry is an architectural genius who expanded our imagination of physical structures that perform useful functions. Our brain also has a physical structure that performs useful functions. There is method in the madness of that wonky-looking structure, bits of which we understand, most of which we don’t. Despite centuries of research, nobody fully understands how the convoluted mesh of biological tissue inside our heads produces the experiences of our everyday life. Gazillions of electrical, chemical, and hormonal processes occur in our brain every moment, yet we experience everything as a smoothly running unified whole. How can this be? Indeed, what is the organization of our brain that generates conscious unity?

Everything has an underlying structure; physicists take this truth down to the quantum level (which we will discuss in chapter 7). We are constantly taking things apart to see what makes them tick. Things are made up of parts, and so are bodies and brains. In that sense one could say we are built out of modules, which is to say, components that interact to produce the whole functioning entity we are examining. We need to know the parts and not only how they all mesh together but also how they interact.

There is little doubt that in some way the parts of our brain work collectively to produce our mental states and behaviors. On the surface, it seems logical to think that our brain functions as a global unit to produce a single conscious experience. Even the Nobel laureate Charles Sherrington, writing in the early 1900s, described the brain as an “enchanted loom,”1 suggesting that the nervous system works coherently to create the mystical mind. Yet neurologists of the time would have suggested that he join them on medical rounds. Their clinics were full of patients whose brain injuries told a different story.

Paradoxically, while all of us feel like a single undivided entity (a fact that seems to provide intuitive evidence for Sherrington’s loom), considerable evidence suggests that the brain does not operate in a holistic fashion. Instead, our undivided consciousness is actually produced by thousands of relatively independent processing units, or, more simply, modules. Modules are specialized and frequently localized networks of neurons that serve a specific function.

The neuroscientist, physicist, and philosopher Donald MacKay once commented that it is easier to understand how something works when it is not working properly. From work in the physical sciences, he knew that engineers could more quickly figure out how something, such as a television, worked if the picture was flickering than when it was running smoothly.2 Similarly, studying broken brains helps us understand better how unbroken ones work.

The most compelling evidence for a modular brain architecture arises from the study of patients who have suffered a brain lesion. When damage occurs to localized areas of the brain, some cognitive abilities will be impaired because the network of neurons responsible for that ability no longer functions, while others remain intact, tootling along, performing flawlessly. What is so intriguing about the brain-altered patients is that no matter what their abnormality, they all seem perfectly conscious. If conscious experience depended on the smooth operation of the entire brain, that shouldn’t be what happens. Since this fact—that modules are everywhere—is so central to my thesis, it’s important that we understand how modular the brain truly is.

Missing Modules but Functioning Brains

Take a lobe, any lobe in the brain, and consider people who have suffered a stroke. People with a right parietal lobe injury, for example, will commonly suffer from a syndrome called spatial hemi-neglect. Depending on the size and location of the lesion, patients with hemi-neglect may behave as if part or all of the left side of their world, which may include the left side of their body, does not exist! This could include not eating off the left side of their plate, not shaving or putting makeup on the left side of their face, not drawing the left side of a clock, not reading the left pages of a book, and not acknowledging anything or anyone in the left half of the room. Some will deny that their left arm and leg are theirs and will not use them when trying to get out of bed, even though they are not paralyzed. Some patients will even neglect the left side of space in their imagination and memories.3 That the deficits vary according to the size and location of the lesion suggests that damage that disrupts specific neural circuits results in impairments in different component processes. Mapping the functional neuroanatomy of these lesions has provided strong evidence for this suggestion.4

Now, here is the kicker: while hemi-neglect can occur when there is actual loss of sensation or motor systems, a version of it can also occur when all sensory systems and motor systems are in good working order—a syndrome known as extinction. In this case each half brain seems to work just fine alone, but it begins to fail when required to work at the same time as the other half. Yet information in the neglected field can be used at a nonconscious level!5 That means the information is there, but the patient isn’t conscious it is there. Here is how it works. If patients with left hemi-neglect are shown visual stimuli in both their right and left visual fields at the same time, they report seeing only the stimulus on the right. If, however, they are shown only the left visual stimulus, hitting the same exact place on the retina as previously, the left stimulus is perceived normally. In other words, if there is no competition from the normal side, then the neglected side will be noticed and pop into conscious awareness! What is strangest of all is that these patients will deny that there is anything wrong; they are not conscious of the loss of these circuits and their resulting problems.

It appears, then, that their autobiographical self must be derived only from what they are conscious of. And what they are conscious of is dependent on two things. First, they are not conscious of circuits that are not working. It is as if the circuits never existed and consciousness for what the circuits did disappears with the circuit. The second thing is that some sort of competitive processing is happening. Some circuits’ processing makes it into consciousness while others’ does not. In short, conscious experience seems tied to processing that is exceedingly local, which produces a specific capacity, and that processing can also be outcompeted by the processing of other modules and never make it to consciousness. This has astounding implications.

While some patients are not conscious of parts of their body that are actually there, my all-time favorite clinical disorder is the “third man” phenomenon,* in which a person feels the presence of another who actually is not there! Known as a “feeling of a presence” (FoP), it is the sensation that someone else is present in a specific spatial location, often just over the shoulder. It is so strong that people will continue to turn their head to glimpse or offer food to the presence. This is not the same as walking down a dark alley and getting creeped out by imagining someone following you. This presence pops up unexpectedly. It is actually a common phenomenon among alpinists and others suffering intense physical exhaustion in extreme conditions.

In his book The Naked Mountain,6 Reinhold Messner, widely considered to be the greatest mountaineer of all time (he was the first to solo-climb Mount Everest and, incidentally, never uses supplemental oxygen), described what happened in 1970 while he was making his first major Himalayan ascent, of Nanga Parbat, with his brother Günther: “Suddenly there was a third climber next to me. He was descending with us, keeping a regular distance a little to my right and a few steps away from me, just out of my field of vision. I could not see the figure and still maintain my concentration but I was certain there was someone there. I could sense his presence; I needed no proof.” You don’t have to be an exhausted alpinist, however, to experience such a presence. Nearly half of widows and widowers have felt the presence of their deceased spouses.7 For some, such phenomena are the starting point for tales of apparitions, ghosts, and divine intervention.

Not so, claims the Swiss neurologist and neurophysiologist Olaf Blanke, who came across the phenomenon unexpectedly. He had triggered it with electrical stimulation to the temporal parietal cortex of a patient’s brain while trying to locate the focus of a seizure.8 He has also studied a bevy of patients who complain of an FoP. He found that lesions in the frontoparietal area are specifically associated with the phenomenon and are on the opposite side of the body from the presence.9 This location suggested to him that disturbances in sensorimotor processing and multisensory integration may be responsible. While we are conscious of our location in space, we are unaware of the multitude of processes (vision, sound, touch, proprioception, motor movement, etc.) that, when normally integrated, properly locate us there. If there is a disorder in the processing, errors can occur and our brains can misinterpret our location. Blanke and his colleagues have found that one such error manifests itself as an FoP. Recently, they cleverly induced the FoP in healthy subjects by disordering their sensory processing with the help of a robotic arm.10

When we make a movement, we expect its consequence to occur at a specific time and location in space. You scratch your back, you expect to feel a sensation simultaneously on your back. When the sensation is spatially and temporally matched as expected, your brain interprets the sensation as self-generated. If there is a mismatch, if the signals are spatially and temporally incompatible with self-touch, you attribute it to another agent. Now picture yourself blindfolded, arms extended in front of you, with your fingertip in the thimble-like slot of a “master robot” that sends signals to a robotic arm behind your back. Your finger movements control the robotic arm’s movement, which strokes your back as you move your finger. In some trials your finger feels resistance that matches the force with which it is pushing, and in others the resistance is loosey-goosey, not properly correlated to what you are doing. If the sensation on your back is synchronous with your movement, even with your arms extended in front of you, your brain creates an illusion: you will feel as if your body has drifted forward and that you are touching your own back with your finger. If, however, the touch sensation is not synchronous, if it comes a tick late, your brain cooks up something different. Your self-location drifts in the opposite direction, backward away from your fingertip; you feel as if something other than you is touching your back. If, in addition, you feel no resistance at the fingertip while controlling the arm, the asynchronous touch produces a feeling that another person is behind you touching your back! Blanke, using well-controlled bodily stimulations, demonstrated that sensorimotor conflicts (that is, signals that are spatially and temporally incompatible with physical self-touch) are sufficient to induce the FoP in healthy volunteers. These conflicts were produced by manipulating different localized neural networks—modules.

If the brain worked as a consolidated “enchanted loom,” then removing portions of the brain or stimulating erroneous processing in some circuits would either shut down the system entirely or cause dysfunction across all cognitive realms. In reality, many people can live relatively normal lives even if portions of their brain are missing or damaged. When people have damage to localized brain areas, there almost always appears to be impairment in some, but not all, cognitive domains. For example, a well-developed cognitive domain in humans is language. The language center in most people is housed in the left hemisphere. Two very distinct brain regions within the language center include Broca’s area and Wernicke’s area.

Broca’s area contributes to speech production, whereas Wernicke’s area deals with comprehension or understanding of written and spoken language and helps organize our words and sentences in an understandable way. Specifically, Broca’s area is involved with word articulation, coordinating the muscles in our lips, mouth, and tongue to accurately pronounce words, while Wernicke’s area organizes our words in a comprehensible order before we even speak. People with damage to Broca’s area have difficulty speaking: speech is effortful and comes in bursts, but the words they manage to produce are in a comprehensible order (e.g., “Brains … modu … lar”), though their speech may lack proper grammar. Broca patients are aware of their errors and are quickly frustrated. Conversely, people with damage to Wernicke’s area primarily have a comprehension disorder. Their speech has normal prosody and correct grammar, but what they say makes no sense. This shows us that each of these areas has a different and specific job; if that area is damaged, then it can no longer perform the job properly. This unambiguously demonstrates that there is hyperspecific modularity in the brain.

Why did modularity evolve in brains? I once heard the CEO of Coca-Cola describe the logic of the company’s corporate organization. As the company grew, the executives realized that having a central plant that made all of their product and then shipped it out to the world was crazy, inefficient, and costly. The shipping, the packaging costs, the travel costs of holding management meetings in “corporate headquarters,” and on and on made no sense. Clearly, they should divide the world into regions, build plants in each of those regions, and distribute their product locally. Central planning was out, local control was in. Same for the brain: cheaper and more efficient.

Evolving a Bigger Brain

Historically, it was assumed that animals with brains bigger than expected for their body size had greater intelligence and abilities. Humans were thought to have overly large brains, proportionally, and this accounted for our diverse abilities and intelligence. This theory has always had a problem, however. Neanderthals actually had bigger brains than we do, yet they didn’t make the competitive cut when Homo sapiens arrived on the scene. My own research raises another thorny problem: after split-brain surgery, the stand-alone left hemisphere (one-half of the brain) is nearly as intelligent as the intact whole brain. Bigger isn’t necessarily better. What is going on?

Suzana Herculano-Houzel and her coworkers, armed with a new technique to count neuron and non-neuron cell numbers in human brains, compared them across species. They found that rumors of our big brains were greatly exaggerated! The human brain is not out of whack size-wise but is a proportionately scaled-up primate brain. Although the human brain is much larger and has many more neurons, the ratio of neuron number to brain size is the same for chimps and humans.11 Another rousing finding was that the often-cited ratio of glial cells to neurons of 10:1, albeit tossed around with no references ever noted, was completely off the mark. In fact, the human brain holds no more than 50 percent glial cells, just as other primates’ brains do. Busting another myth, Herculano-Houzel suggests that this overestimate of the glial-to-neuron ratio of 10:1 may have been the basis for the false notion that we use only 10 percent of our brains!12

Yet compared to the brains of other mammals, the human brain has two advantages. First, it is built according to the very economical, space-saving scaling rules that apply to other primates, and among those economically built primate brains, it is the biggest and thus contains the most neurons. Brain size, however, cannot be used willy-nilly as a proxy for neuron number when comparing other species with primates. For instance, in rodents, comparing mice and rats, the rat brain is bigger but not solely because it has more neurons. For the rat, when the number of neurons increased, so did their size. Thus, a single rat neuron takes up more volume than a single mouse neuron: it’s like the size difference between capellini and spaghetti noodles. In primates, when comparing monkeys to humans, however, as neuron numbers increase, the size of the neuron stays the same. The result is that a bigger primate brain has a greater net increase in the number of neurons per volume than a bigger rodent brain. If we took a rat brain and increased its volume to be equal to that of a human brain, the rat would have only 1/7 the number of neurons as a human brain, simply because each of its neurons would take up more space. Increasing brain size is a tricky business, and it looks as if different orders (Primates, Rodentia, and the like) follow different rules when it comes to scaling up.
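
That 1/7 figure can be checked on the back of an envelope. The sketch below is a minimal calculation, assuming a rodent scaling exponent of about 1.6 and approximate rat and human figures drawn from Herculano-Houzel’s published scaling work; none of these numbers is given in this chapter, so treat the output as illustrative.

```python
# Back-of-the-envelope check of the "1/7 the neurons" claim.
# Assumed, approximate figures (from Herculano-Houzel's scaling work,
# not from this chapter):
#   rodent brains:  mass outpaces neuron number, M ~ N**1.6
#   primate brains: mass grows roughly linearly with neuron number
RAT_NEURONS = 200e6    # ~200 million neurons in a rat brain
RAT_MASS_G = 2.0       # ~2 g rat brain
HUMAN_MASS_G = 1500.0  # ~1.5 kg human brain
HUMAN_NEURONS = 86e9   # ~86 billion neurons

RODENT_EXPONENT = 1.6  # M ~ N**1.6  implies  N ~ M**(1 / 1.6)

# Scale a rat brain up to human size under rodent rules:
mass_ratio = HUMAN_MASS_G / RAT_MASS_G
scaled_rat_neurons = RAT_NEURONS * mass_ratio ** (1 / RODENT_EXPONENT)

print(f"rodent-style brain at human size: {scaled_rat_neurons / 1e9:.0f} billion neurons")
print(f"fraction of an actual human brain: 1/{HUMAN_NEURONS / scaled_rat_neurons:.0f}")
# Prints roughly 13 billion neurons, about 1/7 of the human count.
```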

And this brings us back to modules. If, as human brains increased in neuron number, every neuron were connected to every other one, the number of axons (the cable part of each neuron) would increase exponentially. Our brains would be gigantic—in fact, they would be twenty kilometers in diameter13 and require so much energy that even if we were force-fed like a Toulouse goose, we would still not be able to run the thing.14 As it is, our brains represent about 2 percent of our entire body weight and suck down about 20 percent of our energy. Brains use so much energy because they are powerful electrical systems that are constantly active, like an air conditioner in July, in Phoenix. Another problem would be that the axons would be so long that the processing speeds would take a nosedive.
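
A quick calculation shows the scale of the problem. This sketch assumes the standard estimates of roughly 86 billion neurons and on the order of 10,000 connections per neuron; neither figure appears in the chapter, so the exact outputs are illustrative only.

```python
# Why all-to-all wiring is impossible at brain scale.
N = 86e9    # ~86 billion neurons (standard estimate; an assumption here)
K = 10_000  # ~10,000 connections per neuron (order-of-magnitude assumption)

fully_connected = N * (N - 1) / 2  # every pair wired: n(n-1)/2 links
constant_degree = N * K / 2        # a fixed number of partners per neuron

print(f"all-to-all:      {fully_connected:.1e} connections")
print(f"constant degree: {constant_degree:.1e} connections")
print(f"savings factor:  {fully_connected / constant_degree:.1e}")
# All-to-all needs ~3.7e21 links; constant degree needs ~4.3e14,
# about ten million times fewer wires to build, power, and cool.
```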

The neuroscientist Georg Striedter studies how and why brains evolved differently across species. He suggests that certain laws govern connectivity as brains increase in size.15 First of all, the number of neurons that an average neuron connects to does not change with increasing brain size: the absolute number of connections per neuron stays the same, with the result that the increase in brain size is more manageable in terms of energy requirements and space. That means, however, that there is decreased connectivity overall as brains enlarge. Decreased connectivity means more independent processing.

The second law is that the connection lengths are minimized. This results in most neurons being hooked up to neighboring neurons. Short connections take less energy, less space, and less time for signaling, producing efficient communication between these localized neurons. Thus, as brains enlarge, wiring reorganization ensues, and their structural architecture changes. The resulting structural architecture is one of clusters of well-connected localized neurons, or “communities.”

This type of organization allows these separate clusters to independently specialize in performing a certain function: a module is born! While most neurons in a module sport intramodular connections, a sparse few have short connections to neurons in neighboring modules, allowing the formation of a neural circuit. A neural circuit is formed when a module receives information, modifies it, and transmits it to another module for further modification. Thus, while most modules are sparsely connected with other modules, the wiring does allow neighboring modules to form clusters for more complex processing. We will learn more about this in the next chapter when we discuss layered architecture.

Some modules are hierarchically arranged, such that they are made up of submodules, which themselves are made up of sub-submodules.16 Yet multiple modules running independently create a need for some efficient communication and coordination between them. This brings us to the third wiring requirement: Not all the connections are minimized; some long connections are maintained and serve as “shortcuts” between distant sites.

The overall architecture that these wiring laws produce is known as “small-world” architecture. This type of architecture is famous for its ability to host a high degree of modularity, yet few steps are needed to connect any two randomly selected modules. Small-world architecture is found in many complex systems, such as the western U.S. power grid and social networks. Multiple studies have borne out the notion that the brain is organized into clusters or modules of functionally interconnected regions.17
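
The small-world effect is easy to see in simulation. The sketch below uses the classic Watts-Strogatz model (my choice of illustration; the chapter does not specify a model): begin with a ring of locally clustered nodes, then rewire a small fraction of edges into long-range shortcuts, and the average number of steps between nodes collapses while local clustering stays high.

```python
import networkx as nx

# Watts-Strogatz small-world demo: n nodes on a ring, each wired to its
# k nearest neighbors, with probability p of rewiring an edge into a
# random long-range "shortcut". (Illustrative parameters, my choice.)
n, k = 500, 10

for p in (0.0, 0.01, 0.1):
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
    L = nx.average_shortest_path_length(G)  # steps between typical node pairs
    C = nx.average_clustering(G)            # how "cliquish" local wiring is
    print(f"p={p:<4}  path length={L:6.2f}  clustering={C:.3f}")

# With p=0 the wiring is purely local: high clustering but long paths.
# A handful of shortcuts (p=0.01) slashes the path length while the
# clustering barely drops: the small-world regime described above.
```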

Advantages of a Modular Brain

Looking at this design, we can see that there are many reasons why a modular brain is superior to a globally functioning brain. First of all, a modular brain cuts down on energy costs. Since it is divided into units, only regions within a given module need to be active to complete specific assignments. If you used your entire brain for every action, your brain’s electric bill would go through the cranium. It is the same deal in Phoenix in the summer. If you only have the AC running in the bedroom at night, it is cheaper than if it is cooling the entire house. Despite saving energy through modularity, is the brain really that energy efficient if a fifth of your diet is dedicated to powering the thing?

It turns out the brain is fairly efficient despite being an energy hog. Neurons transmit electrical impulses through axons and dendrites, the brain’s “wires.” Although neural wiring functions much differently than the wiring in modern electrical devices, the basic idea is the same: electrical current transfers information from one place to another, and this requires energy. The farther a signal travels, the more energy it consumes; and thicker axons, though they conduct faster, take up more space and cost more energy to build and operate. By working in local modules, the brain saves energy by operating over short distances, with thin wires and short conduction times for information traveling within and between those modules. Additionally, given the dynamics of neural systems, a 60 percent wire fraction (the proportion of gray matter made up of axons and dendrites) is what is predicted if wire length, and thus conduction delay, is minimized. Many brain structures are composed of wiring systems that are near this optimal value.18 If, instead, brains functioned as a global unit, then each brain region would need roughly equal amounts of wiring for short- and long-distance communication, and longer distances mean more “wire” and thus more “cost.” A modular brain cuts costs by keeping wire to about three-fifths of the tissue (the 60 percent wire fraction) and limiting the number of long-distance transmissions. Overall, the brain appears to maximize energy efficiency by operating in modules.
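
To put a toy number on the savings, the following sketch (entirely my construction, not from the studies cited here) scatters 1,000 “neurons” on a unit square, gives each the same number of connections, and compares the total wire length when partners are nearest neighbors versus chosen at random.

```python
import numpy as np

rng = np.random.default_rng(0)
n, degree = 1000, 20      # toy network: 1,000 neurons, 20 wires each
pos = rng.random((n, 2))  # neurons scattered on a unit square

def total_wire(partners):
    """Sum the Euclidean lengths of every neuron's outgoing wires."""
    return sum(np.linalg.norm(pos[i] - pos[j])
               for i in range(n) for j in partners[i])

# "Global" wiring: partners drawn uniformly at random.
random_partners = [rng.choice(n, degree, replace=False) for _ in range(n)]

# "Modular" wiring: each neuron connects to its nearest neighbors.
dists = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
local_partners = [np.argsort(dists[i])[1:degree + 1] for i in range(n)]

print(f"random wiring, total length: {total_wire(random_partners):8.1f}")
print(f"local wiring,  total length: {total_wire(local_partners):8.1f}")
# Same number of connections either way, but local wiring uses far
# less total wire to build, power, and wait on.
```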

Modular brains are also functionally efficient, because multiple modules can process specialized information simultaneously. It is much easier to walk, talk, and chew gum at the same time if many modular systems are working independently, rather than a single system attempting to coordinate all the actions. Plus, if the brain behaved as a single unit, then it would need to be a “jack-of-all-trades” to adequately perform all of our daily duties. It is more efficient to have specialized “master” modules perform specific tasks. Specialization is ubiquitous in complex systems. For instance, economies thrive when the best farmers farm, the best educators educate, and the best managers manage. Bad managers can sink a business, bad farmers can go bust, and bad teachers—well, we have all suffered at least one of those and know the consequences. When people apply themselves and focus on specific jobs without being concerned about all jobs necessary to keep an economy running, they become experts. Experts are more efficient producers. When experts work simultaneously, there is greater economic output than there would be if everyone attempted to do a little bit of everything. Thus, it seems reasonable to think that our brains evolved in a modular way to efficiently process multiple types of information concurrently.

Perhaps most important, a modular brain also allows faster adaptation or evolution of the system to a changing environment: because one module can be changed or duplicated independently of the rest, there is no risk of changing or losing other, already well-adapted modules in the process. Thus, further evolution of one part does not threaten well-functioning aspects of the system.

Even if we take evolution out of the equation, brain modularity is helpful in acquiring new skills. Researchers have found that the architecture of particular networks changes over the course of learning a motor skill.19 Although many skills take considerable time to perfect, we are able to learn new skills through experience. If the entire brain changed the way it functioned whenever we acquired a new skill, we would lose our expertise in old skills. The perks of brain modularity are that it saves energy when resources are scarce, allows for specialized parallel cognitive processing when time is limited, makes it easier to alter functionality when new survival pressures arise, and allows us to learn a variety of new skills. When one stops to think about it, how could the brain possibly be organized any other way?

Going Modular

Human brains are neither the only modular brains nor the only modular biological systems. Worm brains, fly brains, and cat brains are modular, as are vascular networks, protein-protein interaction networks, gene regulation networks, metabolic networks, and even human social networks.20 How did this modularity evolve? What selection pressures produce a modular system? This was the question that puzzled a trio of computer scientists who, after mulling it over, decided to test Striedter’s hypothesis that modularity is the by-product of pressure to minimize connection costs.21

Construction costs in a network include the costs of manufacturing the connections and maintaining them, and the energy it takes to transmit along them, as well as the cost of signal delays. The longer the connections and the more there are, the more expensive the network is to build and maintain.22 Also, adding more connections or length to a signaling pathway could delay critical response times—not good for survival in a competitive environment when a predator starts salivating at the sight of you, bares its fangs, and flexes its claws.

The computer scientists Jeff Clune, Jean-Baptiste Mouret, and Hod Lipson did what computer scientists do: they designed computer simulations.23 They used well-studied networks that had sensory inputs and produced outputs; those outputs determined how well a network performed when faced with environmental problems. They simulated twenty-five thousand generations of evolution, programming in a direct selection pressure either to maximize performance alone or to maximize performance and minimize connection costs. And voilà! Once wiring-cost minimization was added, in both changing and unchanging environments, modules immediately began to appear, whereas without the stipulation of minimizing costs, they didn’t. And when the three looked at the highest-performing networks that evolved, those networks were modular. Among that group, they found that the lower the costs, the greater the resulting modularity. These networks also evolved much more quickly, in markedly fewer generations, whether in stable or changing environments. These simulation experiments provide strong evidence that selection pressures to maximize network performance and minimize connection costs will yield networks that are significantly more modular and more evolvable.
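
The logic of their experiment can be sketched in a few dozen lines. What follows is not their simulation (they evolved networks that solved pattern-recognition tasks); it is a minimal stand-in I have constructed, in which “performance” is how efficiently any node can reach any other, the cost term is total wire length, and a simple mutate-and-select loop plays the role of evolution. The parameters and the fitness function are assumptions for illustration.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

rng = np.random.default_rng(1)
N = 24
pos = rng.random((N, 2))  # nodes embedded in space, so wires have lengths

def wire_cost(G):
    """Total Euclidean length of all connections."""
    return sum(np.linalg.norm(pos[u] - pos[v]) for u, v in G.edges)

def fitness(G, lam):
    """Performance (global efficiency: how directly nodes reach one
    another) minus a wiring-cost penalty weighted by lam."""
    return nx.global_efficiency(G) - lam * wire_cost(G)

def evolve(lam, steps=2000):
    """Hill-climb over networks by toggling random edges."""
    G = nx.gnm_random_graph(N, 3 * N, seed=2)
    best = fitness(G, lam)
    for _ in range(steps):
        u, v = rng.choice(N, 2, replace=False)
        H = G.copy()
        if H.has_edge(u, v):
            H.remove_edge(u, v)
        else:
            H.add_edge(u, v)
        f = fitness(H, lam)
        if f >= best:  # keep the mutation only if fitness doesn't drop
            G, best = H, f
    return modularity(G, greedy_modularity_communities(G))

print("modularity, performance only:     ", round(evolve(0.00), 3))
print("modularity, performance and cost: ", round(evolve(0.02), 3))
```

With the cost term switched on, the surviving networks keep mostly short, local connections and tend to score noticeably higher on the standard modularity measure; with it off, they densify and the score collapses, echoing the pattern Clune, Mouret, and Lipson reported.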

So now we know that modular systems have many advantages, but how do they do it? How do thousands of independent localized modules work together to coordinate our thoughts and behaviors and, ultimately, produce our conscious experience?

Modular Connections

Although modules are highly intraconnected to compute specialized functions, we have learned that they are also loosely connected to other modules. Some communication between modules is vital for coordinating complex behaviors. For instance, Broca’s and Wernicke’s areas have their own specialized functions for language, but they also must converse with each other. Wernicke’s area needs to organize phonemes and words into coherent sentences in order for Broca’s area to guide your lips, mouth, and tongue to produce the correct sound sequence. These language areas are densely connected via the arcuate fasciculus, a bundle of nerve fibers that runs between them like a highway. Your brain minimizes these costly, large communication networks by reducing connections between modules that contribute to different types of cognitive functions. There is no need to activate Broca’s and Wernicke’s areas when you’re smelling a rose, unless you launch into a sonnet on its beauty or a rant about hybridizers selecting for form over fragrance. Brain modules communicate with one another, but there are disproportionately more connections between modules that perform related cognitive processes and many fewer connections between modules involved in dissimilar processes.

Animal and Human Brains: What’s the Difference?

Even when employing strikingly different methods and data analysis techniques, most studies provide evidence that modules, in both structural and functional brain networks, exist across all species and share many of the same properties.24 It is worthwhile to take a moment to understand the difference between a structural and a functional network. “Structure” refers simply to the physical anatomy of a network: how many neurons, how they are arranged, their shape, and so forth. A functional network performs a certain function; it may have to do with speaking language, or it may have to do with understanding language. Importantly, the structure of a network does not reveal its function, or vice versa. It may lend clues, but that is it. For example, you can look at a tree and see its structure, but that tells you nothing about the function of leaves. Studies involving animals, ranging from invertebrates to mammals, have revealed that their neural modules are also highly intraconnected and spatially close to one another to reduce energy consumption. Interestingly, the neuronal network of the transparent nematode Caenorhabditis elegans (an organism possessing a few hundred highly studied neurons) functions in modules as well, despite its being one of the tiniest creatures with a neural system.25 Across species, modularity is efficient and necessary for organisms to effectively function and evolve in a competitive environment.

It is natural to assume that if modular brains are present in animals and humans, they must share similar cognitive aspects, including consciousness. Unfortunately, even though Thomas Nagel would love it, technology today does not permit us to truly understand how different organisms experience the world. Often it is even difficult for us to understand our own perception of the world. The best we can do to empirically understand the experience of others, both animals and people, is to use behavioral and brain-activity measurements.

It is not surprising that we associate conscious experience with our human complex cognitive skills. We jump to the conclusion that an animal, in order to be conscious, must have those same types of skills. We freely map the capacity to experience consciousness onto any number of things, from puppets to robots to, in my case, a 1949 Plymouth coupe.

One way researchers have looked for clues of early conscious states in other animals is to look for signs of tool use. Using tools is one behavior that is considered to indicate complex cognition. It turns out signs are all over the animal kingdom. Corvid birds, for example (in the Corvidae family: crows, ravens, jays, magpies, rooks, nutcrackers), develop tools to get food from hard-to-reach places in a manner similar to the way chimpanzees manufacture and use tools.26 Japanese crows in Sendai City use cars to crush nuts: they drop them onto pedestrian crossings and not only wait until they are crushed but also wait until the light turns red before retrieving them. New Caledonian crows are the whiz kids, making two types of tools, which they use in different ways for different jobs. They carry them when they go foraging, the way a fisherman carries his pole. They also solve “meta-tool” problems, in which they have to use one tool to get a second tool that is needed to retrieve food.27 Crows from different areas have different tool designs, suggesting that they show cultural variation and transmission.28 But basic stick-tool skills can be developed by hand-raised crows without social learning.29 While this most likely means crows are conscious in the sense that they are alive, alert, and experiencing the moment, does it indicate they are aware of their skills? They surely have some specialized modules other birds do not have, but does that make them self-aware? The many studies of their behavior, skills, and learning don’t venture to tackle that question. What about chimps?

Chimpanzees in the wild have long been observed using tools, primarily sticks to scoop up ants and honey, and leaves to scoop up water. Chimps from different geographical locations also use different tools for different purposes—here, too, suggesting cultural variation and social transmission of tool use. Yet, once a tool-use behavior has been learned by a chimp, it becomes a habit, and chimps don’t upgrade to an improved technique if one is discovered and used by a few members of their group.30 On the other hand, chimps that have been living with humans have been observed to solve puzzles and find solutions to complex problems. For example, chimpanzees that had seen a banana hanging from the ceiling out of reach stacked crates on top of one another to create a makeshift ladder to retrieve the banana.31 While the list of chimp tricks is long and dazzling, does this make them conscious beings in the same sense that humans are conscious? This is probably an ill-posed question. Perhaps the question should be “Does our conscious experience hold similar contents to that of a chimp?”

To chip away at what all of these animal studies mean, many researchers have compared chimpanzee thinking to infant thinking. In a simple hidden-object pointing task, where a desired object is placed out of view and the experimenter points to where it is, chimps are bewildered, whereas human infants succeed at fourteen months of age.32 At the same time, if a chimpanzee or a child observes a behavior, both can mimic the behavior even if they have never performed such an act before. While children imitate all actions that they are shown to attain a reward goal, even the superfluous ones, chimps imitate only the necessary ones. It has been suggested that this indicates that children are compulsive imitators, whereas chimps imitate to attain a goal. If a chimpanzee is not presented with a reward (or punishment), however, then the learned behavior is generally not repeated. In contrast, infants will mimic behaviors regardless of whether there is a reward or punishment, suggesting that human infants have a propensity to learn new behaviors for the sake of learning alone.33 This would constitute a huge difference between humans and the rest of the animal kingdom. Still, it seems that the chimps have more going on than the corvids. Does their added cranial hardware enable conscious experience or simply change its contents?

Humans have an ability to learn and solve abstract problems that exceeds the capacity of other animals. People have invented technologies that are more sophisticated and provide much more utility than any tool an animal has created. Engineers and scientists have developed computers, airplanes, skyscrapers, rockets that take us to the moon … you name it. We only need a small portion of the population to be inventive, however. Through imitation and learning, useful items and discoveries spread like wildfire through the population and become part of our everyday lives. As the gifted psychologist David Premack has pointed out, humans have a “select few” who can develop great technologies, such as controlling fire, the wheel, agriculture, electricity, cell phones, the Internet, and bacon- and cheese-stuffed potato skins. No other living species have any members that achieve such great feats.34 Is all of this extra capacity to learn, problem-solve, and invent what allows us to be conscious? There has to be hardware, which is to say special modules, that allows for these capacities. Are they the key to our understanding?

I find the whole line of thinking that there is a magic potion that produces human consciousness misguided. Seeing all the marvelous things a chimp can do springs our minds into action, and we confer on them a huge special status. We grant them entry into our consciousness club, and we are happy to do so. But it took the person who discovered and articulated the mental life of chimpanzees to ask the question: What do they think about it all? We have a theory about chimps, but do they have a theory about us?

Premack, along with his student Guy Woodruff, was the first to test whether other animals have a “theory of mind.”35 Possessing a theory of mind means that an individual ascribes mental states, such as purpose, intention, knowledge, beliefs, doubts, pretending, liking, and so forth, to himself and to others. Premack and Woodruff, who coined the term, called it a “theory” because such states in others are not directly observable; they are inferred. Humans assume that others have minds and that their mental states drive their actions. Almost forty years after the idea was proposed, the dust still hasn’t settled, but it appears that while some animals do possess some degree of theory of mind, none have it to the extent that humans do. Josep Call, Michael Tomasello, and their colleagues have spent many years whittling away at this question. Chimps understand the goals and intentions of others, and the perceptions and the knowledge of others, to some extent, but despite many attempts to prove otherwise, it appeared that chimpanzees do not understand that others may have false beliefs,36 a test that two-and-a-half-year-old children pass.37 Just recently, however, Call and Tomasello, along with Christopher Krupenye, have found evidence that suggests three species of great apes do have some implicit understanding that others have false beliefs, but they have not yet been shown to make explicit behavior choices based on an understanding of false beliefs.38 How close the apes’ theory-of-mind ability is to that of humans remains to be seen.

Dogs have recently started sharing the limelight in the animal IQ contests where sociability is concerned. Chaser, the retired psychology professor John Pilley’s famous border collie, knows over a thousand words, understands syntax, and makes inferences about what novel words mean.39 If she is told to bring the “dax” (a word she has never heard before), she will look through her substantial pile of toys and bring the one she has never seen before. Dogs can also make inferences about hidden food and other hidden objects based on social cues, such as human pointing (something that chimps do not do). Michael Tomasello suggests that this involves understanding two levels of intention: what and why. First, the dog must understand that the pointer intends for her to attend to what is being pointed at, and, second, the dog must figure out why she is supposed to: Is the person offering helpful information on where something is located, or does the person want the object for himself?40 While a chimp often follows the pointing gesture, chimps don’t understand that the food is hidden there: they don’t seem to figure out the second level of intention, the why. In the past twelve years, the impressive ability of dogs to use communicative cues made by humans has sparked the interest of some researchers studying theory of mind, and while there are early indications that dogs do possess it to some extent,41 much more research needs to be done.

While dog lovers are thrilled with these findings, it should be remembered that dogs don’t show any special flexibility in nonsocial domains. They are slaves to special interests—I mean special capacities. When presented with nonsocial cues, such as food hidden under a tilted-up board versus one lying flat, they can’t solve the problem (an easy problem for a chimp to solve), nor do they understand that they should preferentially grab a string with food attached to it rather than one that is not attached to the food, again something that a chimp clues in to right away.42 The differing cognitive abilities of dogs suggest that they possess specific yet different modules, which evolved in response to different environmental pressures. The contents of their conscious experience are different from ours and different from chimps’, though some, no doubt, are shared.

Overall, it looks as if trying to nail down a cognitive prerequisite for conscious experience is a mug’s game. A little of this and a little of that doesn’t quite capture what the brain must do to engender conscious experience. The brain is not going to give up this trick easily, if indeed it is a trick. Recall that we do not consciously experience the blind spot in our visual field, even though it is there. Our visual system is performing a consciousness trick. Yet for most people, human conscious experience is not a trick; it is a very real thing, something that is managed by a part or system in the brain, and the hunt is on to find it. Since humans possess advanced cognitive processing that allows for the development and utilization of new technologies and for the making of inferences about the beliefs and desires of others, do human brains possess something that animals do not?

A recent comparative study looked at the neuropil volume in different areas of humans’ and chimps’ brains.43 Neuropil comprises the brain areas that are made of connections: a mixture of axons, dendrites, synapses, and more. The prefrontal cortex—the brain area in humans involved in decision making, problem solving, mental state attribution, and temporal planning—has a greater percentage of neuropil than is found in chimp brains, and the dendrites in this region have more spines, with which they connect to other neurons, than do dendrites in other parts of the brain. This anatomical finding suggests that the connectivity patterns of the prefrontal neurons may contribute to what is different about our brains. Interestingly, corvids have a relatively larger forebrain than most other birds, especially the areas that are thought to be analogous to the prefrontal cortex of mammals.44 Yet, as we shall see, while this way of thinking may explain increased abilities, it is not going to get us to the goal of understanding how consciousness is enabled. Backsliding into the assumption that there is special sauce or a special brain region that gives us conscious experience is a nonstarter.

Where Is Consciousness?

We have to shift gears. We have to rid ourselves of the notion of the special sauce, the special place and thing. We have to think about the aggregate of largely independent modules and how their organization gives rise to our ever-present sense of conscious experience. As cognitive scientists, we get too fixated on the idea that consciousness is a phenomenon separate from our other psychological processes. Rather, we should be thinking about consciousness as an intrinsic aspect of many of our cognitive functions. If we lose a particular function, we lose the consciousness that accompanies it, but we don’t lose consciousness altogether.

An early clue that consciousness is not tied to a specific neural network came from my own studies on split-brain patients. While there are more neural connections within a half brain than between the two halves, there are still massive connections across the hemispheres. Even so, cutting those connections does little to one’s sense of conscious experience. That is to say, the left hemisphere keeps on talking and thinking as if nothing had happened even though it no longer has access to half of the human cortex. More important, disconnecting the two half brains instantly creates a second, also independent conscious system. The right brain now purrs along carefree from the left, with its own capacities, desires, goals, insights, and feelings. One network, split into two, becomes two conscious systems. How could one possibly think that consciousness arises from a particular specific network? We need a new idea to cope with this fact.

Consider, too, what the conscious experience is like for the split-brain patient who wakes up from surgery, and each hemisphere now doesn’t know about the other hemisphere’s visual field. The left brain doesn’t see the left side of space, and the right brain doesn’t see the right side. Yet the patient’s speaking left hemisphere does not complain of any vision loss. In fact, the patient will tell you he doesn’t notice any difference in anything after the surgery. How can this be when half the visual field is gone? Like a patient with spatial hemi-neglect, the speaking left hemisphere never complains that it has lost half its visual field. The modules that are responsible for reporting the loss are over in the right hemisphere and can no longer communicate with the left. The left hemisphere neither misses them nor is aware that they were ever there. The memories of having had that visual field are also gone from the left hemisphere. The whole conscious experience of the left visual field is now enjoyed only by the right hemisphere and completely disappears from the left hemisphere’s experience. What does this tell us about consciousness?

After being weaned away from the idea of a single “conscious” module, we can begin to zero in on what consciousness actually is. We know that local brain lesions can produce various specific cognitive disabilities. Yet such patients are still aware of the world around them. The patient with severe spatial neglect is not aware of the left half of space, but is still aware of the right.

What if conscious experience is managed by each module? Lose a module to injury or stroke, and the consciousness that accompanies that module is gone, too. Remember: patients with hemi-neglect aren’t conscious of one-half of space because the module that processes that information is no longer working. Or, if the modules responsible for locating oneself in space are not being integrated properly, conscious experience is deeply affected, and you end up with the feeling that someone else is there, just over your shoulder. Or, take people with Urbach-Wiethe disease, which leads to deterioration of the amygdalae: they no longer experience the emotion of fear. One such patient, although she has been extensively studied for more than twenty years, has no insight into her deficit and frequently finds herself in scary situations.45 It seems that because she doesn’t have the conscious experience of fear, she doesn’t avoid those situations.

This idea that consciousness is a property of individual modules, not a single network a species might have, could explain the different types of consciousness that exist across species. Animals are not unconscious zombies, but what each is conscious of differs depending on the modules it has and how those modules are connected. Humans have a rich conscious experience because of the many kinds of modules we possess. Indeed, humans might well possess highly developed integrative modules, which allow us to combine information from various modules into abstract thoughts. It is difficult to decipher how consciousness arises in humans, but thinking about consciousness as an aspect of multiple functioning modules may guide us to the answer.

Even so, if consciousness is an aspect of several different cognitive domains, how is it that people with an intact corpus callosum still experience the world as a single entity rather than a world of randomly presented snippets at any given moment in time? To understand this, we can relate the brain’s processing to a competition. Modules vary in the amount of electrical activity they possess from moment to moment, with the result that their contributions to our conscious experiences vary. The idea here is that the most “active” module wins the consciousness competition, and its processing becomes the life experience, the “state” of the individual at a particular moment in time. Imagine that you are on a beach watching an exotic bird fly through the air. At that moment, the visual thrill, the sight of the bird and its colorful feathers, has won the conscious-experience rivalry. The next moment the competition has been won by the call of another bird, and the next by a surge of curiosity, so you turn your head to locate the source of the sound. All of a sudden a sharp pain in your foot has priority, which immediately causes you to look down to see a crab clamped onto your toe. At every moment in time, your single conscious experience is the cognitive aspect that is most salient in your external or internal environment; it is the “squeaky wheel.” All the various competitive processing was performed by different modules. How does this all work?
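
As a cartoon of that squeaky-wheel competition (the module names and activity numbers below are invented for illustration), picture each module posting an activity level at every moment, with the most active one seizing the conscious spotlight:

```python
# Toy winner-take-all model of the consciousness "competition":
# at each moment, the most active module's content becomes the
# conscious experience. Module names and activity traces are invented.
moments = [
    {"vision: exotic bird": 0.9, "hearing: bird call": 0.3, "pain: toe": 0.1},
    {"vision: exotic bird": 0.4, "hearing: bird call": 0.8, "pain: toe": 0.1},
    {"vision: exotic bird": 0.3, "hearing: bird call": 0.4, "pain: toe": 0.95},
]

for t, activity in enumerate(moments):
    winner = max(activity, key=activity.get)  # the squeakiest wheel wins
    print(f"moment {t}: conscious of -> {winner}")
```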

I propose that what we call “consciousness” is a feeling forming a backdrop to, or attached to, a current mental event or instinct. It is best grasped by considering a common engineering architecture called layering, which allows complex systems to function efficiently and in an integrated fashion, from atoms to molecules, to cells, to circuits, to cognitive and perceptual capacities. If the brain indeed consists of different layers (in the engineering sense), then information from a micro level may be integrated at higher and higher layers until each modular unit itself produces consciousness. A layer architecture allows for new levels of functioning to arise from lower-level functioning parts that could not create the “higher level” experience alone. It is time to learn more about layering and the wonders it brings to understanding brain architecture. We are on the road to realizing that consciousness is not a “thing.” It is the result of a process embedded in an architecture, just as a democracy is not a thing but the result of a process.