< Gary Small > < Gigi Vorgan >
your brain is evolving right now
Excerpted from iBrain (pp. 1–22).
GARY SMALL is the Parlow-Solomon Professor on Aging at the David Geffen School of Medicine at UCLA and Director of the UCLA Center on Aging. He has written more than 500 scientific works.
Scientific American magazine named him one of the world’s top innovators in science and technology. He is also the author or coauthor of five popular books, including The Memory Bible (2003) and iBrain: Surviving the Technological Alteration of the Modern Mind (2008). More information at www.DrGarySmall.com.
GIGI VORGAN wrote, produced, and appeared in numerous feature films and television projects before joining her husband, Dr. Gary Small, to cowrite The Memory Bible. She also coauthored with him The Memory Prescription (2005), The Longevity Bible (2007), The Naked Lady Who Stood on Her Head: A Psychiatrist’s Stories of His Most Bizarre Cases (2010), and iBrain: Surviving the Technological Alteration of the Modern Mind. Contact: gigi@vorgan.com.
THE CURRENT EXPLOSION of digital technology not only is changing the way we live and communicate but is rapidly and profoundly altering our brains. Daily exposure to high technology—computers, smartphones, video games, search engines like Google and Yahoo—stimulates brain cell alteration and neurotransmitter release, gradually strengthening new neural pathways in our brains while weakening old ones. Because of the current technological revolution, our brains are evolving right now—at a speed like never before.
Besides influencing how we think, digital technology is altering how we feel, how we behave, and the way in which our brains function. Although we are unaware of these changes in our neural circuitry or brain wiring, these alterations can become permanent with repetition. This evolutionary brain process has rapidly emerged over a single generation and may represent one of the most unexpected yet pivotal advances in human history. Perhaps not since Early Man first discovered how to use a tool has the human brain been affected so quickly and so dramatically.
Television had a fundamental impact on our lives in the past century, and today the average person’s brain continues to have extensive daily exposure to TV. Scientists at the University of California, Berkeley, recently found that on average Americans spend nearly three hours each day watching television or movies, much more time than they spend on all leisure physical activities combined. But in the current digital environment, the Internet is replacing television as the prime source of brain stimulation. Seven out of ten American homes are wired for high-speed Internet. We rely on the Internet and digital technology for entertainment, political discussion, communication with friends and coworkers, and even social reform.
As the brain evolves and shifts its focus toward new technological skills, it drifts away from fundamental social skills, such as reading facial expressions during conversation or grasping the emotional context of a subtle gesture. A Stanford University study found that for every hour we spend on our computers, traditional face-to-face interaction time with other people drops by nearly thirty minutes. With the weakening of the brain’s neural circuitry controlling human contact, our social interactions may become awkward, and we tend to misinterpret, or even miss, subtle nonverbal messages. Imagine how the continued slipping of social skills might affect an international summit meeting ten years from now, when a misread facial cue or a misunderstood gesture could make the difference between escalating military conflict and peace.
The high-tech revolution is redefining not only how we communicate but how we reach and influence people, exert political and social change, and even glimpse into the private lives of coworkers, neighbors, celebrities, and politicians. An unknown innovator can become an overnight media magnet as news of his discovery speeds across the Internet. A cell phone video camera can capture a momentary misstep of a public figure and in minutes it becomes the most downloaded video on YouTube. Internet social networks like MySpace and Facebook have exceeded a hundred million users, emerging as the new marketing giants of the digital age and dwarfing traditional outlets such as newspapers and magazines.
Young minds tend to be the most exposed, as well as the most sensitive, to the impact of digital technology. Today’s young people in their teens and twenties, who have been dubbed Digital Natives, have never known a world without computers, twenty-four-hour TV news, the Internet, and cell phones—with their video, music, cameras, and text messaging. Many of these Natives rarely enter a library, let alone look something up in a traditional encyclopedia; they use Google, Yahoo, and other online search engines. The neural networks in the brains of these Digital Natives differ dramatically from those of Digital Immigrants: people—including all baby boomers—who came to the digital/computer age as adults but whose basic brain wiring was laid down during a time when direct social interaction was the norm. Their early technological communication and entertainment extended no further than the radio, the telephone, and TV.
As a consequence of this overwhelming and early high-tech stimulation of the Digital Native’s brain, we are witnessing the beginning of a deeply divided brain gap between younger and older minds—in just one generation. What used to be simply a generation gap that separated young people’s values, music, and habits from those of their parents has now become a huge divide resulting in two separate cultures. The brains of the younger generation are digitally hardwired from toddlerhood, often at the expense of neural circuitry that controls one-on-one people skills. Individuals of the older generation face a world in which their brains must adapt to high technology or they’ll be left behind—politically, socially, and economically.
Young people have created their own digital social networks, including a shorthand type of language for text messaging, and studies show that fewer young adults read books for pleasure now than in any generation before them. Since 1982, literary reading has declined by 28 percent in eighteen- to thirty-four-year-olds. Professor Thomas Patterson and colleagues at Harvard University reported that only 16 percent of adults age eighteen to thirty read a daily newspaper, compared with 35 percent of those thirty-six and older. Patterson predicts that the future of news will be in the electronic digital media rather than the traditional print or television forms.
These young people are not abandoning the daily newspaper for a stroll in the woods to explore nature. Conservation biologist Oliver Pergams at the University of Illinois recently found a highly significant correlation between how much time people spend with new technology, such as video gaming, Internet surfing, and video watching, and the decline in per capita visits to national parks.
Digital Natives are snapping up the newest electronic gadgets and toys with glee and often putting them to use in the workplace. Their parents’ generation of Digital Immigrants tends to step more reluctantly into the computer age, not because they don’t want to make their lives more efficient through the Internet and portable devices but because these devices may feel unfamiliar and might upset their routine at first.
During this pivotal point in brain evolution, Natives and Immigrants alike can learn the tools they need to take charge of their lives and their brains, while both preserving their humanity and keeping up with the latest technology. We don’t all have to become techno-zombies, nor do we need to trash our computers and go back to writing longhand. Instead, we all should help our brains adapt and succeed in this ever-accelerating technological environment.
>>> it’s all in your head
Every time our brains are exposed to new sensory stimulation or information, they function like camera film when it is exposed to an image. The light from the image passes through the camera lens and causes a chemical reaction that alters the film and creates a photograph.
As you glance at your computer screen or read this book, light impulses from the screen or page will pass through the lens of your eye and trigger chemical and electrical reactions in your retina, the membrane in the back of the eye that receives images from the lens and sends them to the brain through the optic nerve. From the optic nerve, neurotransmitters send their messages through a complex network of neurons, axons, and dendrites until you become consciously aware of the screen or page. All this takes a minuscule fraction of a second.
Perception of the image may stir intense emotional reactions, jog repressed memories, or simply trigger an automatic physical response—like turning the page or scrolling down the computer screen. Our moment-to-moment responses to our environment lead to very particular chemical and electrical sequences that shape who we are and what we feel, think, dream, and do. Although initially transient and instantaneous, enough repetition of any stimulus—whether it’s operating a new technological device or simply making a change in one’s jogging route—will lay down a corresponding set of neural network pathways in the brain, which can become permanent.
Your brain—weighing about three pounds—sits cozily within your skull and is a complex mass of tissue, jam-packed with an estimated hundred billion cells. The central bodies that control these billions of cells constitute the brain’s gray matter, also known as the cortex, an extensive outer layer of neurons. Each cell has extensions, or wires (axons), that make up the brain’s white matter and connect to dendrites, allowing the cells to communicate and receive messages from one another across synapses, or connection sites.
The brain’s gray matter and white matter are responsible for memory, thinking, reasoning, sensation, and muscle movement. Scientists have mapped the various regions of the brain that correspond to different functions and specialized neural circuitry. These regions and circuits manage everything we do and experience, including falling in love, flossing our teeth, reading a novel, recalling fond memories, and snacking on a bag of nuts.
The number and organizational complexity of these neurons, their wires, and their connections are vast and elaborate. In the average brain, the number of synaptic connection sites has been estimated at 1,000,000,000,000,000, or a million times a billion. It has taken millions of years for the brain to evolve to this point, which is what makes the current single-generation, high-tech brain evolution so phenomenal. We’re talking about significant brain changes happening over mere decades rather than over millennia.
>>> young plastic brains
The process of laying down neural networks in our brains begins in infancy and continues throughout our lives. These networks or pathways provide our brains with an organizational framework for incoming data. A young mind is like a new computer with some basic programs built in and plenty of room left on its hard drive for additional information. As more and more data enter the computer’s memory, it develops shortcuts to access that information. E-mail, word processing, and search engine programs learn the user’s preferences and repeated keywords, for which they develop shortcuts, or macros, to complete words and phrases after only one or two keys have been typed. As young malleable brains develop shortcuts to access information, these shortcuts represent new neural pathways being laid down. Young children who have learned their times tables by heart no longer use the more cumbersome neural pathway of figuring out the math problem by counting their fingers or multiplying on paper. Eventually they learn even more effective shortcuts, such as the fact that multiplying any number by ten simply requires adding a zero, and so on.
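The autocomplete analogy can be made concrete. Here is a minimal sketch, in Python, of the kind of frequency-based shortcut a program might build from a user’s typing history; the word list and function name are hypothetical, chosen only for illustration.

```python
from collections import Counter, defaultdict

# Hypothetical typing history the program has "learned" from.
typed_words = ["neural", "network", "neural", "neuron", "neural"]

# Map each one- or two-letter prefix to the words typed with it.
prefix_counts = defaultdict(Counter)
for word in typed_words:
    for n in (1, 2):  # shortcuts kick in after only one or two keys
        prefix_counts[word[:n]][word] += 1

def complete(prefix):
    """Suggest the most frequently typed word for this prefix."""
    counts = prefix_counts.get(prefix)
    return counts.most_common(1)[0][0] if counts else None

print(complete("n"))   # -> 'neural' (the most repeated word wins)
print(complete("ne"))  # -> 'neural'
```

The repeated word comes to dominate its prefix, much as a repeated stimulus strengthens its neural pathway.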
In order for us to think, feel, and move, our neurons or brain cells need to communicate with one another. As they mature, neurons sprout abundant branches, or dendrites, that receive signals from the long wires or axons of neighboring brain cells. The number of cell connections, or synapses, in the human brain reaches its peak early in life. At age two, synapse concentration maxes out in the frontal cortex, when the weight of the toddler’s brain is nearly that of an adult’s. By adolescence, these synapses trim themselves down by about 60 percent and then level off for adulthood. Because there are so many potential neural connections, our brains have evolved to protect themselves from “overwiring” by developing selectivity and letting in only a small subset of information. Our brains cannot function efficiently with too much information.
The vast number of potentially viable connections accounts for the young brain’s plasticity, its ability to be malleable and ever-changing in response to stimulation and the environment. This plasticity allows an immature brain to learn new skills readily and much more efficiently than the trimmed-down adult brain. One of the best examples is the young brain’s ability to learn language. The fine-tuned and well-pruned adult brain can still take on a new language, but it requires hard work and commitment. Young children are more receptive to the sounds of a new language and much quicker to learn the words and phrases. Linguistic scientists have found that the keen ability of normal infants to distinguish foreign-language sounds begins declining by twelve months of age.
Studies show that our environment molds the shape and function of our brains as well, and it can do so to the point of no return. We know that normal human brain development requires a balance of environmental stimulation and human contact. Deprived of these, neuronal firing and brain cellular connections do not form correctly. A well-known example is visual sensory deprivation. A baby born with cataracts will not be able to see well-defined spatial stimuli in the first six months of life. If left untreated during those six months, the infant may never develop proper spatial vision. Because of ongoing development of visual brain regions early in life, children remain susceptible to the adverse effects of visual deprivation until they are about seven or eight years old. Although exposure to new technology may appear to have a much more subtle impact, its structural and functional effects are profound, particularly on a young, extremely plastic brain.
Of course, genetics plays a part in our brain development as well, and we often inherit cognitive talents and traits from our parents. There are families in which musical, mathematical, or artistic talents appear in several family members from multiple generations. Even subtle personality traits appear to have genetic determinants. Identical twins who were separated at birth and then reunited as adults have discovered that they hold similar jobs, have given their children the same names, and share many of the same tastes and hobbies, such as collecting rare coins or painting their houses green.
But the human genome—the full collection of genes that produces a human being—cannot run the whole show. The relatively modest number of human genes—estimated at twenty thousand—is tiny compared with the billions of synapses that eventually develop in our brains. Thus, the amount of information in an individual’s genetic code would be insufficient to map out the billions of complex neural connections in the brain without additional environmental input. As a result, the stimulation we expose our minds to every day is critical in determining how our brains work.
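The mismatch is easy to see in rough numbers, using the estimates above; the following back-of-the-envelope calculation is illustrative only:

$$\frac{10^{15}\ \text{synapses}}{2 \times 10^{4}\ \text{genes}} = 5 \times 10^{10}\ \text{synapses per gene}.$$

Even if every gene did nothing but specify wiring, each would have to account for tens of billions of connections, which is why environmental input must shape the rest.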
>>> natural selection
Evolution essentially means change from a primitive to a more specialized or advanced state. When your teenage daughter learns to load music onto her new iPod while IM’ing on her laptop, talking on her cell phone, and reviewing her science notes, her brain adapts to a more advanced state by cranking out neurotransmitters, sprouting dendrites, and shaping new synapses. This kind of moment-to-moment, day-in and day-out brain morphing in response to her environment will eventually have an impact on future generations through evolutionary change.
One of the most influential thinkers of the nineteenth century, Charles Darwin, helped explain how our brains and bodies evolve through natural selection, an intricate interaction between our genes and our environment, which Darwin simply defined as a “preservation of favorable variations and the rejection of injurious variations.” Genes, made up of DNA—the blueprint of all living things—define who we are: whether we’ll have blue eyes, brown hair, flexible joints, or perfect pitch. Genes are passed from one generation to the next, but occasionally the DNA of an offspring contains errors or mutations. These errors can lead to differing physical and mental attributes that could give certain offspring an advantage in some environments. For example, the genetic mutation leading to slightly improved visual acuity gave the “fittest” ancestral hunters a necessary advantage to avoid oncoming predators and go on to kill their prey. Darwin’s principle of survival of the fittest helps explain how those with a genetic edge are more likely to survive, thrive, and pass their DNA on to the next generation. These DNA mutations also help explain the tremendous diversity within our species that has developed over time.
Not all brain evolution is about survival. Most of us in developed nations have the survival basics down—a place to live, a grocery store nearby, and the ability to dial 911 in an emergency. Thus, our brains are free to advance in creative and academic ways, achieve higher goals, and, it is hoped, increase our enjoyment of life.
Sometimes an accident of nature can have a profound effect on the trajectory of our species, putting us on a fast-track evolutionary course. According to anthropologist Stanley Ambrose of the University of Illinois, approximately three hundred thousand years ago, a Neanderthal man realized he could pick up a bone with his hand and use it as a primitive hammer. Our primitive ancestors soon learned that this tool was more effective when the object being struck was steadied with the opposite hand. This led our ancestors to develop right-handedness or left-handedness. As one side of the brain evolved to become stronger at controlling manual dexterity, the opposite side became more specialized in the evolution of language. The area of the modern brain that controls the oral and facial muscle movement necessary for language—Broca’s area—is in the frontal lobe, just next to the fine muscle area that controls hand movement.
Nine out of ten people are right-handed, and their Broca’s area, located in the left hemisphere of their brain, controls the right side of their body. Left-handers generally have their Broca’s area in the right hemisphere of their brain. Some of us are ambidextrous, but our handedness preference for the right or the left tends to emerge when we write or use any handheld tool that requires a precision grip.
In addition to handedness, the coevolution of language and toolmaking led to other brain alterations. To create more advanced tools, prehuman Neanderthals had to have a goal in mind and the planning skills to reach that goal. For example, ensuring that a primitive spear or knife could be gripped well and kill prey involved planning a sequence of actions, such as cutting and shaping the tool and collecting its binding material. Similar complex planning was also necessary for the development of grammatical language, including stringing together words and phrases and coordinating the fine motor lingual and facial muscles, which are thought to have further accelerated frontal lobe development.
In fact, when neuroscientists perform functional magnetic resonance imaging (fMRI) studies while volunteers imagine a goal and carry out secondary tasks to achieve that goal, the scientists can pinpoint areas of activation in the most anterior, or forward, part of the frontal lobe. This frontal lobe region probably developed at the same time that language and tools evolved, advancing our human ancestors’ ability to hold in mind a main goal while exploring secondary ones—the fundamental components of our human ability to plan and reason.
Brain evolution and advancement of language continue today in the digital age. In addition to the shorthand that has emerged through e-mail and instant messaging, a whole new lexicon has developed through text messaging, based on limiting the number of words and letters used when communicating on handheld devices. Punctuation marks and letters are combined in creative ways to indicate emotions, such as LOL = laugh out loud, and :-) = happy or good feelings. Whether our communications involve talking, written words, or even just emoticons, different brain regions control and react to the various types of communications. Language—either spoken or written—is processed in Broca’s area in our frontal lobes. However, neuroscientists at Tokyo Denki University in Japan found that when volunteers viewed emoticons during functional MRI scanning, the emoticons activated the right inferior frontal gyrus, a region that controls nonverbal communication skills.
>>> honey, does my brain look fat?
Natural selection has literally enlarged our brains. The human brain has grown in intricacy and size over the past few hundred thousand years to accommodate the complexity of our behaviors. Whether we’re painting, talking, hammering a nail, or answering e-mail, these activities require elaborate planning skills, which are controlled in the front part of the brain.
As Early Man’s language and toolmaking skills gradually advanced, brain size and specialization accelerated. Our ancestors who learned to use language began to work together in hunting groups, which helped them survive drought and famine. Sex-specific social roles evolved further as well. Males specialized in hunting, and those males with better visual and spatial abilities (favoring the right brain) had the hunting advantage. Our female ancestors took on the role of caring for offspring, and those with more developed language skills (left brain) were probably more nurturing to their offspring, so those offspring were more likely to survive. Even now, women tend to be more social and talk more about their feelings, while men, no longer hunters, retain their highly evolved right-brain visual-spatial skills, thus often refusing to use the GPS navigation systems in their cars to get directions.
The printing press, electricity, telephone, automobile, and air travel were all major technological innovations that greatly affected our lifestyles and our brains in the twentieth century. Medical discoveries have brought us advances that would have been considered science fiction just decades ago. However, today’s technological and digital progress is likely causing our brains to evolve at an unprecedented pace....
>>> your brain, on google
We know that the brain’s neural circuitry responds every moment to whatever sensory input it gets, and that the many hours people spend in front of the computer—doing various activities, including trolling the Internet, exchanging e-mail, videoconferencing, IM’ing, and e-shopping—expose their brains to constant digital stimulation. Our UCLA research team wanted to look at how much impact this extended computer time was having on the brain’s neural circuitry, how quickly it could build up new pathways, and whether or not we could observe and measure these changes as they occurred.
I enlisted the help of Drs. Susan Bookheimer and Teena Moody, UCLA experts in neuropsychology and neuroimaging. We hypothesized that computer searches and other online activities cause measurable and rapid alterations to brain neural circuitry, particularly in people without previous computer experience.
To test our hypotheses, we planned to use functional MRI scanning to measure the brain’s neural pathways during a common Internet computer task: searching Google for accurate information. We first needed to find people who were relatively inexperienced and naive to the computer. Because the Pew Internet Project surveys had reported that about 90 percent of young adults are frequent Internet users, compared with less than 50 percent of older people, we knew that people naive to the computer did exist and that they tended to be older.
After initial difficulty finding people who had not yet used computers, we were able to recruit three volunteers in their midfifties and sixties who were new to computer technology yet willing to give it a try. To compare the brain activity of these three computer-naive volunteers, we also recruited three computer-savvy volunteers of comparable age, gender, and socioeconomic background. For our experimental activity, we chose searching on Google for specific and accurate information on a variety of topics, ranging from the health benefits of eating chocolate to planning a trip to the Galápagos.
Next, we had to figure out a way to do MRI scanning on the volunteers while they used the Internet. Because the study subjects had to be inside the long, narrow tube of an MRI scanner during the experiment, there would be no space for a computer, keyboard, or mouse. To re-create the Google-search experience inside the scanner, the volunteers wore a pair of special goggles that presented images of website pages designed to simulate the conditions of a typical Internet search session. The system allowed the volunteers to navigate the simulated computer screen and make choices to advance their search by simply pressing a finger on a small, conveniently placed keypad.
To make sure that the functional MRI scanner was measuring the neural circuitry that controls Internet searches, we needed to factor out other sources of brain stimulation. To do this, we added a control task that involved the study subjects reading pages of a book projected through the specialized goggles during the MRI. This task allowed us to subtract from the MRI measurements any nonspecific brain activations, from simply reading text, focusing on a visual image, or concentrating. We wanted to observe and measure only the brain’s activity from those mental tasks required for Internet searching, such as scanning for targeted key words, rapidly choosing from among several alternatives, going back to a previous page if a particular search choice was not helpful, and so forth. We alternated this control task—simply reading a simulated page of text—with the Internet searching task. We also controlled for nonspecific brain stimulations caused by the photos and drawings that are typically displayed on an Internet page.
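In spirit, this subtraction design works like the following minimal sketch; it is a hypothetical illustration in Python with invented array shapes and trial counts, not the pipeline our team actually used, since real fMRI analysis relies on specialized statistical software.

```python
import numpy as np

# Hypothetical activation maps: one 3-D volume of voxel signals per
# trial, for each condition (shapes and trial counts are invented).
rng = np.random.default_rng(0)
search_trials = [rng.random((64, 64, 30)) for _ in range(20)]
reading_trials = [rng.random((64, 64, 30)) for _ in range(20)]

# Average the signal across trials within each condition.
mean_search = np.mean(search_trials, axis=0)
mean_reading = np.mean(reading_trials, axis=0)

# Subtract the control (reading) map from the task (searching) map:
# whatever survives is attributed to search-specific mental work.
search_specific = mean_search - mean_reading

# Flag voxels whose difference stands out beyond a simple threshold.
activated = search_specific > 2 * search_specific.std()
print(f"{activated.sum()} voxels exceed the threshold")
```

Alternating the two tasks in the scanner is what allows this kind of subtraction to isolate the search-related circuitry.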
Finally, to determine whether we could train the brains of Internet-naive volunteers, after the first scanning session we asked each volunteer to search the Internet for an hour each day for five days. We gave the computer-savvy volunteers the same assignment and repeated the functional MRI scans on both groups after the five days of search-engine training.
As we had predicted, the brains of computer-savvy and computer-naive subjects did not show any difference when they were reading the simulated book text; both groups had years of experience in this mental task, and their brains were quite familiar with reading books. By contrast, the two groups showed distinctly different patterns of neural activation when searching on Google. During the baseline scanning session, the computer-savvy subjects used a specific network in the left front part of the brain, known as the dorsolateral prefrontal cortex. The Internet-naive subjects showed minimal, if any, activation in this region.
One of our concerns in designing the study was that five days would not be enough time to observe any changes, but previous research suggested that even Digital Immigrants can train their brains relatively quickly. Our initial hypothesis turned out to be correct. After just five days of practice, the exact same neural circuitry in the front part of the brain became active in the Internet-naive subjects. Five hours on the Internet, and the naive subjects had already rewired their brains.
This particular area of the brain controls our ability to make decisions and integrate complex information. It also controls our mental process of integrating sensations and thoughts, as well as working memory, which is our ability to keep information in mind for a very short time—just long enough to manage an Internet search task or dial a phone number after getting it from directory assistance.
The computer-savvy volunteers activated the same frontal brain region at baseline and had a similar level of activation during their second session, suggesting that for a typical computer-savvy individual, the neural circuit training occurs relatively early and then remains stable. But these initial findings raise several unanswered questions. If our brains are so sensitive to just an hour a day of computer exposure, what happens when we spend more time? What about the brains of young people, whose neural circuitry is even more malleable and plastic? What happens to their brains when they spend their average eight hours daily with their high-tech toys and devices?
>>> techno-brain burnout
In today’s digital age, we keep our smartphones at our hip and their earpieces attached to our ears. A laptop is always within reach, and there’s no need to fret if we can’t find a landline—there’s always Wi-Fi (a wireless connection to the Internet, available almost anywhere) to keep us connected. As technology enables us to cram more and more work into our days, it seems as if we create more and more work to do.
Our high-tech revolution has plunged us into a state of continuous partial attention, which software executive Linda Stone describes as continually staying busy—keeping tabs on everything while never truly focusing on anything. Continuous partial attention differs from multitasking, wherein we have a purpose for each task and we are trying to improve efficiency and productivity. Instead, when our minds partially attend, and do so continuously, we scan for an opportunity for any type of contact at any given moment. We virtually chat as our text messages flow, and we keep tabs on active buddy lists (friends and other screen names in an instant message program); everything, everywhere is connected through our peripheral attention. Although having all our pals online from moment to moment seems intimate, we risk losing personal touch with our real-life relationships and may experience an artificial sense of intimacy compared with what we feel when we shut down our devices and devote our attention to one individual at a time. Still, many people report that if they’re suddenly cut off from someone’s buddy list, they take it personally—deeply personally.
When paying continuous partial attention, people may place their brains in a heightened state of stress. They no longer have time to reflect, contemplate, or make thoughtful decisions. Instead, they exist in a sense of constant crisis—on alert for a new contact or bit of exciting news or information at any moment. Once people get used to this state, they tend to thrive on the perpetual connectivity. It feeds their egos and sense of self-worth, and it becomes irresistible.
Neuroimaging studies suggest that this sense of self-worth may protect the size of the hippocampus—that horseshoe-shaped brain region in the medial (inward-facing) temporal lobe, which allows us to learn and remember new information. Dr. Sonia Lupien and associates at McGill University studied hippocampal size in healthy younger and older adult volunteers. Measures of self-esteem correlated significantly with hippocampal size, regardless of age. They also found that the more people felt in control of their lives, the larger the hippocampus.
But at some point, the sense of control and self-worth we feel when we maintain continuous partial attention tends to break down—our brains were not built to sustain such monitoring for extended time periods. Eventually, the endless hours of unrelenting digital connectivity can create a unique type of brain strain. Many people who have been working on the Internet for several hours without a break report making frequent errors in their work. Upon signing off, they notice feeling spaced out, fatigued, irritable, and distracted, as if they are in a “digital fog.” This new form of mental stress, what I term techno-brain burnout, is threatening to become an epidemic.
Under this kind of stress, our brains instinctively signal the adrenal gland to secrete cortisol and adrenaline. In the short run, these stress hormones boost energy levels and augment memory, but over time they actually impair cognition, lead to depression, and alter the neural circuitry in the hippocampus, amygdala, and prefrontal cortex—the brain regions that control mood and thought. Chronic and prolonged techno-brain burnout can even reshape the underlying brain structure.
Dr. Sara Mednick and colleagues at Harvard University experimentally induced a mild form of techno-brain burnout in research volunteers and then reduced its impact through power naps and by varying mental assignments. Their study subjects performed a visual task: reporting the direction of three lines in the lower left corner of a computer screen. The volunteers’ scores worsened over time, but their performance improved if the scientists alternated the visual task between the lower left and lower right corners of the screen. This result suggests that brain burnout may be relieved by varying the location of the mental task.
The investigators also found that the performance of study subjects improved if they took a quick twenty- to thirty-minute nap. The neural networks involved in the task were apparently refreshed during rest; however, optimum refreshment and reinvigoration for the task occurred when naps lasted up to sixty minutes—the amount of time it takes for rapid eye movement (REM) sleep to kick in.
>>> the new, improved brain
Young adults have created computer-based social networks through sites like MySpace and Facebook, chat rooms, instant messaging, videoconferencing, and e-mail. Children and teenagers are cyber-savvy too. A fourteen-year-old girl can chat with ten of her friends at one time with the stroke of a computer key and find out all the news about who broke up with whom in seconds—no need for ten phone calls or, heaven forbid, actually waiting to talk in person the next day at school.
These Digital Natives have defined a new culture of communication—no longer dictated by time, place, or even how they look at the moment, unless they’re video chatting or posting photographs of themselves on MySpace. Even baby boomers who still prefer communicating the traditional way—in person—have become adept at e-mail and instant messaging. Both generations—one eager, one often reluctant—are rapidly developing these technological skills and the corresponding neural networks that control them, even if it’s only to survive in the ever-changing professional world.
Almost all Digital Immigrants will eventually become more technologically savvy, which will bridge the brain gap to some extent. And, as the next few decades pass, the workforce will be made up of mostly Digital Natives; thus, the brain gap as we now know it will cease to exist. Of course, people will always live in a world in which they will meet friends, date, have families, go on job interviews, and interact in the traditional face-to-face way. However, those who are most fit in these social skills will have an adaptive advantage. For now, scientific evidence suggests that the consequences of early and prolonged technological exposure of a young brain may in some cases never be reversed, but early brain alterations can be managed, social skills learned and honed, and the brain gap bridged.
Whether we’re Digital Natives or Immigrants, altering our neural networks and synaptic connections through activities such as e-mail, video games, Googling (verb: to use the Google search engine to obtain information on the Internet [from Wikipedia, the free encyclopedia]), or other technological experiences does sharpen some cognitive abilities. We can learn to react more quickly to visual stimuli and improve many forms of attention, particularly the ability to notice images in our peripheral vision. We develop a better ability to sift through large amounts of information rapidly and decide what’s important and what isn’t—our mental filters basically learn how to shift into overdrive. In this way, we are able to cope with the massive amounts of information appearing and disappearing on our mental screens from moment to moment.
Initially, the daily blitz of data that bombards us can create a form of attention deficit, but our brains are able to adapt in a way that promotes rapid information processing. According to Professor Pam Briggs of Northumbria University in the United Kingdom, Web surfers looking for information on health spend two seconds or less on any particular website before moving on to the next one. She found that when study subjects did stop and focus on a particular site, it was because that site contained data relevant to their search, whereas the sites they skipped over contained almost nothing relevant. This finding indicates that our brains learn to swiftly focus attention, analyze information, and almost instantaneously decide on a go or no-go action. Rather than simply catching “digital ADD,” many of us are developing neural circuitry that is customized for rapid and incisive spurts of directed concentration.
While the brains of today’s Digital Natives are wiring up for rapid-fire cybersearches, the neural circuits that control the more traditional learning methods are neglected and gradually diminished. The pathways for human interaction and communication weaken as customary one-on-one people skills atrophy. Our UCLA research team and other scientists have shown that we can intentionally alter brain wiring and reinvigorate some of these dwindling neural pathways, even while the newly evolved technology circuits bring our brains to extraordinary levels of potential.
Although the digital evolution of our brains increases social isolation and diminishes the spontaneity of interpersonal relationships, it may well be increasing our intelligence in the way we currently measure and define IQ. Average IQ scores are steadily rising with the advancing digital culture, and the ability to multitask without errors is improving. Neuroscientist Paul Kearney at Unitec in New Zealand reported that some computer games can actually improve cognitive ability and multitasking skills. He found that volunteers who played the games eight hours each week improved multitasking skills by two and a half times.
Other research, at the University of Rochester, has shown that video game playing can improve peripheral vision as well. As the modern brain continues to evolve, some attention skills improve, mental response times sharpen, and the performance of many brain tasks becomes more efficient. These new brain proficiencies will be even greater in future generations and will alter our current understanding and definition of intelligence.